url: string (length 17 to 172)
text: string (length 44 to 1.14M)
metadata: string (length 820 to 832)
http://stats.stackexchange.com/questions/tagged/pls
Tagged Questions

The pls tag has no wiki summary.

- 0 answers, 9 views — What are guidelines for SmartPLS bootstrapping case size? In SmartPLS, bootstrapping is used to generate the t statistic from which statistical significance can be judged. The two main bootstrapping parameters are case and sample size. Increasing the sample ...
- 1 answer, 68 views — What's the best way to choose data for cross-validation in linear regression settings (PCA, PLS)? We are extracting features from EEG, which is a time-dependent signal. We have signals of 10,000 datapoints over 64 channels, and we extract 10 features per timestamp per channel, so at the end we ...
- 0 answers, 31 views — Predict missing value(s) using existing measurements' data. I have a few measurements (6) with 13 different features (so I have more parameters than measurements). Let's say I would have a new measurement with a few missing values, considering I have existing ...
- 0 answers, 78 views — Understanding PLS output in R. I am running PLS regression in R using the 'pls' package. ...
- 0 answers, 84 views — Looking for an example of multi-response PLS in R [closed]. Could anyone point me to an example of using R's pls package to analyse data with multiple responses? I'm new to R and am having trouble setting up the data frame correctly. Specifically I'm ...
- 0 answers, 96 views — Minimum sample size for PLS SEM. I have a structural model with 6 latent variables and 26 related items (indicators). The maximum number of indicators for a latent variable is 5. How should I calculate the minimum required sample ...
- 1 answer, 43 views — How do you determine the effect of a single predictor variable after a PLS analysis? I am running PLS on a genetic dataset with phenotypic and genotypic information. I have about 1000 binary predictors (X), representing molecular markers, for each individual. My indicator ...
- 0 answers, 67 views — How do you predict the value of a new instance when the training data were normalized? I estimated a Partial Least Squares model where the X matrix had normalized columns. Now I want to predict the value for a new instance (which is a frequency vector summing to one). I assume that if I ...
- 0 answers, 112 views — Multilevel structure in Partial Least Squares Path Modelling [closed]. I have some complex survey data and need to fit a partial least squares path model (PLSPM) with the following structure: M1, M2, M3 are latent variables from the item batteries Batt1, Batt2, Batt3. ...
- 1 answer, 83 views — Under what conditions can a PLS regression model be expressed by a single linear equation? I am confused by two, for me inconsistent, facts: since the PLS regression is expressed by matrices of scores and loadings as $$X=TP^T+E,\quad Y=UQ^T+F,$$ how can it be translated into a single linear equation ...
- 3 answers, 181 views — Dimension reduction techniques [closed]. As I know, PCA and PLS are two famous methods of dimension reduction. Could you please name others (are neural networks useful)?
- 1 answer, 215 views — How to fit data with nonlinear partial least squares in R? I am looking for a way to do nonlinear partial least squares in R or Matlab. I thought kernel PLS was a way to do it, but it is not directly related to nonlinear PLS. Do I have to calculate my own ...
- 1 answer, 257 views — Combining principal component analysis and partial least squares. I know PCA and PLS are considered as alternatives to each other. But I am thinking about a kind of combination of the two in the case of lots of predictors with little variability. In that case, ...
- 1 answer, 185 views — How to find a linear combination of predictors maximizing the correlation between its score and a dependent variable in R. Please correct me if I am wrong, as I am not good at R. I think I can find a linear combination maximizing correlation between predictors and dependent variables by running partial least squares ...
- 0 answers, 84 views — PLS regression is not working on weighted data. I was running PLS regression on data which is weighted, and it gives the following error message: ...
- 1 answer, 2k views — PLS in R with the pls package. I am very new to PLS and I try to understand the output of the R function plsr(). Let us simulate data and run the PLS: ...
- 1 answer, 483 views — PCA, LDA, CCA, and PLS. How are PCA, LDA, CCA, and PLS related? They all seem "spectral" and linear algebraic and very well understood (say 50+ years of theory built around them). They are used for very different things. PCA: ...
- 1 answer, 334 views — Measuring predictive accuracy for multiple dependent variables. In machine learning and in statistics there exist plenty of measures which estimate the performance of a predictive model. For example, classification accuracy, area under ROC curve ... for ...
- 0 answers, 194 views — What, if any, dissimilarity is preserved in partial least squares (PLS)? When we perform a principal components analysis (PCA) on a multivariate data set we are interested in finding orthogonal components that explain maximal variance in the data set. We can form a biplot ...
- 2 answers, 998 views — How to compute the confidence intervals on regression coefficients in PLS? The underlying model of PLS is that a given $n \times m$ matrix $X$ and $n$ vector $y$ are related by $$X = T P' + E,$$ $$y = T q' + f,$$ where $T$ is a latent $n \times k$ matrix, and $E$, ...
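Many of the questions above come down to actually fitting a PLS model and inspecting its scores ($T$), loadings ($P$), and implied regression coefficients. As a neutral point of reference (the questions themselves mostly use R's pls package; this sketch instead uses scikit-learn, and the simulated data and variable names are my own assumptions for illustration):

```python
# Minimal PLS regression sketch (assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                      # 100 samples, 10 predictors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

pls = PLSRegression(n_components=2).fit(X, y)       # 2 latent components

T = pls.transform(X)                                # scores T (one row per sample)
P = pls.x_loadings_                                 # loadings P (one row per predictor)
print(T.shape, P.shape)                             # (100, 2) (10, 2)
print(pls.coef_.ravel())                            # the equivalent single linear equation in X
```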
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.911989688873291, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/wavelength+frequency
# Tagged Questions

### Low Frequency Standing Waves in Cylindrical Structures [closed] (0 answers, 9 views)
What would be the frequency of the fundamental vibration in a 50 meter tall cylindrical structure if it were closed at the bottom? L = wavelength/4 for closed-end cylinders, so the wavelength I ...

### Relationship between frequency and wavelength (2 answers, 1k views)
I am currently writing up a report for science class on the relationship between frequency and wavelength, and so I was wondering if anyone knew where I could find published results (literature values) ...

### Sum and Difference Frequencies - Amplitude Modulation (0 answers, 70 views)
I understand that while transmitting an envelope through a carrier wave to the receiver, an upper sideband and a lower sideband form adjacent to the carrier wave. The sidebands are apparently an ...

### Frequency of the sound when blowing in a bottle (2 answers, 867 views)
I'm sure you have tried sometime to make a sound by blowing in an empty bottle. Of course, the tone/frequency of the sound changes if the bottle changes its shape, volume, etc. I am interested in ...

### How to reconstruct information from a graph of an oscillation? [closed] (1 answer, 151 views)
We are given a graph of the position of a wave (amplitude). How can we calculate the wavelength, frequency and the maximum speed of a particle attached to that wave? We have Speed = wavelength ...

### How do you superimpose two or more signals to occupy a fixed area of space with the resultant summed wave? (0 answers, 36 views)
Is it possible to superimpose two or more signals, all sent from different directions, as a standing wave with the resulting summed wave occupying a fixed area of space that is also a complex area? Do ...

### Can the equation $v=\lambda f$ be made true even for non-sinusoidal waves? (2 answers, 3k views)
The known relation between the speed of a propagating wave, the wavelength of the wave, and its frequency is $$v=\lambda f$$ which is always true for any periodic sinusoidal wave. Now consider: ...

### Does all light travel at the same speed? (2 answers, 396 views)
Is there any speed difference between blue and red light, or do they travel at the same speed?

### Why does wavelength change as light enters a different medium? (2 answers, 12k views)
When light waves enter a medium of higher refractive index than the previous one, why is it that its wavelength decreases while its frequency has to stay the same?

### What determines color — wavelength or frequency? (6 answers, 4k views)
What determines the color of light -- is it the wavelength of the light or the frequency? (i.e. If you put light through a medium other than air, in order to keep its color the same, which one would ...

### Will a photon emitted from something moving quickly have a shorter wavelength? (1 answer, 111 views)
If a photon is emitted from a light source moving at any speed, the photon will nonetheless always move at c (assuming it is emitted in a vacuum). If the speed of a photon's emitter cannot influence ...
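The first question in this list is a direct application of the closed-at-one-end tube formula: the fundamental has wavelength $\lambda = 4L$, so $f = v/\lambda$. A rough check (the speed of sound is my assumption; the question does not specify a temperature or medium):

```python
# Fundamental of a tube closed at one end (rough estimate; v is an assumed value).
v = 343.0            # m/s, speed of sound in air at about 20 °C (assumption)
L = 50.0             # m, cylinder closed at the bottom, open at the top
wavelength = 4 * L   # closed-at-one-end fundamental: lambda = 4L
print(wavelength, v / wavelength)   # 200 m, about 1.7 Hz
```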
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320282340049744, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/5296/how-can-i-use-eulers-totient-and-the-chinese-remainder-theorem-for-modular-expon/5300
# How can I use Euler's totient and the Chinese remainder theorem for modular exponentiation?

I'm trying to implement modular exponentiation in Java using Lagrange and the Chinese remainder theorem.

The example we've been given is: Let $N = 55 = 5 · 11$ and suppose we want to compute $27^{37} \pmod N$. He doesn't give us the answer, but says: The most efficient way to do it is using Lagrange's theorem, a few multiplications modulo 5 and 11, and CRT to combine both results.

Using Lagrange / Euler's totient I get $\varphi(N) = 40$, which it seems I'm supposed to use to calculate the congruences needed for putting into the Chinese remainder theorem. I know I can calculate the congruences using the Extended Euclidean algorithm, but the answers need to be reduced or the run time will still be unfeasible (maybe not in this case, but for the 1024-bit numbers I'm working with, this is a huge problem). I know they can be reduced, from a document I found while researching this, which states: $$a^k \equiv a^{k \bmod \varphi(n)} \pmod n.$$ I've tried and tried and tried to follow how he does the reduction but I just don't get it. He also doesn't mention what $m$ is when he says $k = m · \varphi(n) + k'$.

As you can probably gather, my math is not so hot, so if possible maybe give a "for dummies" answer. The example given – Let $N = 55 = 5 · 11$ and suppose we want to compute $27^{37} \pmod N$ – is not homework, so if anyone could step me through it, in particular the reductions to get to the simplified Chinese remainder theorem congruences, I would be VERY grateful. I originally placed this question on Stack Overflow.

-

After reading your question, I honestly have no idea what you are asking. Perhaps the best thing you could do for now would be to simplify. Remove all the code and just post some simple mathematical equations on what you are trying to do. Also, please post a link to the StackOverflow question so we can keep track of both. – mikeazo♦ Nov 8 '12 at 12:29

– mikeazo♦ Nov 8 '12 at 12:33

Hey mikeazo, I've edited it, hopefully it's much clearer now. The link you posted for the CRT is certainly helpful, but again, I have trouble following the math near the end of the answer. – Saf Nov 8 '12 at 13:53

## 1 Answer

To step through it simply:

• Step 1: we first compute $M^e \bmod p$ (where $p$ is one of the factors of the modulus). By Fermat's Little Theorem, this is the same as $M^{e \bmod (p-1)} \bmod p$ (and that's where the real savings are), and so in your example ($p=5$), we get: $M^e \bmod p = 27^{37 \bmod 4} \bmod 5 = 2^1 \bmod 5 = 2$

• Step 2: we do the same for the other prime $q$: $M^e \bmod q = 27^{37 \bmod 10} \bmod 11 = 5^7 \bmod 11 = 3$

• Step 3: using the Chinese Remainder theorem, we reconstruct $M^e \bmod pq$ from $M^e \bmod p$ and $M^e \bmod q$:

$C_p = M^e \bmod p = 2$

$C_q = M^e \bmod q = 3$

$M^e \bmod pq = C_q + q \cdot (q^{-1} (C_p - C_q) \bmod p) = 3 + 11 \cdot (1 \cdot (2-3) \bmod 5) = 3 + 11 \cdot 4 = 47$

-

Wow, thank you so much, that makes so much sense. I don't understand, however, how (1 * (2 - 3) mod 5) is 4. 2-3 is -1, multiply by 1 is still -1, mod 5 of -1 is -1? – Saf Nov 8 '12 at 16:41

Well, that depends on how the mod operator is interpreted when given negative numbers. The traditional mathematical approach (which the above formula uses) is to say 'the value of $a \bmod b$ is the value $0 \le c < b$ such that $a \equiv c \pmod b$'. With this interpretation, $-1 \bmod 5$ can't be -1, as -1 is not in the range $[0, 5)$.
Now, some computer languages don't use this interpretation; those languages were typically not designed by mathematicians. – poncho Nov 8 '12 at 19:01

To do the computations, you use whatever is most efficient: -1 or 4. For instance, I'd compute the $5^7 \bmod 11$ above as $5^{-3} \bmod 11$ instead. (As $5^3 = 4 \bmod 11$ and $4 \cdot 3 = 1 \bmod 11$, $4^{-1} = 3 \bmod 11$.) For a mathematician, there's an equivalence class; the label does not really matter, although common practice is indeed to use non-negative numbers. – bob Nov 8 '12 at 19:15

@bob: Yeah, you're correct; the most mathematical way of thinking about it is equivalence classes. I should have phrased my explanation better. – poncho Nov 8 '12 at 19:24

Poncho, if you also have a Stack Overflow account and would like to maybe answer by linking to this page, I'll mark that the answer too. – Saf Nov 8 '12 at 21:43
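Putting poncho's three steps into code makes the bookkeeping explicit. The sketch below is in Python rather than the asker's Java, purely to keep it short; `pow(a, b, m)` handles the modular exponentiation, and the recombination follows the formula in Step 3. The same structure works unchanged for 1024-bit moduli.

```python
def crt_modexp(M, e, p, q):
    """Compute M^e mod (p*q): reduce exponents via Fermat, then recombine with the CRT."""
    # Steps 1 and 2: work modulo each prime, reducing the exponent mod p-1 and q-1.
    Cp = pow(M % p, e % (p - 1), p)
    Cq = pow(M % q, e % (q - 1), q)
    # Step 3: recombine.  q_inv is q^{-1} mod p; Python's % already returns a
    # representative in [0, p), which is the convention the formula assumes.
    q_inv = pow(q, -1, p)              # modular inverse (Python 3.8+)
    return Cq + q * ((q_inv * (Cp - Cq)) % p)

print(crt_modexp(27, 37, 5, 11))       # 47, matching the worked example above
```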
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519479274749756, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=208716
Physics Forums

First order linear differential equation

1) Solve y' + (1/t) y = t^3.

Integrating factor = exp ∫(1/t)dt = exp(ln|t| + k) = exp(ln|t|) (take constant of integration k=0) = |t|

... and then I've found that the general solution is: y = (1/|t|) [c + ∫(from 0 to t) |s| s^3 ds]

Is this the correct final answer and is there any way to simplify this? Can I get rid of all the absolute values?

2) Solve the initial value problem ty'+2y=4t^2, y(1)=2.

Integrating factor = t^2 ... General solution is y = t^2 + c/t^2. Put y(1)=2 => c=1. So the solution to the initial value problem is y = t^2 + 1/t^2, t>0. Note that the function y = t^2 + 1/t^2, t<0 is NOT part of the solution of this initial value problem.

==============================

I have no idea (red part) why you have to put the restriction t>0, and why is the part for t<0 definitely NOT part of the solution? What's the problem here? This example is driving me crazy... I am a beginner in this subject, and I hope that someone would be nice enough to explain these. Thanks!

Quote by kingwinner: the initial value problem ty'+2y=4t^2, y(1)=2. Integrating factor = t^2 ... General solution is y = t^2 + c/t^2. Put y(1)=2 => c=1. So the solution to the initial value problem is y = t^2 + 1/t^2, t>0. Note that the function y = t^2 + 1/t^2, t<0 is NOT part of the solution of this initial value problem. ============================== I have no idea (red part) why you have to put the restriction t>0, and why is the part for t<0 definitely NOT part of the solution? What's the problem here? This example is driving me crazy... I am a beginner in this subject, and I hope that someone would be nice enough to explain these. Thanks!

As to this one, your integrating factor is correct, but your simplification of its original form is what hides the problem from you regarding the t > 0 restriction. Look at the exponential form of the integrating factor and notice that it has an ln|t| in it. This has a discontinuity at t = 0, and since your initial value is assigned at t = 1, we can only use the interval containing t = 1 (which is precisely the interval t > 0). In other words, that is the largest allowable interval upon which you can define a continuous integrating factor which contains the initial value.

Quote by Mathdope: As to this one, your integrating factor is correct, but your simplification of its original form is what hides the problem from you regarding the t > 0 restriction. Look at the exponential form of the integrating factor and notice that it has an ln|t| in it. This has a discontinuity at t = 0, and since your initial value is assigned at t = 1, we can only use the interval containing t = 1 (which is precisely the interval t > 0). In other words, that is the largest allowable interval upon which you can define a continuous integrating factor which contains the initial value.

2) Sorry, I still don't fully understand what you mean... y = t^2 + 1/t^2, t<0 and y = t^2 + 1/t^2, t>0 are just the left and right branches of a single function y = t^2 + 1/t^2, so shouldn't y = t^2 + 1/t^2 be the full and complete answer? I don't get what's the problem with the t<0 part at all...

Recognitions: Gold Member, Science Advisor, Staff Emeritus

You don't see a problem at t = 0?

Quote by HallsofIvy: You don't see a problem at t = 0?
Well, at t=0, the solution is undefined, but why is there a problem for t<0?

You've used ln(t) to derive your solution, but ln(t) is not defined for $t \leq 0$. The fact that you know ln(t) is continuous in its domain and that there is a singularity at t=0 is a bit of a tip-off.

Recognitions: Gold Member, Science Advisor, Staff Emeritus

There isn't a problem with t< 0 but you cannot extend a solution across t= 0. Typically, for a first order differential equation, you also have an "initial condition" which here must be given at a non-zero value of t. If your initial condition is $y(t_0)= y_0$, with $t_0> 0$, then your solution is only defined for t> 0. If $y(t_0)= y_0$, with $t_0< 0$, then your solution is only defined for t< 0.

Quote by HallsofIvy: There isn't a problem with t< 0 but you cannot extend a solution across t= 0. Typically, for a first order differential equation, you also have an "initial condition" which here must be given at a non-zero value of t. If your initial condition is $y(t_0)= y_0$, with $t_0> 0$, then your solution is only defined for t> 0. If $y(t_0)= y_0$, with $t_0< 0$, then your solution is only defined for t< 0.

2) Are there any intuitive/theoretical reasons for choosing only one branch, but not the other? As far as I know, in this case, the left and right branches are defined by the SAME function, so why should we even consider rejecting part of it??

There are theoretical reasons that derive from the existence and uniqueness theorem regarding solutions to ODEs. Intuitive? Not that I can see. The theorem states that in a first order ODE of the form in this example, the derivative (y') must be a continuous function of t and y on some interval containing the initial value (t = 1 in this case), i.e. y' = f(t,y) where f(t,y) is continuous in some interval containing t = 1. If you put your equation in that form (i.e. solve it for y') you have a problem at t = 0. Thus any solution is valid on an interval a < t < b only if (1) t = 1 is in the interval and (2) f(t,y) is continuous on the interval. t > 0 is that interval.

Thank you! It really helps! How about Q1? I am stuck at the point y = (1/|t|) [c + ∫(from 0 to t) |s| s^3 ds]; how can I proceed from here? The absolute values look unpleasant........

Quote by kingwinner: 1) Solve y' + (1/t) y = t^3. Integrating factor = exp ∫(1/t)dt = exp(ln|t| + k) = exp(ln|t|) (take constant of integration k=0) = |t| ... and then I've found that the general solution is: y = (1/|t|) [c + ∫(from 0 to t) |s| s^3 ds] Is this the correct final answer and is there any way to simplify this? Can I get rid of all the absolute values?

OK. First, since you have a similar problem at t = 0, a general solution will have to be on either t > 0 or t < 0. Notice what happens in each case with your integrating factor |t|. If t > 0, |t| = t. If t < 0, |t| = -t. Do each general solution separately, and then see if they can be "recombined" into the one that you gave above.

Quote by Mathdope: OK. First, since you have a similar problem at t = 0, a general solution will have to be on either t > 0 or t < 0. Notice what happens in each case with your integrating factor |t|. If t > 0, |t| = t. If t < 0, |t| = -t. Do each general solution separately, and then see if they can be "recombined" into the one that you gave above.

How can I deal with the |s|? Is there any relationship between s and t? Does t<0 imply s<0, too?

s goes from 0 to t in your integral. If t > 0, |s| = s.
If t < 0, you can reverse the order of the integral so that s goes from t to 0 and replace |s| with -s (notice the effect of both of these will produce something nice). By the way, is the answer you gave what you found, or is it the known solution?

Recognitions: Gold Member, Science Advisor, Staff Emeritus

Quote by kingwinner: 2) Are there any intuitive/theoretical reasons for choosing only one branch, but not the other? As far as I know, in this case, the left and right branches are defined by the SAME function, so why should we even consider rejecting part of it??

In the problem you gave, there is a very practical reason, neither "intuitive" nor "theoretical"! You were asked to solve the equation with the initial condition y(1)=2. The domain of your solution must include t= 1, so you must choose the "branch" that includes t= 1, the function defined on the positive numbers. You cannot then extend your solution, in a unique way, to the negative numbers because you cannot pass 0. You could define a function having the correct solution (with y(1)= 2) for the positive numbers and the given formula with any value for the undetermined constant for the negative numbers, and have an infinite number of solutions to the equation satisfying y(1)= 2. By the way, a "function" is NOT a "formula". The same formula is used on the left and right branches, but not the same function, because they have different domains.

Quote by Mathdope: s goes from 0 to t in your integral. If t > 0, |s| = s. If t < 0, you can reverse the order of the integral so that s goes from t to 0 and replace |s| with -s (notice the effect of both of these will produce something nice).

Will the answer (general solution) be the same for both the cases t>0 and t<0? What if they aren't the same?

Quote by Mathdope: By the way, is the answer you gave what you found, or is it the known solution?

The answer I gave is what I found.

Quote by kingwinner: Will the answer (general solution) be the same for both the cases t>0 and t<0? What if they aren't the same?

They aren't necessarily the same. If they weren't the same you would define it in branches, i.e. for t > 0 f(t) = ..., for t < 0 f(t) = ... etc.
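For anyone who wants to check the second problem mechanically: the IVP ty' + 2y = 4t^2, y(1) = 2 can be handed to a computer algebra system. The sketch below uses SymPy (my own choice, not something used in the thread); declaring t positive encodes exactly the t > 0 interval containing the initial condition that the discussion above is about.

```python
import sympy as sp

t = sp.symbols('t', positive=True)   # restrict to the interval t > 0 containing t = 1
y = sp.Function('y')

ode = sp.Eq(t * y(t).diff(t) + 2 * y(t), 4 * t**2)
sol = sp.dsolve(ode, y(t), ics={y(1): 2})
print(sol)                           # Eq(y(t), t**2 + 1/t**2)
```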
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311522245407104, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/173/is-acceleration-an-absolute-quantity/183
# Is acceleration an absolute quantity?

I would like to know if acceleration is an absolute quantity, and if so why?

-

## 4 Answers

In standard Newtonian mechanics, acceleration is indeed considered to be an absolute quantity, in that it is not determined relative to any inertial frame of reference (constant velocity). This fact follows directly from the principle that forces are the same everywhere, independent of observer. Of course, if you're doing classical mechanics in an accelerating reference frame, then you introduce a fictitious force, and accelerations are not absolute with respect to an "inertial frame" or other accelerating reference frames - though this is less often considered, perhaps. Note also that the same statement applies to Einstein's Special Relativity. (I don't really understand enough General Relativity to comment, but I suspect it says no, and instead considers other more fundamental things, such as space-time geodesics.)

-

Down-vote, why? This is a correct answer. – Noldorin Nov 3 '10 at 21:41

+1 agreed, hopefully you get enough upvotes to push this up to/near the top ;-) – David Zaslavsky♦ Nov 3 '10 at 22:18

By the way, in GR it depends on how you define acceleration. If you use the definition of rate of change of velocity relative to an inertial observer, no, acceleration is not absolute. But if you use a local definition e.g. based on an accelerometer, it is absolute, but you get the odd result that an observer at constant coordinates in a gravitational field (sitting on the surface of the Earth, for example) is in fact accelerating. – David Zaslavsky♦ Nov 3 '10 at 22:21

Yes, this is the correct answer. Can you please check if my proof below is correct? – tsudot Nov 3 '10 at 23:20

@David: Thanks for the support! And cheers for the clarification on GR; it jogs my memory about the importance of the distinction between local and global variables and symmetries. – Noldorin Nov 3 '10 at 23:40

Absolutely not. An observer in free fall and an observer in zero gravity both experience and observe no acceleration in their frame of relevance. One, however, is actually in an accelerating frame of reference.

-

While not wrong of course, this answer brushes over too many subtleties I think. – Noldorin Nov 3 '10 at 23:47

Why? Acceleration is only constant in inertial systems, which is the same as saying that acceleration is only constant when the system has no other accelerations. Also, what about systems with jerk? Surely Newtonian mechanics must account for those too? – Sklivvz♦ Nov 7 '10 at 16:43

I've finally figured it out. First, let's define precisely what it means for some quantity to be absolute or relative. In the context in question, it has to do with whether a quantity is absolute (that is, has the same value) or relative (that is, has different values) when measured by two inertial observers moving with respect to one another. Of course, first we need to define what an inertial observer is: it's an observer for which Newton's laws are applicable without having to resort to adding fictitious forces.

Ok, so now we have two observers, Alice and Bob, both of which are inertial. They both observe the motion of some object. Let the index 1 correspond to quantities measured in A's reference frame and the index 2 correspond to quantities measured in B's reference frame. The position of the object is clearly a relative concept, since r₂ = r₁ + u t (where u is the velocity of Bob with respect to Alice, and is constant since they're both inertial observers).
Note that the time, t, is the same for both observers, as it must be according to Newtonian Mechanics. The object position is a relative concept because r₂ ≠ r₁. Now, take the time-derivative of both sides and we get v₂ = v₁ + u; that is, the velocity of the object with respect to one observer is different than the velocity of the same object with respect to the other observer. Hence, velocity is a relative quantity in Newtonian Mechanics. Next, take the time-derivative of both sides once again, and we obtain a₂ = a₁ (since u is constant). Thus, the acceleration of the object is the same in both reference frames. Acceleration, therefore, is absolute in Newtonian Mechanics. When we take into account the theory of relativity, then time flows at different rates for different inertial observers and the result above for the acceleration is no longer true.

-

Most of what you have said is quite true. (Perhaps a little indirect, but no matter.) Your idea of relative and absolute quantities seems sound. A few things to note: in special relativity, I believe acceleration is still absolute; check the Lorentz transformations. David Zaslavsky gave an overview of the case for general relativity. Interestingly, general relativity is actually required in order to rigorously define what an inertial frame is. Perhaps David can clarify this too. :) – Noldorin Nov 3 '10 at 23:47

@Noldorin: I think you're right, acceleration is absolute in special relativity. I tried to check the Lorentz transformations but doing it properly involves more math than I have time for at the moment ;-) But here's my intuition: every object can measure its own acceleration (with an accelerometer) and can thus characterize its motion relative to a locally inertial frame. Since different inertial observers in SR will agree on what constitutes an inertial frame for the object, they will also agree that its own measurement of its acceleration corresponds to its acceleration as they observe it. – David Zaslavsky♦ Nov 4 '10 at 0:27

@David Zaslavsky, I believe your intuition is correct - a constantly accelerating object in SR will trace out a hyperbola (asymptotically approaching $c$), so acceleration in the sense of $\partial^2 x/\partial t^2$ is dependent on the reference frame. (In particular, if you choose a frame such that the object is almost at $c$, it can't be accelerating very fast.) However, it's easy enough to define "proper acceleration" (acceleration in the object's own reference frame), which is of course absolute. – Nathaniel Jan 20 '12 at 11:02

Acceleration will be the same in any two frames that are moving with constant speed with respect to each other (and may also be rotated and translated). However, if you consider two frames that have relative rotation or acceleration, the acceleration of an object will be different in the two frames.

-
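The Galilean argument in the third answer is easy to check symbolically: boost the position by a constant relative velocity u and differentiate twice. A small SymPy sketch (my own illustration, not taken from the answers):

```python
import sympy as sp

t, u = sp.symbols('t u')        # u: constant velocity of Bob relative to Alice
r1 = sp.Function('r1')(t)       # position of the object as measured by Alice

r2 = r1 + u * t                 # Galilean change of frame
print(sp.diff(r2, t) - sp.diff(r1, t))        # u  -> velocities differ (relative)
print(sp.diff(r2, t, 2) - sp.diff(r1, t, 2))  # 0  -> accelerations agree (absolute)
```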
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324698448181152, "perplexity_flag": "head"}
http://lamington.wordpress.com/tag/hyperbolic-groups/
# Tag Archive You are currently browsing the tag archive for the ‘hyperbolic groups’ tag. ## Random groups contain surface subgroups April 3, 2013 in Ergodic Theory, Groups, Surfaces | Tags: ergodic theory, Gromov's surface subgroup question, hyperbolic groups, Random groups, surface subgroups | by Danny Calegari | Leave a comment A few weeks ago, Ian Agol, Vlad Markovic, Ursula Hamenstadt and I organized a “hot topics” workshop at MSRI with the title Surface subgroups and cube complexes. The conference was pretty well attended, and (I believe) was a big success; the organizers clearly deserve a great deal of credit. The talks were excellent, and touched on a wide range of subjects, and to those of us who are mid-career or older it was a bit shocking to see how quickly the landscape of low-dimensional geometry/topology and geometric group theory has been transformed by the recent breakthrough work of (Kahn-Markovic-Haglund-Wise-Groves-Manning-etc.-) Agol. Incidentally, when I first started as a graduate student, I had a vague sense that I had somehow “missed the boat” — all the exciting developments in geometry due to Thurston, Sullivan, Gromov, Freedman, Donaldson, Eliashberg etc. had taken place 10-20 years earlier, and the subject now seemed to be a matter of fleshing out the consequences of these big breakthroughs. 20 years and several revolutions later, I no longer feel this way. (Another slightly shocking aspect of the workshop was for me to realize that I am older or about as old as 75% of the speakers . . .) The rationale for the workshop (which I had some hand in drafting, and therefore feel comfortable quoting here) was the following: Recently there has been substantial progress in our understanding of the related questions of which hyperbolic groups are cubulated on the one hand, and which contain a surface subgroup on the other. The most spectacular combination of these two ideas has been in 3-manifold topology, which has seen the resolution of many long-standing conjectures. In turn, the resolution of these conjectures has led to a new point of view in geometric group theory, and the introduction of powerful new tools and structures. The goal of this conference will be to explore the further potential of these new tools and perspectives, and to encourage communication between researchers working in various related fields. I have blogged a bit about cubulated groups and surface subgroups previously, and I even began this blog (almost 4 years ago now) initially with the idea of chronicling my efforts to attack Gromov’s surface subgroup question. This question asks the following: Gromov’s Surface Subgroup Question: Does every one-ended hyperbolic group contain a subgroup which is isomorphic to the fundamental group of a closed surface of genus at least 2? The restriction to one-ended groups is just meant to rule out silly examples, like finite or virtually cyclic groups (i.e. “elementary” hyperbolic groups), or free products of simpler hyperbolic groups. Asking for the genus of the closed surface to be at least 2 rules out the sphere (whose fundamental group is trivial) and the torus (whose fundamental group $\mathbb{Z}^2$ cannot be a subgroup of a hyperbolic group). 
It is the purpose of this blog post to say that Alden Walker and I have managed to show that Gromov’s question has a positive answer for “most” hyperbolic groups; more precisely, we show that a random group (in the sense of Gromov) contains a surface subgroup (in fact, many surface subgroups) with probability going to 1 as a certain natural parameter (the “length” $n$ of the random relators) goes to infinity. (update April 8: the preprint is available from the arXiv here.) Read the rest of this entry » ## Agol’s Virtual Haken Theorem (part 2): Agol-Groves-Manning strike back March 27, 2012 in Groups | Tags: height, hyperbolic Dehn surgery, hyperbolic groups, malnormal groups, quasiconvex subgroup, subgroup separation, virtually special cube complexes | by Danny Calegari | 14 comments Today Jason Manning gave a talk on a vital ingredient in the proof of Agol’s theorem, which is a result in geometric group theory. The theorem is a joint project of Agol-Groves-Manning, and generalizes some earlier work they did a few years ago. Jason referred to the main theorem during his talk as the “Goal Theorem” (I guess it was the goal of his lecture), but I’m going to call it the Weak Separation Theorem, since that is a somewhat more descriptive name. The statement of the theorem is as follows. Weak Separation Theorem (Agol-Groves-Manning): Let G be a hyperbolic group, let H be a subgroup of G which is quasiconvex, and isomorphic to the fundamental group of a virtually special NPC cube complex, and let g be an element of G which is not contained in H. Then there is a surjection $\phi:G \to \bar{G}$ so that 1. $\bar{G}$ is hyperbolic; 2. $\phi(H)$ is finite; and 3. $\phi(g)$ is not contained in $\phi(H)$. In the remainder of this post I will try to explain the proof of this theorem, to the extent that I understand it. Basically, this amounts to my summarizing Manning’s talk (or the part of it that I managed to get down in my notes); again, any errors, foolishness, silly blog post titles etc. are due to me. Read the rest of this entry » ## Polygonal words November 15, 2009 in Groups, Surfaces | Tags: double of free group, ends, Henry Wilton, hyperbolic groups, roundoff trick, Sang-hyun Kim, scl, Stallings theorem on ends, surface subgroup | by Danny Calegari | 6 comments Last Friday, Henry Wilton gave a talk at Caltech about his recent joint work with Sang-hyun Kim on polygonal words in free groups. Their work is motivated by the following well-known question of Gromov: Question(Gromov): Let $G$ be a one-ended word-hyperbolic group. Does $G$ contain a subgroup isomorphic to the fundamental group of a closed hyperbolic surface? Let me briefly say what “one-ended” and “word-hyperbolic” mean. A group is said to be word-hyperbolic if it acts properly and cocompactly by isometries on a proper $\delta$-hyperbolic path metric space — i.e. a path metric space in which there is a constant $\delta$ so that geodesic triangles in the metric space have the property that each side of the triangle is contained in the $\delta$-neighborhood of the union of the other two sides (colloquially, triangles are thin). This condition distills the essence of negative curvature in the large, and was shown by Gromov to be equivalent to several other conditions (eg. that the group satisfies a linear isoperimetric inequality; that every ultralimit of the group is an $\mathbb{R}$-tree). 
Free groups are hyperbolic; fundamental groups of closed manifolds with negative sectional curvature (e.g. surfaces with negative Euler characteristic) are word-hyperbolic; "random" groups are hyperbolic — and so on. In fact, it is an open question whether a group $G$ that admits a finite $K(G,1)$ is word hyperbolic if and only if it does not contain a copy of a Baumslag-Solitar group $BS(m,n):=\langle x,y \; | \; x^{-1}y^{m}x = y^n \rangle$ for $m,n \ne 0$ (note that the group $\mathbb{Z}\oplus \mathbb{Z}$ is the special case $m=n=1$); in any case, this is a very good heuristic for identifying the word-hyperbolic groups one typically meets in examples. If $G$ is a finitely generated group, the ends of $G$ really mean the ends (as defined by Freudenthal) of the Cayley graph of $G$ with respect to some finite generating set. Given a proper topological space $X$, the set of compact subsets of $X$ gives rise to an inverse system of inclusions, where $X-K'$ includes into $X-K$ whenever $K$ is a subset of $K'$. This inverse system defines an inverse system of maps of discrete spaces $\pi_0(X-K') \to \pi_0(X-K)$, and the inverse limit of this system is a compact, totally disconnected space $\mathcal{E}(X)$, called the space of ends of $X$. A proper topological space is canonically compactified by its set of ends; in fact, the compactification $X \cup \mathcal{E}(X)$ is the "biggest" compactification of $X$ by a totally disconnected space, in the sense that for any other compactification $X \subset Y$ where $Y-X$ is zero dimensional, there is a continuous map $X \cup \mathcal{E}(X) \to Y$ which is the identity on $X$. For a word-hyperbolic group $G$, the Cayley graph can be compactified by adding the ideal boundary $\partial_\infty G$, but this is typically not totally disconnected. In this case, the ends of $G$ can be recovered as the components of $\partial_\infty G$. A group $G$ acts on its own ends $\mathcal{E}(G)$. An elementary argument shows that the cardinality of $\mathcal{E}(G)$ is one of $0,1,2,\infty$ (if a compact set $V$ disconnects $e_1,e_2,e_3$ then infinitely many translates of $V$ converging to $e_1$ separate $e_3$ from infinitely many other ends accumulating on $e_1$). A group has no ends if and only if it is finite. Stallings famously showed that a (finitely generated) group has at least $2$ ends if and only if it admits a nontrivial description as an HNN extension or amalgamated free product over a finite group. One version of the argument proceeds more or less as follows, at least when $G$ is finitely presented. Let $M$ be an $n$-dimensional Riemannian manifold with fundamental group $G$, and let $\tilde{M}$ denote the universal cover. We can identify the ends of $G$ with the ends of $\tilde{M}$. Let $V$ be a least-area ($n-1$)-dimensional hypersurface in $\tilde{M}$ amongst all hypersurfaces that separate some end from some other (here the hypothesis that $G$ has at least two ends is used). Then every translate of $V$ by an element of $G$ is either equal to $V$ or disjoint from it, or else one could use the Meeks-Yau "roundoff trick" to find a new $V'$ with strictly lower area than $V$. The translates of $V$ decompose $\tilde{M}$ into pieces, and one can build a tree $T$ whose vertices correspond to components of $\tilde{M} - G\cdot V$, and whose edges correspond to the translates $G\cdot V$.
The group $G$ acts on this tree, with finite edge stabilizers (by the compactness of $V$), exhibiting $G$ either as an HNN extension or an amalgamated product over the edge stabilizers. Note that the special case $|\mathcal{E}(G)|=2$ occurs if and only if $G$ has a finite index subgroup which is isomorphic to $\mathbb{Z}$. Free groups and virtually free groups do not contain closed surface subgroups; Gromov's question more or less asks whether these are the only examples of word-hyperbolic groups with this property. Kim and Wilton study Gromov's question in a very, very concrete case, namely the case that $G$ is the double of a free group $F$ along a word $w$; i.e. $G = F *_{\langle w \rangle } F$ (hereafter denoted $D(w)$). Such groups are known to be one-ended if and only if $w$ is not contained in a proper free factor of $F$ (it is clear that this condition is necessary), and to be hyperbolic if and only if $w$ is not a proper power, by a result of Bestvina-Feighn. To see that this condition is necessary, observe that the double $\mathbb{Z} *_{p\mathbb{Z}} \mathbb{Z}$ is isomorphic to the fundamental group of a Seifert fiber space, with base space a disk with two orbifold points of order $p$; such a group contains a $\mathbb{Z}\oplus \mathbb{Z}$. One might think that such groups are too simple to give an insight into Gromov's question. However, these groups (or perhaps the slightly larger class of graphs of free groups with cyclic edge groups) are a critical case for at least two reasons: 1. The "smaller" a group is, the less room there is inside it for a surface group; thus the "simplest" groups should have the best chance of being a counterexample to Gromov's question. 2. If $G$ is word-hyperbolic and one-ended, one can try to find a surface subgroup by first looking for a graph of free groups $H$ in $G$, and then looking for a surface group in $H$. Since a closed surface group is itself a graph of free groups, one cannot "miss" any surface groups this way. Not too long ago, I found an interesting construction of surface groups in certain graphs of free groups with cyclic edge groups. In fact, I showed that every nontrivial element of $H_2(G;\mathbb{Q})$ in such a group is virtually represented by a sum of surface subgroups. Such surface subgroups are obtained by finding maps of surface groups into $G$ which minimize the Gromov norm in their (projective) homology class. I think it is useful to extend Gromov's question by making the following Conjecture: Let $G$ be a word-hyperbolic group, and let $\alpha \in H_2(G;\mathbb{Q})$ be nonzero. Then some multiple of $\alpha$ is represented by a norm-minimizing surface (which is necessarily $\pi_1$-injective). Note that this conjecture does not generalize to wider classes of groups. There are even examples of $\text{CAT}(0)$ groups $G$ with nonzero homology classes $\alpha \in H_2(G;\mathbb{Q})$ with positive, rational Gromov norm, for which there are no $\pi_1$-injective surfaces representing a multiple of $\alpha$ at all. It is time to define polygonal words in free groups. Definition: Let $F$ be free. Let $X$ be a wedge of circles whose edges are free generators for $F$. A cyclically reduced word $w$ in these generators is polygonal if there exists a van-Kampen graph $\Gamma$ on a surface $S$ such that: 1. every complementary region is a disk whose boundary is a nontrivial (possibly negative) power of $w$; 2. the (labelled) graph $\Gamma$ immerses in $X$ in a label preserving way; 3.
the Euler characteristic of $S$ is strictly less than the number of disks. The last condition rules out trivial examples; for example, the double of a single disk whose boundary is labeled by $w^n$. Notice that it is very important to allow both positive and negative powers of $w$ as boundaries of complementary regions. In fact, if $w$ is not in the commutator subgroup, then the sum of the powers over all complementary regions is necessarily zero (and if $w$ is in the commutator subgroup, then $D(w)$ has nontrivial $H_2$, so one already knows that there is a surface subgroup). Condition 2. means that at each vertex of $\Gamma$, there is at most one oriented label corresponding to each generator of $F$ or its inverse. This is really the crucial geometric property. If $\Gamma,S$ is a van-Kampen graph as above, then a theorem of Marshall Hall implies that there is a finite cover of $X$ into which $\Gamma$ embeds (in fact, this observation underlies Stallings's work on foldings of graphs). If we build a $2$-complex $Y$ with $\pi_1(Y)=D(w)$ by attaching two ends of a cylinder to suitable loops in two copies of $X$, then a tubular neighborhood of $\Gamma$ in $S$ (i.e. what is sometimes called a "fatgraph") embeds in a finite cover $\tilde{Y}$ of $Y$, and its double — a surface of strictly negative Euler characteristic — embeds as a closed surface in $\tilde{Y}$, and is therefore $\pi_1$-injective. Hence if $w$ is polygonal, $D(w)$ contains a surface subgroup. Not every word is polygonal. Kim-Wilton discuss some interesting examples in their paper, including: 1. suppose $w$ is a cyclically reduced product of proper powers of the generators or their inverses (e.g. a word like $a^3b^7a^{-2}c^{13}$ but not a word like $a^3bc^{-1}$); then $w$ is polygonal; 2. a word of the form $\prod_i a^{p_{2i-1}}(a^{p_{2i}})^b$ is polygonal if $|p_i|>1$ for each $i$; 3. the word $abab^2ab^3$ is not polygonal. To see 3, suppose there were a van-Kampen diagram with more disks than Euler characteristic. Then there must be some vertex of valence at least $3$. Since $w$ is positive, the complementary regions must have boundaries which alternate between positive and negative powers of $w$, so the degree of the vertex must be even. On the other hand, since $\Gamma$ must immerse in a wedge of two circles, the degree of every vertex must be at most $4$, so there is consequently some vertex of degree exactly $4$. Since each $a$ is isolated, at least $2$ edges must be labelled $b$; hence exactly two. Hence exactly two edges are labelled $a$. But one of these must be incoming and one outgoing, and therefore these are adjacent, contrary to the fact that $w$ does not contain an $a^{\pm 2}$. Example 1 above is quite striking to me. When $w$ is in the commutator subgroup, one can consider van-Kampen diagrams as above without the injectivity property, but with the property that every power of $w$ on the boundary of a disk is positive; call such a van-Kampen diagram monotone. It turns out that monotone van-Kampen diagrams always exist when $w \in [F,F]$, and in fact that norm-minimizing surfaces representing powers of the generator of $H_2(D(w))$ are associated to certain monotone diagrams. The construction of such surfaces is an important step in the argument that stable commutator length (a kind of relative Gromov norm) is rational in free groups.
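As a sanity check on why the doubles $D(w)$ are such a natural home for surface subgroups (this is a standard example, not taken from Kim-Wilton's paper): for the simplest word in the commutator subgroup, $w = [a,b]$, the double is itself a closed surface group, since $$D([a,b]) = F_2 *_{\langle [a,b] \rangle} F_2 \cong \langle a,b,c,d \; | \; [a,b] = [c,d] \rangle \cong \langle a,b,c,d \; | \; [a,b][d,c] = 1 \rangle,$$ which is the fundamental group of the closed genus $2$ surface (equivalently: double the once-punctured torus, whose fundamental group is $F_2$ with boundary word $[a,b]$, along its boundary). Here the surface subgroup is the whole group.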
In my paper scl, sails and surgery I showed that monomorphisms of free groups that send every generator to a power of that generator induce isometries of the $\text{scl}$ norm; in other words, there is a natural correspondence between certain equivalence classes of monotone surfaces for an arbitrary word in $[F,F]$ and for a word of the kind that Kim-Wilton show is polygonal (Note: Henry Wilton tells me that Brady, Forester and Martinez-Pedroza have independently shown that $D(w)$ contains a surface group for such $w$, but I have not seen their preprint (though I would be very grateful to get a copy!)). In any case, if not every word is polygonal, all is not lost. To show that $D(w)$ contains a surface subgroup it suffices to show that $D(w')$ contains a surface subgroup, where $w$ and $w'$ differ by an automorphism of $F$. Kim-Wilton conjecture that one can always find an automorphism $\phi$ so that $\phi(w)$ is polygonal. In fact, they make the following: Conjecture (Kim-Wilton; tiling conjecture): A word $w$ not contained in a proper free factor, of shortest length (in a given generating set) in its orbit under $\text{Aut}(F)$, is polygonal. If true, this would give a positive answer to Gromov's question for groups of the form $D(w)$. ## Combable functions June 8, 2009 in Ergodic Theory | Tags: central limit theorem, combing, defect, finite state automaton, hyperbolic groups, Patterson-Sullivan measure, quasimorphisms | by Danny Calegari | 2 comments The purpose of this post is to discuss my recent paper with Koji Fujiwara, which will shortly appear in Ergodic Theory and Dynamical Systems, both for its own sake, and in order to motivate some open questions that I find very intriguing. The content of the paper is a mixture of ergodic theory, geometric group theory, and computer science, and was partly inspired by a paper of Jean-Claude Picaud. To state the results of the paper, I must first introduce a few definitions and some background. Let $\Gamma$ be a finite directed graph (hereafter a digraph) with an initial vertex, and edges labeled by elements of a finite set $S$ in such a way that each vertex has at most one outgoing edge with any given label. A finite directed path in $\Gamma$ starting at the initial vertex determines a word in the alphabet $S$, by reading the labels on the edges traversed (in order). The set $L \subset S^*$ of words obtained in this way is an example of what is called a regular language, and is said to be parameterized by $\Gamma$. Note that this is not the most general kind of regular language; in particular, any language $L$ of this kind will necessarily be prefix-closed (i.e. if $w \in L$ then every prefix of $w$ is also in $L$). Note also that different digraphs might parameterize the same (prefix-closed) regular language $L$. If $S$ is a set of generators for a group $G$, there is an obvious map $L \to G$ called the evaluation map that takes a word $w$ to the element of $G$ represented by that word. Definition: Let $G$ be a group, and $S$ a finite generating set. A combing of $G$ is a (prefix-closed) regular language $L$ for which the evaluation map $L \to G$ is a bijection, and such that every $w \in L$ represents a geodesic in $G$. The intuition behind this definition is that the set of words in $L$ determines a directed spanning tree in the Cayley graph $C_S(G)$ starting at $\text{id}$, and such that every directed path in the tree is a geodesic in $C_S(G)$.
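For a combing that is small enough to write out completely (a standard example; the code is my own illustration): take $G = F_2$ and $S = \{a, a^{-1}, b, b^{-1}\}$. The language of reduced words is prefix-closed and regular, every reduced word is a geodesic, and evaluation is a bijection, so it is a combing; the parameterizing digraph has one initial vertex plus one vertex per generator, with backtracking edges omitted.

```python
# Enumerate the combing of F_2 by reduced words from its parameterizing digraph.
GENS = ['a', 'A', 'b', 'B']                 # A = a^{-1}, B = b^{-1}
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def successors(state):
    """Outgoing edge labels at a digraph vertex ('start' or the last letter read)."""
    return GENS if state == 'start' else [g for g in GENS if g != INV[state]]

def language_up_to(n):
    """All words of the regular language L (reduced words) of length at most n."""
    words, frontier = [''], [('', 'start')]
    for _ in range(n):
        frontier = [(w + g, g) for w, s in frontier for g in successors(s)]
        words.extend(w for w, _ in frontier)
    return words

print(len(language_up_to(3)))   # 1 + 4 + 12 + 36 = 53 reduced words = 53 group elements
```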
Note that there are other definitions of combing in the literature; for example, some authors do not require the evaluation map to be a bijection, but only a coarse bijection. Fundamental to the theory of combings is the following Theorem, which paraphrases one of the main results of this paper: Theorem: (Cannon) Let $G$ be a hyperbolic group, and let $S$ be a finite generating set. Choose a total order on the elements of $S$. Then the language $L$ of lexicographically first geodesics in $G$ is a combing. The language $L$ described in this theorem is obviously geodesic and prefix-closed, and the evaluation map is bijective; the content of the theorem is that $L$ is regular, and parameterized by some finite digraph $\Gamma$. In the sequel, we restrict attention exclusively to hyperbolic groups $G$. Given a (hyperbolic) group $G$, a generating set $S$, a combing $L$, one makes the following definition: Definition: A function $\phi:G \to \mathbb{Z}$ is weakly combable (with respect to $S,L$) if there is a digraph $\Gamma$ parameterizing $L$ and a function $d\phi$ from the vertices of $\Gamma$ to $\mathbb{Z}$ so that for any $w \in L$, corresponding to a path $\gamma$ in $\Gamma$, there is an equality $\phi(w) = \sum_i d\phi(\gamma(i))$. In other words, a function $\phi$ is weakly combable if it can be obtained by “integrating” a function $d\phi$ along the paths of a combing. One furthermore says that a function is combable if it changes by a bounded amount under right-multiplication by an element of $S$, and bicombable if it changes by a bounded amount under either left or right multiplication by an element of $S$. The property of being (bi-)combable does not depend on the choice of a generating set $S$ or a combing $L$. Example: Word length (with respect to a given generating set $S$) is bicombable. Example: Let $\phi:G \to \mathbb{Z}$ be a homomorphism. Then $\phi$ is bicombable. Example: The Brooks counting quasimorphisms (on a free group) and the Epstein-Fujiwara counting quasimorphisms are bicombable. Example: The sum or difference of two (bi-)combable functions is (bi-)combable. A particularly interesting example is the following: Example: Let $S$ be a finite set which generates $G$ as a semigroup. Let $\phi_S$ denote word length with respect to $S$, and $\phi_{S^{-1}}$ denote word length with respect to $S^{-1}$ (which also generates $G$ as a semigroup). Then the difference $\psi_S:= \phi_S - \phi_{S^{-1}}$ is a bicombable quasimorphism. The main theorem proved in the paper concerns the statistical distribution of values of a bicombable function. Theorem: Let $G$ be a hyperbolic group, and let $\phi$ be a bicombable function on $G$. Let $\overline{\phi}_n$ be the value of $\phi$ on a random word in $G$ of length $n$ (with respect to a certain measure $\widehat{\nu}$ depending on a choice of generating set). Then there are algebraic numbers $E$ and $\sigma$ so that as distributions, $n^{-1/2}(\overline{\phi}_n - nE)$ converges to a normal distribution with standard deviation $\sigma$. One interesting corollary concerns the length of typical words in one generating set versus another. The first thing that every geometric group theorist learns is that if $S_1, S_2$ are two finite generating sets for a group $G$, then there is a constant $K$ so that every word of length $n$ in one generating set has length at most $nK$ and at least $n/K$ in the other generating set. If one considers an example like $\mathbb{Z}^2$, one sees that this is the best possible estimate, even statistically. 
However, if one restricts attention to a hyperbolic group $G$, then one can do much better for typical words: Corollary: Let $G$ be hyperbolic, and let $S_1,S_2$ be two finite generating sets. There is an algebraic number $\lambda_{1,2}$ so that almost all words of length $n$ with respect to the $S_1$ generating set have length almost equal to $n\lambda_{1,2}$ with respect to the $S_2$ generating set, with error of size $O(\sqrt{n})$. Let me indicate very briefly how the proof of the theorem goes. Sketch of Proof: Let $\phi$ be bicombable, and let $d\phi$ be a function from the vertices of $\Gamma$ to $\mathbb{Z}$, where $\Gamma$ is a digraph parameterizing $L$. There is a bijection between the set of elements in $G$ of word length $n$ and the set of directed paths in $\Gamma$ of length $n$ that start at the initial vertex. So to understand the distribution of $\phi$, we need to understand the behaviour of a typical long path in $\Gamma$. Define a component of $\Gamma$ to be a maximal subgraph with the property that there is a directed path (in the component) from any vertex to any other vertex. One can define a new digraph $C(\Gamma)$ without loops, with one vertex for each component of $\Gamma$, in an obvious way. Each component $C$ determines an adjacency matrix $M_C$, with $ij$-entry equal to $1$ if there is a directed edge from vertex $i$ to vertex $j$, and equal to $0$ otherwise. A component $C$ is big if the biggest real eigenvalue $\lambda$ of $M_C$ is at least as big as the biggest real eigenvalue of the matrices associated to every other component. A random long walk in $\Gamma$ will spend most of its time entirely in big components, so these are the only components we need to consider to understand the statistical distribution of $\phi$. A theorem of Coornaert implies that there are no big components of $C(\Gamma)$ in series; i.e. there are no directed paths in $C(\Gamma)$ from one big component to another (one also says that the big components do not communicate). This means that a typical long walk in $\Gamma$ is entirely contained in a single big component, except for a (relatively short) path at the start and the end of the walk. So the distribution of $\phi$ gets independent contributions, one from each big component. The contribution from an individual big component is not hard to understand: the central limit theorem for stationary Markov chains says that for elements of $G$ corresponding to paths that spend almost all their time in a given big component $C$ there is a central limit theorem  $n^{-1/2}(\overline{\phi}_n - nE_C) \to N(0,\sigma_C)$ where the mean $E_C$ and standard deviation $\sigma_C$ depend only on $C$. The problem is to show that the means and standard deviations associated to different big components are the same. Everything up to this point only depends on weak combability of $\phi$; to finish the proof one must use bicombability. It is not hard to show that if $\gamma$ is a typical infinite walk in a component $C$, then the subpaths of $\gamma$ of length $n$ are distributed like random walks of length $n$ in $C$. What this means is that the mean and standard deviation $E_C,\sigma_C$ associated to a big component $C$ can be recovered from the distribution of $\phi$ on a single infinite “typical” path in $C$. Such an infinite path corresponds to an infinite geodesic in $G$, converging to a definite point in the Gromov boundary $\partial G$. 
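As a toy numerical companion to this part of the proof sketch (my own check, not from the paper): for the reduced-word combing of $F_2$ described earlier, the initial vertex is a transient singleton and the four letter-vertices form a single "big" component with Perron eigenvalue $3$. Taking $\phi$ to be the homomorphism "exponent sum of $a$" (which is bicombable, with $d\phi$ equal to $+1$ at the vertex $a$, $-1$ at $a^{-1}$, and $0$ elsewhere) and sampling uniformly random geodesics of length $n$, the empirical mean of $\phi/n$ hovers near $0$ and the standard deviation grows like $\sqrt{n}$, consistent with the central limit theorem above.

```python
import random
import numpy as np

random.seed(0)
letters = ['a', 'A', 'b', 'B']
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

# The big component: the four letter-vertices, each with out-degree 3.
M = np.array([[0 if y == inverse[x] else 1 for y in letters] for x in letters])
print(max(abs(np.linalg.eigvals(M))))   # Perron eigenvalue 3

# d(phi) for phi = exponent sum of a: +1 at vertex 'a', -1 at 'A', 0 elsewhere;
# summing d(phi) along the path of a word computes phi of that word.
dphi = {'a': 1, 'A': -1, 'b': 0, 'B': 0}

def random_reduced_word(n):
    # uniform over reduced words of length n, since every vertex of the
    # big component has the same out-degree
    w = [random.choice(letters)]
    while len(w) < n:
        w.append(random.choice([x for x in letters if x != inverse[w[-1]]]))
    return w

for n in (100, 400, 1600):
    vals = np.array([sum(dphi[x] for x in random_reduced_word(n))
                     for _ in range(1000)])
    print(n, vals.mean() / n, vals.std() / np.sqrt(n))   # mean ~ 0, std/sqrt(n) ~ constant
```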
Another theorem of Coornaert (from the same paper) says that the action of $G$ on its boundary $\partial G$ is ergodic with respect to a certain natural measure called a Patterson-Sullivan measure (see Coornaert’s paper for details). This means that there are typical infinite geodesics $\gamma,\gamma'$ associated to components $C$ and $C'$ for which some $g \in G$ takes $\gamma$ to a geodesic $g\gamma$ ending at the same point in $\partial G$ as $\gamma'$. Bicombability implies that the values of $\phi$ on $\gamma$ and $g\gamma$ differ by a bounded amount. Moreover, since $g\gamma$ and $\gamma'$ are asymptotic to the same point at infinity, combability implies that the values of $\phi$ on $g\gamma$ and $\gamma'$ also differ by a bounded amount. This is enough to deduce that $E_C = E_{C'}$ and $\sigma_C = \sigma_{C'}$, and one obtains a (global) central limit theorem for $\phi$ on $G$. qed. This obviously raises several questions, some of which seem very hard, including: Question 1: Let $\phi$ be an arbitrary quasimorphism on a hyperbolic group $G$ (even the case $G$ is free is interesting). Does $\phi$ satisfy a central limit theorem? Question 2: Let $\phi$ be an arbitrary quasimorphism on a hyperbolic group $G$. Does $\phi$ satisfy a central limit theorem with respect to a random walk on $G$? (i.e. one considers the distribution of values of $\phi$ not on the set of elements of $G$ of word length $n$, but on the set of elements obtained by a random walk on $G$ of length $n$, and lets $n$ go to infinity) All bicombable quasimorphisms satisfy an important property which is essential to our proof of the central limit theorem: they are local, which is to say, they are defined as a sum of local contributions. In the continuous world, they are the analogue of the so-called de Rham quasimorphisms on $\pi_1(M)$ where $M$ is a closed negatively curved Riemannian manifold; such quasimorphisms are defined by choosing a $1$-form $\alpha$, and defining $\phi_\alpha(g)$ to be equal to the integral $\int_{\gamma_g} \alpha$, where $\gamma_g$ is the closed oriented based geodesic in $M$ in the homotopy class of $g$. De Rham quasimorphisms, being local, also satisfy a central limit theorem. This locality manifests itself in another way, in terms of defects. Let $\phi$ be a quasimorphism on a hyperbolic group $G$. Recall that the defect $D(\phi)$ is the supremum of $|\phi(gh) - \phi(g) -\phi(h)|$ over all pairs of elements $g,h \in G$. A quasimorphism is further said to be homogeneous if $\phi(g^n) = n\phi(g)$ for all integers $n$. If $\phi$ is an arbitrary quasimorphism, one may homogenize it by taking a limit $\psi(g) = \lim_{n \to \infty} \phi(g^n)/n$; one says that $\psi$ is the homogenization of $\phi$ in this case. Homogenization typically does not preserve defects; however, there is an inequality $D(\psi) \le 2D(\phi)$. If $\phi$ is local, one expects this inequality to be an equality. For, in a hyperbolic group, the contribution to the defect of a local quasimorphism all arises from the interaction of the suffix of (a geodesic word representing the element) $g$ with the prefix of $h$ (with notation as above). When one homogenizes, one picks up another contribution to the defect from the interaction of the prefix of $g$ with the suffix of $h$; since these two contributions are essentially independent, one expects that homogenizing a local quasimorphism should exactly double the defect. 
This is the case for bicombable and de Rham quasimorphisms, and can perhaps be used to define locality for a quasimorphism on an arbitrary group. This discussion provokes the following key question: Question 3: Let $G$ be a group, and let $\psi$ be a homogeneous quasimorphism. Is there a quasimorphism $\phi$ with homogenization $\psi$, satisfying $D(\psi) = 2D(\phi)$? Example: The answer to question 3 is “yes” if $\psi$ is the rotation quasimorphism associated to an action of $G$ on $S^1$ by orientation-preserving homeomorphisms (this is nontrivial; see Proposition 4.70 from my monograph). Example: Let $C$ be any homologically trivial group $1$-boundary. Then there is some extremal homogeneous quasimorphism $\psi$ for $C$ (i.e. a quasimorphism achieving equality $\text{scl}(C) = \psi(C)/2D(\psi)$ under generalized Bavard duality; see this post) for which there is $\phi$ with homogenization $\psi$ satisfying $D(\psi) = 2D(\phi)$. Consequently, if every point in the boundary of the unit ball in the $\text{scl}$ norm is contained in a unique supporting hyperplane, the answer to question 3 is “yes” for any quasimorphism on $G$. Any quasimorphism on $G$ can be pulled back to a quasimorphism on a free group, but this does not seem to make anything easier. In particular, question 3 is completely open (as far as I know) when $G$ is a free group. An interesting test case might be the homogenization of an infinite sum of Brooks functions $\sum_w h_w$ for some infinite non-nested family of words $\lbrace w \rbrace$. If the answer to this question is false, and one can find a homogeneous quasimorphism $\psi$ which is not the homogenization of any “local” quasimorphism, then perhaps $\psi$ does not satisfy a central limit theorem. One can try to approach this problem from the other direction: Question 4: Given a function $f$ defined on the ball of radius $n$ in a free group $F$, one defines the defect $D(f)$ in the usual way, restricted to pairs of elements $g,h$ for which $g,h,gh$ are all of length at most $n$. Under what conditions can $f$ be extended to a function on the ball of radius $n+1$ without increasing the defect? If one had a good procedure for building a quasimorphism “by hand” (so to speak), one could try to build a quasimorphism that failed to satisfy a central limit theorem, or perhaps find reasons why this was impossible. ## Groups with free subgroups May 28, 2009 in Groups | Tags: amenable groups, free groups, hyperbolic groups, laws, pingpong, Thompson's group, Tits alternative, von Neumann conjecture | by Danny Calegari | 3 comments More ambitious than simply showing that a group is infinite is to show that it contains an infinite subgroup of a certain kind. One of the most important kinds of subgroup to study are free groups. Hence, one is interested in the question: Question: When does a group contain a (nonabelian) free subgroup? Again, one can (and does) ask this question both about a specific group, and about certain classes of groups, or for a typical (in some sense) group from some given family. Example: If $\mathcal{P}$ is a property of groups that is inherited by subgroups, then if no free group satisfies $\mathcal{P}$, no group that satisfies $\mathcal{P}$ can contain a free subgroup. An important property of this kind is amenability. A (discrete) group $G$ is amenable if it admits an invariant mean; that is, if there is a linear map $m: L^\infty(G) \to \mathbb{R}$ (i.e. a way to define the average of a bounded function over $G$) satisfying three basic properties: 1. 
$m(f) \ge 0$ if $f\ge 0$ (i.e. the average of a non-negative function is non-negative) 2. $m(\chi_G)=1$ where $\chi_G$ is the constant function taking the value $1$ everywhere on $G$ (i.e. the average of the constant function $1$ is normalized to be $1$) 3. $m(g\cdot f) = m(f)$ for every ${}g \in G$ and $f \in L^\infty(G)$, where $(g\cdot f)(x) = f(g^{-1}x)$ (i.e. the mean is invariant under the obvious action of $G$ on $L^\infty(G)$) If $H$ is a subgroup of $G$, there are (many) $H$-invariant homomorphisms $j: L^\infty(H) \to L^\infty(G)$ taking non-negative functions to non-negative functions, and $\chi_H$ to $\chi_G$; for example, the (left) action of $H$ on $G$ breaks up into a collection of copies of $H$ acting on itself, right-multiplied by a collection of right coset representatives. After choosing such a choice of representatives $\lbrace g_\alpha \rbrace$, one for each coset $Hg_\alpha$, we can define $j(f)(hg_\alpha) = f(h)$. Composing with $m$ shows that every subgroup of an amenable group is amenable (this is harder to see in the “geometric” definition of amenable groups in terms of Folner sets). On the other hand, as is well-known, a nonabelian free group is not amenable. Hence, amenable groups do not contain nonabelian free subgroups. The usual way to see that a nonabelian free group is not amenable is to observe that it contains enough disjoint “copies” of big subsets. For concreteness, let $F$ denote the free group on two generators $a,b$, and write their inverses as $A,B$. Let $W_a, W_A$ denote the set of reduced words that start with either $a$ or $A$, and let $\chi_a,\chi_A$ denote the indicator functions of $W_a,W_A$ respectively. We suppose that $F$ is amenable, and derive a contradiction. Note that $F = W_a \cup aW_A$, so $m(\chi_a) + m(\chi_A) \ge 1$. Let $V$ denote the set of reduced words that start with one of the strings $a,A,ba,bA$, and let $\chi_V$ denote the indicator function of $V$. Notice that $V$ is made of two disjoint copies of each of $W_a,W_A$. So on the one hand, $m(\chi_V) \le m(\chi_F) = 1$, but on the other hand, $m(\chi_V) = 2 (m(\chi_a)+m(\chi_A)) \ge 2$. Conversely, the usual way to show that a group $G$ is amenable is to use the Folner condition. Suppose that $G$ is finitely generated by some subset $S$, and let $C$ denote the Cayley graph of $G$ (so that $C$ is a homogeneous locally finite graph). Suppose one can find finite subsets $U_i$ of vertices so that $|\partial U_i|/|U_i| \to 0$ (here $|U_i|$ means the number of vertices in $U_i$, and  $|\partial U_i|$ means the number of vertices in $U_i$ that share an edge with $C - U_i$). Since the “boundary” of $U_i$ is small compared to $U_i$, averaging a bounded function over $U_i$ is an “almost invariant” mean; a weak limit (in the dual space to $L^\infty(G)$) is an invariant mean. Examples of amenable groups include 1. Finite groups 2. Abelian groups 3. Unions and extensions of amenable groups 4. Groups of subexponential growth and many others. For instance, virtually solvable groups (i.e. groups containing a solvable subgroup with finite index) are amenable. Example: No amenable group can contain a nonabelian free subgroup. The von Neumann conjecture asked whether the converse was true. This conjecture was disproved by Olshanskii. Subsequently, Adyan showed that the infinite free Burnside groups are not amenable. These are groups $B(m,n)$ with $m\ge 2$ generators, and subject only to the relations that the $n$th power of every element is trivial. 
When $n$ is odd and at least $665$, these groups are infinite and nonamenable. Since they are torsion groups, they do not even contain a copy of $\mathbb{Z}$, let alone a nonabelian free group! Example: The Burnside groups are examples of groups that obey a law; i.e. there is a word $w(x_1,x_2,\cdots,x_n)$ in finitely many free variables, such that $w(g_1,g_2,\cdots,g_n)=\text{id}$ for every choice of $g_1,\cdots,g_n \in G$. For example, an abelian group satisfies the law $x_1x_2x_1^{-1}x_2^{-1}=\text{id}$. Evidently, a group that obeys a law does not contain a nonabelian free subgroup. However, there are examples of groups which do not obey a law, but which also do not contain any nonabelian free subgroup. An example is the classical Thompson’s group $F$, which is the group of orientation-preserving piecewise-linear homeomorphisms of $[0,1]$ with finitely many breakpoints at dyadic rationals (i.e. points of the form $p/2^q$ for integers $p,q$) and with slopes integral powers of $2$. To see that this group does not obey a law, one can show (quite easily) that in fact $F$ is dense (in the $C^0$ topology) in the group $\text{Homeo}^+([0,1])$ of all orientation-preserving homeomorphisms of the interval. This latter group contains nonabelian free groups; by approximating the generators of such a group arbitrarily closely, one obtains pairs of elements in $F$ that do not satisfy any identity of length shorter than any given constant. On the other hand, a famous theorem of Brin-Squier says that $F$ does not contain any nonabelian free subgroup. In fact, the entire group $\text{PL}^+([0,1])$ does not contain any nonabelian free subgroup. A short proof of this fact can be found in my paper as a corollary of the fact that every subgroup $G$ of $\text{PL}^+([0,1])$ has vanishing stable commutator length; since stable commutator length is nonvanishing in nonabelian free groups, this shows that there are no such subgroups of $\text{PL}^+([0,1])$. (Incidentally, and complementarily, there is a very short proof that stable commutator length vanishes on any group that obeys a law; we will give this proof in a subsequent post). Example: If $G$ surjects onto $H$, and $H$ contains a free subgroup $F$, then there is a section from $F$ to $G$ (by freeness), and therefore $G$ contains a free subgroup. Example: The most useful way to show that $G$ contains a nonabelian free subgroup is to find a suitable action of $G$ on some space $X$. The following is known as Klein’s ping-pong lemma. Suppose one can find disjoint subsets $U^\pm$ and $V^\pm$ of $X$, and elements $g,h \in G$ so that $g(U^+ \cup V^\pm) \subset U^+$, $g^{-1}(U^- \cup V^\pm) \subset U^-$, and similarly interchanging the roles of $U^\pm, V^\pm$ and $g,h$. If $w$ is a reduced word in $g^{\pm 1},h^{\pm 1}$, one can follow the trajectory of a point under the orbit of subwords of $w$ to verify that $w$ is nontrivial. The most common way to apply this in practice is when $g,h$ act on $X$ with source-sink dynamics; i.e. the element $g$ has two fixed points $u^\pm$ so that every other point converges to $u^+$ under positive powers of $g$, and to $u^-$ under negative powers of $g$. Similarly, $h$ has two fixed points $v^\pm$ with similar dynamics. If the points $u^\pm,v^\pm$ are disjoint, and $X$ is compact, one can take any small open neighborhoods $U^\pm,V^\pm$ of $u^\pm,v^\pm$, and then sufficiently large powers of $g$ and $h$ will satisfy the hypotheses of ping-pong. Example: Every hyperbolic group $G$ acts on its Gromov boundary $\partial_\infty G$. 
This boundary is the set of equivalence classes of quasigeodesic rays in (the Cayley graph of) $G$, where two rays are equivalent if they are a finite Hausdorff distance apart. Non-torsion elements act on the boundary with source-sink dynamics. Consequently, every pair of non-torsion elements in a hyperbolic group either generate a virtually cyclic group, or have powers that generate a nonabelian free group. It is striking to see how easy it is to construct nonabelian free subgroups of a hyperbolic group, and how difficult to construct closed surface subgroups. We will return to the example of hyperbolic groups in a future post. Example: The Tits alternative says that any linear group $G$ (i.e. any subgroup of $\text{GL}(n,\mathbb{R})$ for some $n$) either contains a nonabelian free subgroup, or is virtually solvable (and therefore amenable). This can be derived from ping-pong, where $G$ is made to act on certain spaces derived from the linear action (e.g. locally symmetric spaces compactified in certain ways, and buildings associated to discrete valuations on the ring of entries of matrix elements of $G$). Example: There is a Tits alternative for subgroups of other kinds of groups, for example mapping class groups, as shown by Ivanov and McCarthy. The mapping class group (of a surface) acts on the Thurston boundary of Teichmuller space. Every subgroup of the mapping class group either contains a nonabelian free subgroup, or is virtually abelian. Roughly speaking, either elements move points in the boundary with enough dynamics to be able to do ping-pong, or else the action is “localized” in a train-track chart, and one obtains a linear representation of the group (enough to apply the ordinary Tits alternative). Virtually solvable subgroups of mapping class groups are virtually abelian. Example: A similar Tits alternative holds for $\text{Out}(F_n)$. This was shown by Bestvina-Feighn-Handel in these three papers (the third paper shows that solvable subgroups are virtually abelian, thus emphasizing the parallels with mapping class groups). Example: If $G$ is a finitely generated group of homeomorphisms of $S^1$, then there is a kind of Tits alternative, first proposed by Ghys, and proved by Margulis: either $G$ preserves a probability measure on $S^1$ (which might be singular), or it contains a nonabelian free subgroup. To see this, first note that either $G$ has a finite orbit (which supports an invariant probability measure) or the action is semi-conjugate to a minimal action (one with all orbits dense). In the second case, the proof depends on understanding the centralizer of the group action: either the centralizer is infinite, in which case the group is conjugate to a group of rotations, or it is finite cyclic, and one obtains an action of $G$ on a “smaller” circle, by quotienting out by the centralizer. So one may assume the action is minimal with trivial centralizer. In this case, one shows that the action has the property that for any nonempty intervals $I,J$ in $S^1$, there is some ${}g \in G$ with $g(I) \subset J$; i.e. any interval may be put inside any other interval by some element of the group. For such an action, it is very easy to do ping-pong. Incidentally, a minor variation on this result, and with essentially this argument, was established by Thurston in the context of uniform foliations of $3$-manifolds before Ghys proposed his question. 
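As a computational aside on ping-pong (my own toy check, not part of the post): the classical Sanov example applies the ping-pong lemma to the matrices $\begin{pmatrix}1&2\\0&1\end{pmatrix}$ and $\begin{pmatrix}1&0\\2&1\end{pmatrix}$ acting on $\mathbb{R}^2$, showing that they generate a free subgroup of $\text{SL}(2,\mathbb{Z})$. The sketch below only verifies the much weaker finite statement that no nonempty reduced word of length at most $8$ in these matrices is the identity; it is a sanity check, not a proof.

```python
import numpy as np
from itertools import product

g = np.array([[1, 2], [0, 1]])
G = np.array([[1, -2], [0, 1]])      # g^{-1}
h = np.array([[1, 0], [2, 1]])
H = np.array([[1, 0], [-2, 1]])      # h^{-1}
mats = {'g': g, 'G': G, 'h': h, 'H': H}
inverse = {'g': 'G', 'G': 'g', 'h': 'H', 'H': 'h'}

def reduced_words(max_len):
    """All nonempty reduced words in g, h and their inverses, up to max_len."""
    for length in range(1, max_len + 1):
        for w in product('gGhH', repeat=length):
            if all(w[i + 1] != inverse[w[i]] for i in range(length - 1)):
                yield w

trivial = []
for w in reduced_words(8):
    m = np.eye(2, dtype=int)
    for x in w:
        m = m @ mats[x]
    if np.array_equal(m, np.eye(2, dtype=int)):
        trivial.append(w)
print(trivial)   # []: no short reduced word evaluates to the identity
```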
Example: If $\rho_t$ is an (algebraic) family of representations of a (countable) free group $F$ into an algebraic group, then either some element $g \in F$ is in the kernel of every $\rho_t$, or the set of faithful representations is "generic", i.e. the intersection of countably many open dense sets. This is because the set of representations for which a given element is in the kernel is Zariski closed, and therefore its complement is open and either empty or dense (one must add suitable hypotheses or conditions to the above to make it rigorous). ## five week plan May 25, 2009 in Overview | Tags: Gromov's question, hyperbolic groups, scl, stable commutator length, surface groups | by Danny Calegari | 4 comments As an experiment, I plan to spend the next five weeks documenting my current research on this blog. This research comprises several related projects, but most are concerned in one way or another with the general program of studying the geometry of a space by probing it with surfaces. Since I am nominally a topologist, these surfaces are real $2$-manifolds, and I am usually interested in working in the homotopy category (or some rational "quotient" of it). I am especially concerned with surfaces with boundary, and even (occasionally) with corners. Since it is good to have a "big question" lurking somewhere in the background (for the purposes of motivation and advertising, if nothing else), I should admit from the start that I am interested in Gromov's well-known question about surface subgroups, which asks: Question (Gromov): Does every one-ended word-hyperbolic group contain a closed hyperbolic surface subgroup? I don't have strong feelings about whether the answer to this question is "yes" or "no", but I do think the question can be sharpened usefully in many ways, and it is my intention to do so. Gromov's question is certainly inspired by questions such as Waldhausen's conjecture and the virtual fibration conjecture in $3$-manifold topology, but it is hard to imagine that a proof of one of these conjectures would shed much light on Gromov's question in general. At least one essential tool in $3$-manifold topology — namely Dehn's lemma — has no meaningful analogue in geometric group theory, and I think it is important to try to imagine different methods of constructing surface groups from "first principles". Another long-term project that informs much of my current research is the problem of understanding stable commutator length in free groups. The interested reader can learn something about this from my monograph (which can be downloaded from this page). I hope to explain why this is a fundamental and interesting problem, with rich structure and many potential applications.
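Before moving on, here is a small symbolic illustration of the genericity example above (my own sketch; the particular family is a standard choice, and everything beyond the statement in the example is an assumption of the illustration): take the family $\rho_t$ of representations of $F_2 = \langle a, b\rangle$ into $\text{SL}(2)$ sending $a \mapsto \begin{pmatrix}1&t\\0&1\end{pmatrix}$ and $b \mapsto \begin{pmatrix}1&0\\t&1\end{pmatrix}$. For a fixed word $w$, the condition $w \in \ker \rho_t$ is a system of polynomial equations in $t$, so the set of bad parameters is Zariski closed; for the commutator it is just $\{t=0\}$.

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Matrix([[1, t], [0, 1]])
b = sp.Matrix([[1, 0], [t, 1]])
reps = {'a': a, 'A': a.inv(), 'b': b, 'B': b.inv()}

def image(word):
    """Evaluate a word in a, b and their inverses (A, B) under rho_t."""
    m = sp.eye(2)
    for x in word:
        m = m * reps[x]
    return sp.simplify(m)

# The entries of rho_t([a,b]) - I are polynomials in t, so the set of t for
# which the commutator is killed is Zariski closed.
comm = image('abAB')
conditions = [sp.simplify(e) for e in (comm - sp.eye(2))]
print(conditions)
print(sp.solve(conditions, t))   # only t = 0 kills the commutator
```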
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 574, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9348185062408447, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/28393/why-does-cantors-diagonal-argument-yield-uncomputable-numbers
Why does Cantor's diagonal argument yield uncomputable numbers? As everyone knows, the set of real numbers is uncountable. The most ubiquitous proof of this fact uses Cantor's diagonal argument. However, I was surprised to learn about a gap in my perception of the real numbers: A computable number is a real number that can be computed to within any desired precision by a finite, terminating algorithm. Turns out that the set of computable numbers is countable. My mind is effectively blown at this point. So I'm trying to reconcile this with Cantor's diagonal argument. Wikipedia has this to say: "...Cantor's diagonal argument cannot be used to produce uncountably many computable reals; at best, the reals formed from this method will be uncomputable." So much for background information. Say I have a list of real numbers $.a_{1n}a_{2n}a_{3n}\ldots$ for $n\geq 1$. Why do we get more than just computable numbers if we select digits different from the diagonal digits $a_{ii}$? I.e. if I make a number $.b_1b_2b_3\ldots$ where $b_i\neq a_{ii}$, why is this number not always computable? The main issue I'm having is that it seems like I'm "computing" the digits $b_i$ in some sense. Is the problem that I have a choice for each digit? Or is there some other subtlety that I'm missing? - Cantor's diagonalization procedure is an algorithm that computes a real number (given a recursive sequence of real numbers). – quanta Mar 22 '11 at 0:14 The quote you mention does not appear and neither does any mention of computability. – quanta Mar 22 '11 at 0:15 that paragraph is extremely badly written. There is nothing of content in that sentence. – quanta Mar 22 '11 at 0:25 4 Answers You are only computing the digits $b_i$ relative to the given enumeration $a_{i,n}$. So if the function $f(i,n) = a_{i,n}$ is computable then the $b$ you construct is also computable, and not equal to any of the reals $a_i$. However, if you have no way to compute the digits $a_{i,n}$ then you can't compute $b$ either. It only seems like you are computing $b_i$ because you are assuming that you already have a way to compute the digits $a_{i,n}$. However, there is no computable function $f(i,n)$ that satisfies these three requirements: • for any $i$ and $n$, $f(i,n)$ eventually returns some output. In other words, $f(i,n)$ is a total function • for every $i$, the function $n \mapsto f(i,n)$ gives a decimal expansion of a real number • for every computable real $a$ there is some $i$ with $a_n = f(i,n)$ for all $n$ The diagonalization argument is actually the "standard" way to prove there is no $f(i,n)$ with those properties. If there was, then the $b$ you get from it would be another computable real, which is impossible. Working in set theory, you can do the construction with a noncomputable function $f(i,n)$ which lists every computable real. Since this $f$ is not computable, $b$ need not be computable, and in fact it won't be if $f$ lists all the computable reals. - You said what I was typing up better and faster than I could say it. :-) – Steven Stadnicki Mar 22 '11 at 0:31 Thanks! I was afraid the answer might be a little informal. – Carl Mummert Mar 22 '11 at 0:42 The reason that the computable numbers are countable is that a computable number must be given by an algorithm of finite length, and any such algorithm must be written in a language with finitely (say $n$) many symbols.
The number of algorithms of length $m$ is thus at most $n^m$, so we can label each algorithm as (length $m$, natural number $x \le n^m$) and thus count them using an argument similar to Cantor's. - Since there are countably many computable real numbers (see Alex's answer), our listing of "all the real numbers" may in fact include each of these without any problem. However, when you apply Cantor's diagonalisation argument to this list, you get a real number that is not on the list, and must therefore be uncomputable. (Similarly, this new real number may fail to be rational, since our listing of "all the real numbers" may include all rational numbers.) - I'd like to take a closer look at the apparent contradiction you get when trying to apply Cantor's diagonal slash to the computable numbers, so let me repeat the argument in somewhat more detail: A real number $r$ is computable if and only if there exists an algorithm which, given $n\in \mathbb{N}$, computes the $n$-th digit of the decimal expansion of $r$ (care has to be taken because the decimal expansion is not unique in general; in this case the algorithm has to output digits of one expansion consistently). So you pick an explicit machine model (for example, Turing machines; the word "algorithm" means "Turing machine") and an enumeration of all algorithms $A_1,A_2,...$, the outputs of which are called $a_{i,n}$. You construct a new algorithm which, given $n$, computes $a_{n,n}$ (for Turing machines this is essentially a Universal Turing machine) and outputs $b_n$ which is different from $a_{n,n}$. You can make an explicit choice here, for example, $b_n:=2$ if $a_{n,n}=1$, and $b_n:=1$ otherwise (thus avoiding potential problems with non-unique decimal expansions, which end in a sequence of zeroes or nines). This seems to define a computable number which at the same time is uncomputable because it is different from any number that $A_i$ defines. The trouble with this "proof" is that it ignores the termination issues, and thus $a_{i,n}$ isn't even well-defined. You could make several attempts to repair this argument, but you will fail one way or the other. For example, if you say that all $A_i$s that do not terminate on some input $n$ are to be skipped in the enumeration, $i\mapsto a_{i,i}$ is no longer computable because you would have to filter the faulty algorithms, which you can't by the undecidability of the halting problem. If you just define $a_{i,n}$ as zero if $A_i$ does not terminate for the input $n$, then again $(i,n)\mapsto a_{i,n}$ is not computable. So the undecidability of the halting problem is the real issue here, and it implies other notable properties of computable numbers, for example, you can't computably decide if two computable numbers are equal. (The original last paragraph before the edit, which I will keep here to understand the comments below, was: So the undecidability of the halting problem is the real issue here, and it makes the very notion of computable numbers rather worthless in my opinion (you can't even computably decide if two computable numbers are equal, again because of the halting problem).) - 1 I think "worthless" is an excessively strong claim to make. Computable numbers are quite important in the study of theoretical computability, and "computable mathematics" has close connections to both constructive mathematics and reverse mathematics. The fact that the computable reals don't have every conceivable nice property is simply one of the things that makes the theory more complex.
The idea of computable real numbers goes all the way back to Turing, and it has been studied in depth by many strong mathematicians since then. That on its own suggests computable numbers have some value. – Carl Mummert Mar 22 '11 at 13:44 @Carl Mummert: You're right; that was thoughtless of me. I will edit the answer. – Florian Mar 22 '11 at 14:29
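To make the dependence on the enumeration concrete, here is a small sketch of the point made in the accepted answer (my own illustration; the particular enumeration `f` is a toy assumption): if the double sequence of digits $f(i,n)$ is itself computable, then the diagonal real is computable too, e.g. via the explicit choice $b_n = 2$ if $f(n,n)=1$ and $b_n = 1$ otherwise. The whole subtlety is that no computable $f$ of this kind can list exactly the computable reals.

```python
from fractions import Fraction

def diagonal_digit(f, n):
    """n-th digit of the diagonal real b, computed relative to the enumeration f.
    f(i, n) is assumed to return the n-th decimal digit of the i-th listed real."""
    return 2 if f(n, n) == 1 else 1

# A toy computable enumeration: the i-th listed real is i/(i+1) in [0, 1).
def f(i, n):
    return int(Fraction(i, i + 1) * 10 ** (n + 1)) % 10

print([diagonal_digit(f, n) for n in range(10)])
# b is perfectly computable, yet guaranteed to differ from every real listed by f.
```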
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309230446815491, "perplexity_flag": "head"}
http://mathoverflow.net/questions/73178/for-the-given-map-g-arising-in-the-context-of-hyperbolic-geometry-is-g-w2
For the given map $G$ arising in the context of Hyperbolic Geometry, is $|G_w|^2 - |G_{\bar{w}}|^2$ bounded away from zero? Let $h: S^1 \to S^1$ be a fixed orientation-preserving homeomorphism, let $p(z,t) = \frac{1}{2\pi}\frac{1-|z|^2}{|z-t|^2}$ denote the Poisson kernel, and let $D$ denote the open unit disk in $\mathbb{C}$. Let us consider the function $G: D\times D \to \bar{D}$ given by: $G (z,w) = \int_{S^1} \frac{h(t)-w}{1-\bar{w}h(t)} p(z,t) |dt|$. Consider the quantity $|G_w|^2 - |G_{\bar{w}}|^2$, where a direct computation shows that $G_w(z,w) = \int_{S^1} \frac{-1}{1- \bar{w}h(t)} p(z,t)|dt|$ and $G_{\bar{w}}(z,w)= \int_{S^1}\frac{h(t)-w}{1-\bar{w}h(t)}\cdot\frac{-h(t)}{1-\bar{w}h(t)} p(z,t)|dt|$. I was wondering whether there is a condition on $h: S^1 \to S^1$ [for example, $h$ is real-analytic, smooth, bi-Lipschitz, Lipschitz, or Hölder] such that $|G_w(z,w)|^2 - |G_{\bar{w}}(z,w)|^2 \geq L(h) > 0$ for all $(z,w)$, where $L(h)$ is a positive constant depending on $h$ only. Any books/references/papers would be greatly appreciated! Thank you! -
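Not an answer, but the quantity is easy to probe numerically: the sketch below (my own; the particular diffeomorphism $h(e^{i\theta}) = e^{i(\theta + 0.3\sin\theta)}$ and the sample points are arbitrary choices made only for illustration) just discretizes $S^1$ and evaluates the stated integral formulas for $G_w$ and $G_{\bar{w}}$.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
dtheta = theta[1] - theta[0]
t = np.exp(1j * theta)

# an orientation-preserving circle diffeomorphism (derivative 1 + 0.3*cos(theta) > 0)
h = np.exp(1j * (theta + 0.3 * np.sin(theta)))

def poisson(z):
    return (1 - abs(z) ** 2) / (2 * np.pi * np.abs(z - t) ** 2)

def G_w(z, w):
    return np.sum(-1 / (1 - np.conj(w) * h) * poisson(z)) * dtheta

def G_wbar(z, w):
    return np.sum((h - w) / (1 - np.conj(w) * h) * (-h) / (1 - np.conj(w) * h)
                  * poisson(z)) * dtheta

for z, w in [(0.0, 0.0), (0.3 + 0.2j, 0.1j), (0.7j, 0.5)]:
    print(z, w, abs(G_w(z, w)) ** 2 - abs(G_wbar(z, w)) ** 2)
```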
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8778793811798096, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/quantum-information+bells-inequality
# Tagged Questions 2answers 409 views ### Interpretation of “superqubits” Two very intriguing papers recently appeared on the arXiv, claiming that one can use "superqubits" -- a supersymmetric generalization of qubits -- to violate the Bell inequality by more than standard ... 1answer 49 views ### States diagonal in the tensor product of Bell states. Bell-diagonal states are 2-qubit states that are diagonal in the Bell basis. Since those states lie in $\mathbb{C}^{2} \otimes \mathbb{C}^{2}$, the Peres-Horodecki criterion is a sufficient condition ... 0answers 212 views ### Bell polytopes with nontrivial symmetries Take $N$ parties, each of which receives an input $s_i \in {1, \dots, m_i}$ and produces an output $r_i \in {1, \dots, v_i}$, possibly in a nondeterministic manner. We are interested in joint ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.886424720287323, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/17206-counting-sets.html
# Thread: 1. ## Counting Sets Determine with proof the number of ordered triples $(A_1, A_2, A_3)$ of sets which have the property (i) $A_1 \cup A_2 \cup A_3 = \{1,2,3,4,5,6,7,8,9,10 \}$ and (ii) $A_1 \cap A_2 \cap A_3 = \emptyset$. So basically this means that all the sets are disjoint. 2. So basically this means that all the sets are disjoint. Not necessarily. $A=\{1\},B=\{1,...,9\}, C=\{10\}$. But that actually makes the problem easier! Originally Posted by tukeywilliams Determine with proof the number of ordered triples $(A_1, A_2, A_3)$ of sets which have the property (i) $A_1 \cup A_2 \cup A_3 = \{1,2,3,4,5,6,7,8,9,10 \}$ and (ii) $A_1 \cap A_2 \cap A_3 = \emptyset$. Think of a Venn diagram, see below. Now the numbers $\{1,2,3,4,5,6,7,8,9,10\}$ can be placed anywhere except into the shaded area. So we are placing $10$ "marbles" into 6 "boxes" (because there are 7 sections and we omit the middle section). The formula is: ${{6+10-1}\choose 10}$ 3. Could you also use matrices to solve this problem? 4. Originally Posted by tukeywilliams Could you also use matrices to solve this problem? I really have no idea; I am not a Combinatorics expert. Perhaps you can phrase this problem as a Graph Theory problem and then arrive at your answer by computing the adjacency matrix. But I have no idea how to do that. 5. I was thinking that we could somehow use a binary relation. So there is some type of mapping (maybe a bijection?) between $A_1 \cup A_2 \cup A_3 = \{1, \ldots, 10\}$, $A_1 \cap A_2 \cap A_3 = \emptyset$ and some type of $10 \times 3$ matrix with $0,1$ so that there are no rows that are $\{000\}$ or $\{111\}$. 6. I think there are $6$ possibilities for each row and therefore $6^{10}$ total possible triples? 7. Originally Posted by tukeywilliams I think there are $6$ possibilities for each row and therefore $6^{10}$ total possible triples? On a strict reading of this problem as stated, I agree that the answer is $6^{10}$. However, are you sure that you have stated this question correctly? As stated, the question has several different readings. I have seen this sort of question many times: "How many ways can {0,1,2,…,8,9} be partitioned into three non-empty, pair-wise disjoint sets?" Could that be what was meant by this question?
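For what it's worth, the $6^{10}$ count (each of the $10$ elements independently gets one of the $2^3 - 2 = 6$ admissible membership patterns: in at least one $A_i$, but not in all three) can be sanity-checked by brute force on a smaller ground set. A quick sketch with $\{1,2,3,4\}$ standing in for $\{1,\dots,10\}$:

```python
from itertools import product

ground = [1, 2, 3, 4]                      # stand-in for {1, ..., 10}
subsets = [frozenset(x for x, keep in zip(ground, mask) if keep)
           for mask in product([0, 1], repeat=len(ground))]

count = 0
for A1, A2, A3 in product(subsets, repeat=3):
    if A1 | A2 | A3 == set(ground) and not (A1 & A2 & A3):
        count += 1

print(count, 6 ** len(ground))             # both equal 1296
```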
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9642062187194824, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/66576-definite-integral.html
# Thread: 1. ## Definite Integral If the definite integral from -2 to 6 of f(x)dx=10 and the definite integral from 2 to 6 of f(x)dx=3, then the definite integral from 2 to 6 of f(x-4)dx= ? I don't understand how to solve definite integrals when the function has something more than just x inside the parentheses, such as f(4-x). 2. Originally Posted by frog09 If the definite integral from -2 to 6 of f(x)dx=10 and the definite integral from 2 to 6 of f(x)dx=3, then the definite integral from 2 to 6 of f(x-4)dx= ? I don't understand how to solve definite integrals when the function has something more than just x inside the parentheses, such as f(4-x). Here is a hint: what does substituting $4-x$ instead of $x$ do to the function? An easy example to find out is to graph $f(x) = x^2$ and then graph $f(x-4) = (x-4)^2$. Compare $\int_2^6 x^2 dx$ and $\int_6^{10} (x-4)^2 dx$ Do you see why? I used a change of variables for the second integral, where $y=x-4$, $y+4=x$ and $dy = dx$. Once you figure that out, recall that the integral is additive over adjacent intervals: $\int_a^c f(x) dx = \int_a^b f(x) dx + \int_b^c f(x) dx$ where $a<b<c \in \mathbb{R}$ You should get the answer to be 7: $\int_2^6 f(x-4) dx = 7$ 3. Originally Posted by frog09 If the definite integral from -2 to 6 of f(x)dx=10 and the definite integral from 2 to 6 of f(x)dx=3, then the definite integral from 2 to 6 of f(x-4)dx= ? I don't understand how to solve definite integrals when the function has something more than just x inside the parentheses, such as f(4-x). Hint: Let $t = 4-x$ in $\int^6_2 f(4-x)\;dx.$ (Too slow) 4. Originally Posted by frog09 If the definite integral from -2 to 6 of f(x)dx=10 and the definite integral from 2 to 6 of f(x)dx=3, then the definite integral from 2 to 6 of f(x-4)dx= ? I don't understand how to solve definite integrals when the function has something more than just x inside the parentheses, such as f(4-x). $\int_{-2}^6 f(x) \, dx = 10$ $\int_2^6 f(x) \, dx = 3$ $\int_2^6 f(x-4) \, dx = \, ?$ use the method of substitution ... $u = x - 4$ $du = dx$ substitute and reset the limits of integration ... $\int_2^6 f(x-4) \, dx$ $\int_{-2}^2 f(u) \, du$ $\int_{-2}^6 f(x) \, dx - \int_2^6 f(x) \, dx = \int_{-2}^2 f(x) \, dx$ $10 - 3 = 7$ since $\int_{-2}^2 f(u) \, du = \int_{-2}^2 f(x) \, dx$ $\int_{-2}^2 f(u) \, du = \int_2^6 f(x-4) \, dx = 7$ 5. LOL @ danny arrigo Sorry 6. I was re-checking this problem and realized I copied down the last part wrong. It is really supposed to be: $\int_2^6 f(4-x) \, dx = ?$ I substitute $u=4-x$ and get $-du=dx$, and get the new limits as $\int_2^{-2} f(u) \, du$ which would be $-\int_{-2}^2 f(u) \, du$ does that $-du$ change the problem? or would it still be 7? 7. Originally Posted by frog09 I was re-checking this problem and realized I copied down the last part wrong. It is really supposed to be: $\int_2^6 f(4-x) \, dx = ?$ I substitute $u=4-x$ and get $-du=dx$, and get the new limits as $\int_2^{-2} f(u) \, du$ which would be $-\int_{-2}^2 f(u) \, du$ does that $-du$ change the problem? or would it still be 7? You would get a negative from the $dx$, so your second-last line should be $-$ and the last line should be $+$.
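To see the answer with a concrete function (a step function chosen only so that the two stated conditions hold; any other such $f$ gives the same values): since $\int_{-2}^{6} f = 10$ and $\int_{2}^{6} f = 3$ force $\int_{-2}^{2} f = 7$, both $\int_2^6 f(x-4)\,dx$ and $\int_2^6 f(4-x)\,dx$ come out to $7$.

```python
from scipy.integrate import quad

# value 7/4 on [-2, 2) and 3/4 on [2, 6], so the two given conditions hold:
# 4*(7/4) + 4*(3/4) = 10 and 4*(3/4) = 3
def f(x):
    return 7/4 if x < 2 else 3/4

print(quad(f, -2, 6, points=[2])[0])        # ~10
print(quad(f, 2, 6)[0])                     # ~3
print(quad(lambda x: f(x - 4), 2, 6)[0])    # ~7
print(quad(lambda x: f(4 - x), 2, 6)[0])    # ~7
```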
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9270421862602234, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/89808/what-does-it-mean-for-a-function-to-be-a-solution-of-a-differential-equation/89812
# What does it mean for a function to be a solution of a differential equation? What does it mean for a function to be a solution of a differential equation? I think I understand, but I still don't have a simple or intuitive understanding. - 1 What does it mean for a number to be a solution of an equation? – Neal Dec 9 '11 at 3:43 A number is a solution when it fits within the range of answers that the function generates. For differential equations an equation is a solution when the equation is created satisfies the derivative's range. – user17366 Dec 9 '11 at 3:53 2 Not quite. An equation is a statement about numbers involving an unknown. A number solves an equation if, when substituted for the unknown, it makes the statement true. Likewise, a differential equation is a statement about functions involving an unknown function. A function solves a differential equation if, when substituted, the statement is true. – Neal Dec 9 '11 at 12:16 Simple? If you "plug in" the function to the differential equation and it gives an equality, then it's a solution! – The Chaz 2.0 Dec 14 '11 at 2:28 ## 2 Answers Suppose that $\Omega \subset \mathbb{R}^{n+1}$ and that you have some function $F: \Omega \to \mathbb{R}$. Then $F$ defines the differential equation $$F(x, y, y', ..., y^{(n-1)}) = 0$$ A solution to this differential equation is a function $y: (a,b) \to \mathbb{R}$ defined on some open interval (which could be all of $\mathbb{R}$) such that $$(x, y(x), y'(x), ..., y^{(n-1)}(x)) \in \Omega$$ for all $x \in (a,b)$ and $$F(x, y(x), y'(x), ..., y^{(n-1)}(x)) = 0$$ for all $x \in (a,b)$. Since every ordinary differential equation can be written in such a way this defines the notion of solution unambiguously. Intuitively, you can think of this as "$F$ being $0$ on the graph of $y$ and its derivatives." In practice you usually add some requirements on $F$ and $\Omega$, for example it is usual to require that $\Omega$ is a region, and that $F$ is continuous and invertible around $0$. To give an example, if you have the equation $$y''(x) = 2x^3y(x) + y'(x)^2$$ then you can take $$F(x_1, x_2, x_3, x_4) = 2x_1^3x_2 + x_3^2 - x_4$$ - Solving $x+1=3$ means finding a value for $x$ that satisfies the equation $x+1=3$. In this case, $x=2$ does the job because $2+1=3$. Now, given a differential equation such as: $\frac{dy}{dx}= 2x$ ..........(1) a solution to this equation is a function (call it $y(x)$). Of course not any function will do. The correct function must satisfy the equation we have in (1) above. for this particular case, y(x) can be equal to $x^2$. Why? Because the derivative of $x^2$ with respect to $x$ is $\frac{dy}{dx}= 2x$ which is what we have in our equation. But wait, what about $y(x)=x^2$+5? In fact, this is another solution. As you can see, in this case we end up with many solutions all of the form: $y(x)=x^2+k$ where k is a constant. This is because, all of such functions satisfy our differential equation. -
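The "substitute and check" idea is mechanical enough that a computer algebra system can carry it out; here is a short sketch (my own, using the same example $\frac{dy}{dx} = 2x$ as above):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), 2 * x)               # the equation dy/dx = 2x

candidate = x**2 + C                            # the claimed family of solutions
print(sp.simplify(candidate.diff(x) - 2 * x) == 0)    # True: every value of C works

bad = x**3                                      # a function that is NOT a solution
print(sp.simplify(bad.diff(x) - 2 * x) == 0)           # False

print(sp.dsolve(ode))                           # Eq(y(x), C1 + x**2)
```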
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413197636604309, "perplexity_flag": "head"}
http://mathoverflow.net/questions/92736/two-curious-asymptotic-results-for-dimensions-of-type-a-objects/92772
## Two curious asymptotic results for dimensions of type A objects Let $V_{\lambda}$ and $W_{\lambda}$ be the irreducible representations of $S(n)$ and $\mathfrak{su}(N,\mathbb{C})$ associated to the partition $\lambda \in \mathbb{Y}$ of size $| \lambda |=n$ and length $l(\lambda) \leq N$. The following limit $$\frac{\dim V_{\lambda}}{n!} = \lim_{N \rightarrow \infty} \frac{\dim W_{\lambda}}{N^{n}}$$ follows immediately from the well-known hook length and hook content formulas $$\dim V_{\lambda} = \frac{n!}{\prod_{\square \in \lambda} h(\square)} \ \ \ \ \dim W_{\lambda} = \prod_{\square \in \lambda} \frac{N + c(\square)}{h(\square)}$$ which can be found in Macdonald's book. Notice that $n! = \dim_{\mathbb{C}} \mathbb{C}[S(n)]$ and $N^n = \dim_{\mathbb{C}} (\mathbb{C}^N)^{\otimes n}$, so what we're seeing is that as $N \rightarrow \infty$, the relative multiplicity of $V_{\lambda}$ in Schur-Weyl duality approaches the relative multiplicity of $V_{\lambda}$ in the regular representation. Does anyone have a good feeling for why this is true? Also, let us not forget the Peter-Weyl theorem! If for a compact group $G$ we write $G^{\vee}$ for its set of finite dimensional irreducible representations over $\mathbb{C}$, we have $$L^2(SU(N)) = \widehat{\bigoplus_{\lambda \in SU(N)^{\vee}}} W_{\lambda} \boxtimes W_{\lambda}$$ $$(\mathbb{C}^N)^{\otimes n}=\bigoplus_{\lambda \in SU(N)^{\vee} \cap S(n)^{\vee}} V_{\lambda} \boxtimes W_{\lambda}$$ $$\mathbb{C}[S(n)] = \bigoplus_{\lambda \in S(n)^{\vee}} V_{\lambda} \boxtimes V_{\lambda}$$ The limit we discussed above relating the second to the third line here actually also happens when we pass from the first to the second line: the "relative multiplicity" of $W_{\lambda}$ in its regular representation approaches the relative multiplicity of $W_{\lambda}$ in Schur-Weyl duality. Can anyone give me some intuition for what's going on here + why I might expect such a result? - Also, something I really want is some representation theoretic place where I might find a direct sum of $W_{\lambda, N} \boxtimes W_{\lambda, M}$ over all $\lambda \in \mathbb{Y}$ with length $l(\lambda) \leq \min (N,M)$ - a sort of "unbalanced" Peter-Weyl theorem for $SU(N)$ on the left and $SU(M)$ on the right. Any thoughts? – Alexander Moll Mar 31 2012 at 7:09 2 The latter thing is the polynomial functions on N by M matrices. The magic word is Howe duality. – Ben Webster♦ Mar 31 2012 at 13:43 ## 2 Answers This is an answer to Alexander's combinatorial reformulation of the question in comments to Bruce's answer. dim $V_\lambda$/$n$! is the chance that you will get a standard Young tableau if you assign the values 1 to $n$ to the boxes of a tableau of shape $\lambda$ according to a random permutation. dim $W_\lambda/N^n$ is the chance that you will get a semi-standard Young tableau if you assign a value in $[1,N]$ to each box in a tableau of shape $\lambda$. Think of the second procedure in the following way: first choose a set of $n$ numbers from 1 to $N$ to serve as entries, and then assign them to boxes. If the entries are all different, then the chances that what you get is a semistandard tableau is the same as the chance that you get a standard tableau starting with 1..$n$. As $N$ tends to infinity, the chance that you will choose two entries the same becomes vanishingly small, so in the limit, dim $W_\lambda/N^n$ tends to dim $V_\lambda$/$n$!.
- This is a very nice argument Hugh - thank you! – Alexander Moll Apr 3 2012 at 22:27 The $q$-analogue of $\dim W_\lambda$ is the specialisation $s_\lambda(q,q^2,\ldots ,q^n)$ and the $q$-analogue of $\dim V_\lambda$ is the principal specialisation $s_\lambda(q,q^2,\ldots )$ which is manifestly given by taking $n\rightarrow\infty$. In fact in Enumerative Combinatorics II by Stanley this is how the hook length formula for $\dim V_\lambda$ is derived. - Sorry - I'm familiar with this approach to calculating the limit, and should have included it in my original post, where I was aiming for more of a representation theoretic "Why?". Maybe we can ask this question combinatorially: is there a reason why the number of semi-standard Young tableaux of shape $\lambda$ with entries from $\{1, \ldots, N\}$ after dividing by $N^n$ is asymptotic to the number of standard Young tableaux with shape $\lambda$ if we divide by $n!$? – Alexander Moll Apr 1 2012 at 0:16 i.e. is there a "bijective" proof of the limit above? – Alexander Moll Apr 1 2012 at 0:16
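The limit can also be watched numerically from the two product formulas; a small sketch (my own code, using the standard hook length and hook content formulas quoted in the question):

```python
from math import factorial, prod

def hooks_and_contents(shape):
    """Hook lengths and contents of the Young diagram with row lengths `shape`."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    hooks = [(shape[i] - j) + (cols[j] - i) - 1 for i, j in cells]
    contents = [j - i for i, j in cells]
    return hooks, contents

def dim_V(shape):                       # hook length formula: n! / product of hooks
    h, _ = hooks_and_contents(shape)
    return factorial(sum(shape)) // prod(h)

def dim_W(shape, N):                    # hook content formula: product of (N + c)/h
    h, c = hooks_and_contents(shape)
    return prod(N + ci for ci in c) / prod(h)

shape = (3, 2, 1)
n = sum(shape)
print(dim_V(shape) / factorial(n))              # 16/720 = 0.0222...
for N in (10, 100, 1000, 10000):
    print(N, dim_W(shape, N) / N ** n)          # approaches the same value
```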
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 5, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246037602424622, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Principles_of_Finance/Section_1/Chapter_6/Corp/PVGO
# Principles of Finance/Section 1/Chapter 6/Corp/PVGO When valuing a company's stock, there is an important distinction which must be made between it and an ordinary perpetuity. A company has the capacity to grow, and this must be reflected in the current price. If a stock price were to be valued as a simple no-growth perpetuity, it would look something like this: $P_0 = \frac{D_1}{r}$ where $P_0$ is the price at time 0, $D_1$ is the dividend at time 1, and $r$ is the required rate of return. (With a constant growth rate $g$ this becomes the Gordon growth formula $P_0 = \frac{D_1}{r-g}$.) However, the no-growth valuation clearly does not reflect the level at which a company is expected to grow. This is especially pertinent to start-up companies, which may not pay any dividends for a long time but are expected to become highly profitable in the future. A formula which takes this into consideration is as follows: $P_0 = \frac{EPS_1}{r} + PVGO$ where $EPS_1$ is next year's expected earnings per share, $EPS_1/r$ is the value of the firm if it paid out all earnings with no growth, and $PVGO$ is the present value of growth opportunities. The growth rate itself is $g = \text{plowback} \times ROE$, the earnings retention (plowback) ratio times the return on equity.
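A worked toy example of the decomposition (all numbers below are made up purely for illustration):

```python
# Hypothetical inputs, chosen only to illustrate the formulas above
EPS1 = 5.00        # expected earnings per share next year
r = 0.12           # required rate of return
plowback = 0.40    # fraction of earnings reinvested
ROE = 0.15         # return on equity earned on reinvested earnings

g = plowback * ROE                  # growth rate: 0.06
D1 = EPS1 * (1 - plowback)          # next year's dividend: 3.00

no_growth_value = EPS1 / r          # value if all earnings were paid out: ~41.67
price = D1 / (r - g)                # constant-growth (Gordon) price: 50.00
PVGO = price - no_growth_value      # present value of growth opportunities: ~8.33

print(g, D1, no_growth_value, price, PVGO)
```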
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.971798837184906, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/63559/how-to-create-a-one-to-one-correspondence-between-two-sets
# How to create a one to one correspondence between two sets? I am stuck with: "Give a one to one correspondence between Z+ and the positive even integers." Now, I don't have an idea how to show that there is a one to one correspondence between the two. I would be thankful for some hints. - 2 Please avoid using images in that fashion. How hard is it to type that in? – Aryabhata Sep 11 '11 at 14:50 btw, these (this one and your other recent question) look like homework. If so, I would encourage you to tag them as such. – Aryabhata Sep 11 '11 at 14:52 ## 2 Answers When asked to prove that "$\exists$ a ...", what you are really doing is finding whatever it is that you need to prove exists. For example, in your question you must prove that there exists a one-to-one correspondence between $\mathbb{Z}^{+}$ and the positive even integers, which I will now denote $\mathbb{Z}_e$, so we should attempt to find a map which takes $\mathbb{Z}^{+} \rightarrow \mathbb{Z}_e$. So how should we go about finding one? Well, first let's think about the formal definition of "even". I would say an integer $x$ is even if $x = 2k$ for some $k\in \mathbb{Z}$, so the set of positive even integers is $\mathbb{Z}_e = \{x = 2k : k\in\mathbb{Z}^{+}\}$. Now, once we have actually formalized what a positive even integer is, it is not hard to think of a map; for example take: $f: \mathbb{Z}^{+} \rightarrow \mathbb{Z}_e$ defined by $k \mapsto 2k$. Now we've got a map we think will work, and we just need to check that it is one-to-one. Suppose $f(r) = f(s)$. Then $2r = 2s$, but this quickly implies that $r = s$, so the map is one-to-one, as desired. Furthermore the map is also onto, because $\mathbb{Z}_e = \{x = 2k : k\in \mathbb{Z}^{+}\}$ is the set of integers of the form $2k$ by definition. - Consider the function $f(x)=2x$. If $f(a)=f(b)$ then $2a=2b$, so $a=b$ (because $2\neq 0$); this proves the function is injective. To prove that it is surjective: a positive even integer is of the form $2k$ with $k$ a positive integer, and then $f(k)=2k$. -
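A two-line check on a finite truncation already shows the pattern (this is only a sanity check on the first $n$ values, of course; injectivity and surjectivity for the full infinite sets are argued as above):

```python
n = 1000
image = [2 * k for k in range(1, n + 1)]             # f(k) = 2k

print(len(set(image)) == len(image))                  # True: no two inputs collide
print(image == list(range(2, 2 * n + 1, 2)))          # True: exactly the first n positive evens
```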
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587399959564209, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/01/25/distinguishing-maxima-and-minima/?like=1&_wpnonce=81a70c627a
The Unapologetic Mathematician Distinguishing Maxima and Minima From Heine-Borel we know that a continuous function $f$ on a closed interval $\left[a,b\right]$ attains a global maximum and a global minimum. From Fermat we know that any local (and in particular any global) extremum occurs at a critical point — a point where $f'(x)=0$, or where $f$ has no derivative at all. But once we find these critical points, how can we tell maxima from minima? The biggest value of $f$ at a critical point is clearly the global maximum, and the smallest is just as clearly the minimum. But what about all the ones in between? Here’s where those consequences of the mean value theorem come in handy. For simplicity, let’s assume that the critical points are isolated. That is, each one has a neighborhood in which it’s the only critical point. Further, let’s assume that $f'(x)$ is continuous wherever it exists. Now, to the left of any critical point we’ll have a stretch where $f$ is differentiable (or else there would be another critical point there) and $f'(x)$ is nonzero (ditto). Since the derivative is continuous, it must either be always positive or always negative on this stretch, because if it were sometimes positive and sometimes negative the intermediate value theorem would give us a point where it’s zero. If the derivative is positive, our corollaries of the mean value theorem tell us that $f$ increases as we move in towards the point, while if the derivative is negative it decreases into the critical point. Similarly, on the right we’ll have another such stretch telling us that $f$ either increases or decreases as we move away from the critical point. So what’s a local maximum? It’s a critical point where the function increases moving into the critical point and decreases moving away! That is, if near the critical point the derivative is positive on the left and negative on the right, we’ve got ourselves a local maximum. If the derivative is positive on the right and negative on the left, it’s a local minimum. And if we find the same sign on either side, it’s neither! Notice that this is exactly what happens with the function $f(x)=x^3$ at its critical point. Also, we don’t have to worry about where to test the sign of the derivative, because we know that it can only change sign at a critical point. In fact, if we add a bit more to our assumptions we can get an even nicer test. Let’s assume that the function is “twice-differentiable” — that $f'(x)$ is itself a differentiable function — on our interval. Then all the critical points happen where $f'(x)=0$. Even better: if $f'$ changes sign as we pass through the critical point (indicating a local extremum), then $f'$ is either increasing or decreasing there, and this is reflected in its derivative $f''(x)$ at the critical point. If $f''(x)>0$ then the sign of $f'$ changes from negative to positive and we must be looking at a local minimum. On the other hand, if $f''(x)<0$ then we’ve got a local maximum. Unfortunately, if $f''(x)=0$ we don’t really get any information from this test and we have to fall back on the previous one. Posted by John Armstrong | Analysis, Calculus | 2 Comments » 1. [...] along all of these lines we have a local minimum at the origin by the second derivative test. And along the -axis, we have , which has the origin as a local [...] Pingback by | November 23, 2009 | Reply 2. [...] function . That is, a point where the differential vanishes.
We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) [...] Pingback by | November 24, 2009 | Reply
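For readers who want to experiment, here is a small sketch of the two tests in code; sympy is assumed, and the polynomial is my own example rather than one from the post (for $f(x)=x^3$ the second derivative test is inconclusive at $0$, exactly as noted above):

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 - 2*x**2                      # example function; x**3 would hit the inconclusive case
f1, f2 = sp.diff(f, x), sp.diff(f, x, 2)

for c in sp.solve(sp.Eq(f1, 0), x):    # critical points, where f'(c) = 0
    curvature = f2.subs(x, c)
    if curvature > 0:
        kind = "local minimum"         # f' passes from negative to positive
    elif curvature < 0:
        kind = "local maximum"         # f' passes from positive to negative
    else:
        kind = "inconclusive: check the sign of f' on each side instead"
    print(c, kind)
```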
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9095962047576904, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/32263/why-do-wheels-appear-to-revolve-opposite-to-the-direction-they-are-rotating?answertab=active
# Why do wheels appear to revolve opposite to the direction they are rotating? When viewing cars that are driving alongside us, sometimes their wheels appear to be turning backwards even though they are traveling in the same direction as our car. Why do they look that way? - 1 – Qmechanic♦ Jul 17 '12 at 21:10 I don't think that answer is related; the sun is not a strobe light. (Or does this phenomenon only happen at night?) – Peter Shor Jul 17 '12 at 22:14 Does this happen with the human eye? I have never noticed aliasing with natural vision, only with cameras. Did you see it yourself, or in a video of the neighboring car? – Ron Maimon Jul 18 '12 at 2:01 I've seen it with my naked eyes, if you can say that when you wear prescription eyeglasses... – Major Stackings Jul 18 '12 at 3:03 ## 1 Answer The issue appears to be rather complex, so I do not aim to provide an exhaustive answer. At a toy-model level it is reasonable to model the eye as a "camera". Specifically, let us assume that a human eye "samples" at a maximum frequency of $\nu$, so that we may make use of the Nyquist-Shannon sampling theorem. Given an instantaneous angular velocity $\omega$, if the wheel has $n$ spokes, then the highest-frequency component of the pattern is $\frac{n\omega}{2\pi}$ (i.e., in a full rotation, $n$ spokes pass a given angle), and aliasing sets in once this exceeds half the sampling rate, $\nu/2$. Therefore, writing $\omega = \frac{v}{r}$ with $v$ the car speed and $r$ the wheel radius (here I am assuming pure rolling of the wheel), when $$v > \frac{\pi\nu r}{n}$$ we may assume that some kind of aliasing takes place, i.e., you would be unable to reconstruct the wheel motion correctly. So assuming that a typical wheel has 10 spokes and a radius of about 0.3 meters, and that your eye samples at ~30 Hz (a typical frame rate of first-person-shooter videogames, so it may be used as an upper limit, since there one has a complete illusion of movement), a rule-of-thumb calculation yields about 3 meters/second as a reasonable threshold speed for aliasing phenomena. - 1 Human eyes have a response of 10-12fps in the high acuity region – Martin Beckett Jul 18 '12 at 4:19
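A one-line version of this rule-of-thumb calculation, using the same illustrative parameters assumed in the answer:

```python
from math import pi

def aliasing_speed_threshold(sample_rate_hz, wheel_radius_m, n_spokes):
    """Speed above which the spoke-passing frequency exceeds half the sampling rate: v > pi*nu*r/n."""
    return pi * sample_rate_hz * wheel_radius_m / n_spokes

# Illustrative numbers from the answer: ~30 Hz effective sampling, 0.3 m radius, 10 spokes.
print(aliasing_speed_threshold(30, 0.3, 10))   # ~2.83 m/s, roughly 10 km/h
```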
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437200427055359, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/34947/when-would-you-read-a-paper-claiming-to-have-settled-a-long-open-problem-like-p/34957
## When would you read a paper claiming to have settled a long open problem like $P$ vs. $NP$? [closed] From time to time, people announce papers claiming to have settled long open problems like $P$ vs. $NP$. There have been many attempts, reading them is time-consuming, and finding bugs in their arguments is not easy. This brings up the following question: When would you read a paper claiming to have settled a famous long open problem like $P$ vs. $NP$? What are your criteria to consider such an announcement as serious? EDIT: I am mostly interested in the case that the paper is in your area and is not written by a crank but by a mathematician with previous publications in reputable journals (although not necessarily in the same area or a related one). - 5 I think that this is a moral duplicate of mathoverflow.net/questions/6912 . At least, I would answer it in exactly the same way. – Greg Kuperberg Aug 8 2010 at 21:02 Thank you Greg. I think that they are quite different: that one asks if people really check these claims, this one is asking when they would check them. Your answer there is the kind of answer I am looking for: "Learning from reading it". But I want to hear other factors also. By the way, I would be happy to merge this question with that one and make that a community wiki if it is possible to do so. (There is a new claim of $P \neq NP$, and I have heard that more than one expert in complexity theory considers this one to be serious.) – Kaveh Aug 8 2010 at 21:16 There is also an implicit question: criteria that would make you not read the paper, e.g., if the author is not a mathematician. – Kaveh Aug 8 2010 at 21:29 I am mostly interested in the case that the paper is in your area and is not written by a crank. – Kaveh Aug 8 2010 at 21:30 4 There was a blog post on Scott Aaronson's blog a while ago. scottaaronson.com/blog/?p=304 – wood Aug 8 2010 at 22:45 ## 3 Answers When would you read a paper claiming to have settled a famous long open problem like $P$ vs. $NP$? If the paper claimed to resolve $P$ vs $NP$, I'd begin reading it right away. For instance, I'm currently looking at this paper. But that is only because I have a good chance of understanding the work. If the paper claimed to resolve any other Clay Millennium Prize problem, I'd defer to others. What are your criteria to consider such an announcement as serious? (a) It's not written in Microsoft Word. (b) The abstract, title, and opening paragraphs do not convey obvious misunderstandings. For $P$ vs $NP$ proofs, criteria (a) and (b) work about 99% of the time. (Seriously.) I'm still checking to see if the above link passes criterion (b). It's a long abstract. - 2 For the link above the first thing I checked was whether the paper cites Razborov-Rudich. – Qiaochu Yuan Aug 8 2010 at 22:40 1 Yep, that's important to check, for a proof that wants to go through random models of k-SAT... and it passes that check as well. – Ryan Williams Aug 8 2010 at 22:43 1 Dick's post: rjlipton.wordpress.com/2010/08/08/… – Kaveh Aug 9 2010 at 4:23 That paper in pdf format: hpl.hp.com/personal/Vinay_Deolalikar/Papers/… :) – Kaveh Aug 9 2010 at 7:09 Published in a respected journal?
Unless the solution claims to use mathematics where I have some particular expertise, that is probably the only place I would read such a paper. - 1 I think the problem with the first one is that it would be really hard to publish such a result in a reputable journal before people working in that area are convinced that it is correct. – Kaveh Aug 8 2010 at 21:25 On the other hand, I have heard from people that they would not consider a paper using usual well-known stuff as serious, the reason being that many experts in the area familiar with them were unable to solve it, and we also have some negative results that prove that using such and such techniques can not settle the question (based on some widely believed conjectures). – Kaveh Aug 8 2010 at 21:25 6 @Kaveh: when they decide whether to publish a paper in a reputable journal, they don't take a poll of researchers working in the area; they solicit one or two particular reserrachers in the area to act as referees, who then read the paper very carefully. So I suppose "When you have agreed to be a referee" is one sufficient reason to take a paper seriously. – Pete L. Clark Aug 8 2010 at 21:42 @Pete: I think what you say is correct for normal questions. But not for long open questions like Poincaré conjecture or $P$ vs. $NP$. The problem is that almost no expert in that area is ready to read say a hundred pages and spend time to find a bug in it or announce that it is correct. Let's consider Poincaré conjecture. Was Perelman work published in a reputable journal before people in the area were convinced that the solution is correct? – Kaveh Aug 8 2010 at 22:02 4 Kaveh, your assertion that "almost no expert in that area is ready to read say a hundred pages and spend time to find a bug in it or announce that it is correct" is wrong. Firstly, experts have read over long papers claiming proofs of big conjectures, even if the history was somewhat problematic: Bieberbach conjecture, Seifert conjecture, 4d smooth Poincaré conjecture, Fermat's last theorem, the Kepler conjecture, etc. Secondly, it's highly unusual $\textit{in mathematics}$ to have an expert publicly certify that a certain proof is correct. BTW, refereeing doesn't provide such a certification. – Victor Protsak Aug 8 2010 at 22:38 show 2 more comments just caught this at slashdot...thought I would make it my first post at MO. http://gregbaker.ca/blog/2010/08/07/p-n-np/ - 3 This came up in the comments to this (closed) question: mathoverflow.net/questions/34953/… – Yemon Choi Aug 9 2010 at 0:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9588196873664856, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Cdma
# Code division multiple access Code division multiple access (CDMA) is a channel access method used by various radio communication technologies. It should not be confused with the mobile phone standards called cdmaOne, CDMA2000 (the 3G evolution of cdmaOne) and WCDMA (the 3G standard used by GSM carriers), which are often referred to as simply CDMA, and use CDMA as an underlying channel access method. One of the concepts in data communication is the idea of allowing several transmitters to send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). This concept is called multiple access. CDMA employs spread-spectrum technology and a special coding scheme (where each transmitter is assigned a code) to allow multiple users to be multiplexed over the same physical channel. By contrast, time division multiple access (TDMA) divides access by time, while frequency-division multiple access (FDMA) divides it by frequency. CDMA is a form of spread-spectrum signalling, since the modulated coded signal has a much higher data bandwidth than the data being communicated. An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate. The technology of code division multiple access channels has long been known. In the USSR, the first work devoted to this subject was published in 1935 by professor Dmitriy V. Ageev.[1] It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow, made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station. LK-1 has a weight of 3 kg, 20–30 km operating distance, and 20–30 hours of battery life.[2][3] The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made the new experimental "pocket" model of mobile phone. This phone weighed 0.5 kg. To serve more customers, Kupriyanovich proposed the device, named by him as correllator.[4][5] In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg and was approximately 3 cubic meters in size[dubious ].
It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow and in 1970 Altai service was used in 30 USSR cities.[citation needed] ## Uses A CDMA2000 mobile phone • One of the early applications for code division multiplexing is in GPS. This predates and is distinct from its use in mobile phones. • The Qualcomm standard IS-95, marketed as cdmaOne. • The Qualcomm standard IS-2000, known as CDMA2000. This standard is used by several mobile phone companies, including the Globalstar satellite phone network. • The UMTS 3G mobile phone standard, which uses W-CDMA. • CDMA has been used in the OmniTRACS satellite system for transportation logistics. ## Steps in CDMA Modulation CDMA is a spread spectrum multiple access[6] technique. A spread spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code that has a narrow ambiguity function, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined via bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread spectrum signal is generated. The data signal with pulse duration of $T_{b}$ (symbol period) is XOR’ed with the code signal with pulse duration of $T_{c}$ (chip period). (Note: bandwidth is proportional to $1/T$ where $T$ = bit time) Therefore, the bandwidth of the data signal is $1/T_{b}$ and the bandwidth of the spread spectrum signal is $1/T_{c}$. Since $T_{c}$ is much smaller than $T_{b}$, the bandwidth of the spread spectrum signal is much larger than the bandwidth of the original signal. The ratio $T_{b}/T_{c}$ is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.[7] Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance will occur when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.[8] In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes). ## Code division multiplexing (Synchronous CDMA) Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, binary string 1011 is represented by the vector (1, 0, 1, 1). 
Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then $\scriptstyle\mathbf{a}\cdot\mathbf{b} \,=\, 0$ and: $\begin{align} \mathbf{a}\cdot(\mathbf{a}+\mathbf{b}) &= \|\mathbf{a}\|^2 &\quad\mathrm{since}\quad \mathbf{a}\cdot\mathbf{a}+\mathbf{a}\cdot\mathbf{b} &= \|a\|^2+0 \\ \mathbf{a}\cdot(-\mathbf{a}+\mathbf{b}) &= -\|\mathbf{a}\|^2 &\quad\mathrm{since}\quad -\mathbf{a}\cdot\mathbf{a}+\mathbf{a}\cdot\mathbf{b} &= -\|a\|^2+0 \\ \mathbf{b}\cdot(\mathbf{a}+\mathbf{b}) &= \|\mathbf{b}\|^2 &\quad\mathrm{since}\quad \mathbf{b}\cdot\mathbf{a}+\mathbf{b}\cdot\mathbf{b} &= 0+\|b\|^2 \\ \mathbf{b}\cdot(\mathbf{a}-\mathbf{b}) &= -\|\mathbf{b}\|^2 &\quad\mathrm{since}\quad \mathbf{b}\cdot\mathbf{a}-\mathbf{b}\cdot\mathbf{b} &= 0-\|b\|^2 \end{align}$ Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of four mutually orthogonal digital signals is shown in the figure. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95 64 bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes are orthogonal to one another, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded. ### Example An example of four mutually orthogonal digital signals. Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the picture on the left. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes, v, with only 2 bits. Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code, v, and a 0 bit is represented by a negative code, –v. For example, if v = (v0, v1) = (1, –1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, –v, v, v) = (v0, v1, –v0, –v1, v0, v1, v0, v1) = (1, –1, –1, 1, 1, –1, 1, –1). For the purposes of this article, we call this constructed vector the transmitted vector. Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical. Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component. 
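These encoding and decoding steps are easy to mirror in a few lines of code. The sketch below (numpy assumed; the function names are my own) uses the same two codes and data vectors as the worked tables that follow:

```python
import numpy as np

def spread(data_bits, code):
    """Map bits {0,1} to symbols {-1,+1} and spread each symbol by the user's code."""
    symbols = 2 * np.asarray(data_bits) - 1          # 1 -> +1, 0 -> -1
    return np.concatenate([s * np.asarray(code) for s in symbols])

def despread(raw_signal, code):
    """Correlate the received signal with one user's code, one symbol period at a time."""
    code = np.asarray(code)
    chunks = raw_signal.reshape(-1, len(code))       # one row per symbol period
    correlations = chunks @ code                     # dot product with the code
    return correlations, (correlations > 0).astype(int)

# Same codes and data as the worked example: two synchronised users share the channel.
code0, data0 = [1, -1], [1, 0, 1, 1]
code1, data1 = [1,  1], [0, 0, 1, 1]

raw = spread(data0, code0) + spread(data1, code1)    # signals simply add in the air
print(raw)                                           # [ 0 -2 -2  0  2  0  2  0]
print(despread(raw, code0))                          # correlations [ 2 -2  2  2] -> bits [1 0 1 1]
print(despread(raw, code1))                          # correlations [-2 -2  2  2] -> bits [0 0 1 1]
```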
If sender0 has code (1, –1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps: | | | | |------|------------------------------------------------------------------------------------|------------------------------------------------------------------------------------| | Step | Encode sender0 | Encode sender1 | | 0 | code0 = (1, –1), data0 = (1, 0, 1, 1) | code1 = (1, 1), data1 = (0, 0, 1, 1) | | 1 | encode0 = 2(1, 0, 1, 1) – (1, 1, 1, 1) = (1, –1, 1, 1) | encode1 = 2(0, 0, 1, 1) – (1, 1, 1, 1) = (–1, –1, 1, 1) | | 2 | signal0 = encode0 ⊗ code0 = (1, –1, 1, 1) ⊗ (1, –1) = (1, –1, –1, 1, 1, –1, 1, –1) | signal1 = encode1 ⊗ code1 = (–1, –1, 1, 1) ⊗ (1, 1) = (–1, –1, –1, –1, 1, 1, 1, 1) | Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal: (1, –1, –1, 1, 1, –1, 1, –1) + (–1, –1, –1, –1, 1, 1, 1, 1) = (0, –2, –2, 0, 2, 0, 2, 0) This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern, the receiver combines it with the codes of the senders. The following table explains how this works and shows that the signals do not interfere with one another: | | | | |------|------------------------------------------------------|-----------------------------------------------------| | Step | Decode sender0 | Decode sender1 | | 0 | code0 = (1, –1), signal = (0, –2, –2, 0, 2, 0, 2, 0) | code1 = (1, 1), signal = (0, –2, –2, 0, 2, 0, 2, 0) | | 1 | decode0 = pattern.vector0 | decode1 = pattern.vector1 | | 2 | decode0 = ((0, –2), (–2, 0), (2, 0), (2, 0)).(1, –1) | decode1 = ((0, –2), (–2, 0), (2, 0), (2, 0)).(1, 1) | | 3 | decode0 = ((0 + 2), (–2 + 0), (2 + 0), (2 + 0)) | decode1 = ((0 – 2), (–2 + 0), (2 + 0), (2 + 0)) | | 4 | data0=(2, –2, 2, 2), meaning (1, 0, 1, 1) | data1=(–2, –2, 2, 2), meaning (0, 0, 1, 1) | Further, after decoding, all values greater than 0 are interpreted as 1 while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, –2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 means that the sender did not transmit any data, as in the following example: Assume signal0 = (1, –1, –1, 1, 1, –1, 1, –1) is transmitted alone. The following table shows the decode at the receiver: | | | | |------|--------------------------------------------------------|-------------------------------------------------------| | Step | Decode sender0 | Decode sender1 | | 0 | code0 = (1, –1), signal = (1, –1, –1, 1, 1, –1, 1, –1) | code1 = (1, 1), signal = (1, –1, –1, 1, 1, –1, 1, –1) | | 1 | decode0 = pattern.vector0 | decode1 = pattern.vector1 | | 2 | decode0 = ((1, –1), (–1, 1), (1, –1), (1, –1)).(1, –1) | decode1 = ((1, –1), (–1, 1), (1, –1), (1, –1)).(1, 1) | | 3 | decode0 = ((1 + 1), (–1 – 1),(1 + 1), (1 + 1)) | decode1 = ((1 – 1), (–1 + 1),(1 – 1), (1 – 1)) | | 4 | data0 = (2, –2, 2, 2), meaning (1, 0, 1, 1) | data1 = (0, 0, 0, 0), meaning no data | When the receiver attempts to decode the signal using sender1's code, the data is all zeros, therefore the cross correlation is equal to zero and it is clear that sender1 did not transmit any data. ## Asynchronous CDMA See also: Direct-sequence spread spectrum and near-far problem When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. 
Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" (PN) sequences are used in asynchronous CDMA systems. A PN code is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These PN codes are used to encode and decode a user's signal in Asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These PN sequences are statistically uncorrelated, and the sum of a large number of PN sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a PN suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users. All forms of CDMA use spread spectrum process gain to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified PN sequence (code) are received, while signals with different codes (or the same code but a different timing offset) appear as wideband noise reduced by the process gain. Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for Asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power control scheme to tightly control each mobile's transmit power. ### Advantages of asynchronous CDMA over other techniques #### Efficient practical utilization of fixed frequency spectrum In theory, CDMA, TDMA and FDMA have exactly the same spectral efficiency but practically, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA. TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard-time, which reduces the probability that users will interfere, but decreases the spectral efficiency. Similarly, FDMA systems must use a guard-band between adjacent channels, due to the unpredictable doppler shift of the signal spectrum because of user mobility. The guard-bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum. #### Flexible allocation of resources Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. 
allocation of a PN codes to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots and frequency slots respectively are fixed hence the capacity in terms of number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability, since the SIR (Signal to Interference Ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time. In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal code, time slot or frequency channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say, and go off the air when they don't, keeping the same PN signature sequence as long as they are connected to the system. ### Spread-spectrum characteristics of CDMA Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.[9] CDMA can also effectively reject narrow band interference. 
Since narrow band interference affects only a small portion of the spread spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread spectrum signal occupies a large bandwidth only a small portion of this will undergo fading due to multipath at any given time. Like the narrow band interference this will result in only a small loss of data and can be overcome. Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored. Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.[10] Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.[11] Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand offs. Soft hand offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand off is complete. This is different from hard hand offs utilized in other cellular systems. In a hard hand off situation, as the mobile telephone approaches a hand off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand off, which is undetectable and provides a more reliable and higher quality signal.[11] ## Collaborative CDMA In a recent study, a novel collaborative multi-user transmission and detection scheme called Collaborative CDMA[12] has been investigated for the uplink that exploits the differences between users’ fading channel signatures to increase the user capacity well beyond the spreading length in multiple access interference (MAI) limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. 
In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low complexity maximum-likelihood detection stage to recover jointly the co-spread users’ data using minimum Euclidean distance measure and users’ channel gain coefficients. In CDM signal security is high. ## See also Wikimedia Commons has media related to: CDMA ## Further reading • Viterbi, Andrew J. (1995). CDMA: Principles of Spread Spectrum Communication (1st ed.). Prentice Hall PTR. ISBN 0-201-63374-4. • "CDMA Spectrum". Retrieved 2008-04-29. ## References 1. Ageev, D. V. (1935). "Bases of the Theory of Linear Selection. Code Demultiplexing". Proceedings of the Leningrad Experimental Institute of Communication: 3–35. 2. 8, 1957, p. 49 3. Yuniy technik 7, 1957, p. 43-44 4. Nauka i Zhizn 10, 1958, p. 66 5. 2, 1959, p. 18-19 6. Ipatov, Valeri (2000). Spread Spectrum and CDMA. John Wiley & Sons, Ltd. 7. Dubendorf, Vern A. (2003). Wireless Data Technologies. John Wiley & Sons, Ltd. 8. "CDMA Spectrum". Retrieved 2008-04-29. 9. Skylar, Bernard (2001). Digital Communications: Fundamentals and Applications (Second ed.). Prentice-Hall PTR. 10. Rapporteur, Theodore S. (2002). Wireless Communications, Principles and Practice. Prentice-Hall, Inc. 11. ^ a b Harte, Levine, Kikta, Lawrence, Richard, Romans (2002). 3G Wireless Demystified. McGowan-Hill. 12. Shakya, Indu L. (2011). "High User Capacity Collaborative CDMA". IET Communications.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9092373251914978, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/11631?sort=oldest
## Complete graph invariants? Obviously, graph invariants are wonderful things, but the usual ones (the Tutte polynomial, the spectrum, whatever) can't always distinguish between nonisomorphic graphs. Actually, I think that even a combination of the two I listed will fail to distinguish between two random trees of the same size with high probability. Is there a known set of graph invariants that does always distinguish between non-isomorphic graphs? To rule out trivial examples, I'll require that the problem of comparing two such invariants is in P (or at the very least, not obviously equivalent to graph isomorphism) -- so, for instance, "the adjacency matrix" is not a good answer. (Computing the invariants is allowed to be hard, though.) If this is (as I sort of suspect) in fact open, does anyone have any insight on why it should be hard? Such a set of invariants wouldn't require or violate any widely-believed complexity-theoretic conjectures, and actually there are complexity-theoretic reasons to think that something like it exists (specifically, under derandomization, graph isomorphism is in co-NP). It seems like it shouldn't be all that hard... Edit: Thorny's comment raises a good point. Yes, there is trivially a complete graph invariant, which is defined by associating a unique integer (or polynomial, or labeled graph...) to every isomorphism class of graphs. Since there are a countable number of finite graphs, we can do this, and we have our invariant. This is logically correct but not very satisfying; it works for distinguishing between finite groups, say, or between finite hypergraphs or whatever. So it doesn't actually tell us anything at all about graph theory. I'm not sure if I can rigorously define the notion of a "satisfying graph invariant," but here's a start: it has to be natural, in the sense that the computation/definition doesn't rely on arbitrarily choosing an element of a finite set. This disqualifies Thorny's solution, and I think it disqualifies Mariano's, although I could be wrong. - 6 Enumerate the finite graphs, assign to each graph its index in the sequence. Comparing the invariant is easy, calculating it, not so much. I assume you wanted some kind of restriction on the invariants, so this is excluded? – Thorny Jan 13 2010 at 8:53 That's funny: in my most recent post (mathoverflow.net/questions/11647/…) I try to make sense of this notion of naturality. And more than that: I definitely had your question at the top of my pipeline, but now I will postpone it and watch the discussion here. – Hans Stricker Jan 13 2010 at 11:43 2 This seems to be a very difficult problem. You can think about the case of bounded degree graphs where it is known that graph isomorphism is in P. Still no set of invariants is known. (Do you have any suggestions for trees? for planar graphs?) For general graphs although there are good reasons to believe that graph isomorphism is in co-NP there are also reasons to believe that showing it will be very hard. "Under derandomization" is not something to take lightly. (Here are some examples: gilkalai.wordpress.com/2009/12/06/… ) – Gil Kalai Jan 13 2010 at 13:23 Gil: one suggestion for trees is the chromatic symmetric polynomial (garden.irmacs.sfu.ca/?q=op/…). – Qiaochu Yuan Jan 13 2010 at 13:26 3 I do not see how to compare chromatic symmetric polynomials in P.
In some sense comparing them (I am not even talking about calculating them) is more complicated than comparing the trees. You can regard the deck of isomorphism types of edge-deleted subgraphs (or vertex deleted subgraphs) as a kind of graph invariant of the type you want. – Gil Kalai Jan 13 2010 at 20:12 show 2 more comments ## 5 Answers A complete graph invariant is computationally equivalent to a canonical labeling of a graph. A canonical labeling is by definition an enumeration of the vertices of every finite graph, with the property that if two graphs are isomorphic as unlabeled graphs, then they are still isomorphic as labeled graphs. If you have a black box that gives you a canonical labeling, then obviously that is a complete graph invariant. On the other hand, if you have a complete graph invariant for unlabeled graphs, then you also have one for partially labeled graphs. So given a black box that computes a complete graph invariant, you can assign the label 1 to the vertex that minimizes the invariant, then assign a label 2 to a second vertex than again minimizes the invariant, and so on. There are algorithms to decide graph isomorphism for certain types of graphs, or for all graphs but with varying performance, and there are algorithms for canonical labeling, again with varying performance. It is understood that graph isomorphism reduces to canonical labeling, but not necessarily vice versa. The distinction between the two problems is discussed in this classic paper by Babai and Luks. One natural canonical labeling of a graph is the one that is lexicographically first. I think I saw, although I don't remember where, a result that computing this canonical labeling for one of the reasonable lex orderings on labeled graphs is NP-hard. But there could well be a canonical labeling computable in P that doesn't look anything like first lex. As Douglas says, nauty is a graph computation package that includes a canonical labeling function. It is often very fast, but not always. Nauty uses a fancy contagious coloring algorithm. For a long time people thought that contagious coloring algorithms might in principle settle the canonical labeling and graph isomorphism problems, but eventually counterexamples were found in another classic paper by Cai, Furer, and Immerman. It was not clear at first whether this negative result would apply to nauty, but it seems that it does. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. One can find generators for the ring of invariants of $\mathbb F_2[x_{ij}:1\leq i < j \leq n]$ under the action of $S_n$ on the indices, which are finitely many by Noether's finite generation theorem. I think this gives you a complete set of invariants. Later: As Steve observes, one would like the number of invariants not to grow too fast. In characteristic zero (which we may use just as well), Noether's bound tells us that the ring of invariants is generated by at most $\binom{n^2+n!}{n!}$ elements, but this is quite huge (for $n=6$ the bound is 48813025503084826957958990535221725233495346780817632847728425, which is discouraging...) I do not think anyone knows how many elements one really needs, though, to generate in this particular case---usually Noether's bound is pretty bad. - Hmm. I don't know much invariant theory, but I think I get why this "should" work. 
That said, I'm not actually convinced this is in the spirit of the question either -- partly because I was hoping for a number of invariants independent of the order of the graph, but mostly because "turn it into algebra" doesn't really tell you much by itself (although this does look more approachable than something like "pick a canonical labeling by AoC.") Upvoted anyway, though, since it's technically right and something I hadn't thought about. – Harrison Brown Jan 13 2010 at 7:19 This attaches to each graph a vector of zeroes and ones whose length depends on $n$ (but for which there are bounds), the spectrum attaches to each graph a vector of real numbers whose length also depends on $n$. – Mariano Suárez-Alvarez Jan 13 2010 at 7:27 By the way, this works because invariant functions separate orbits. – Mariano Suárez-Alvarez Jan 13 2010 at 7:35 Good point about the spectrum. This still feels a bit like sidestepping the question, but I'll sleep on it and maybe try to work out a small case by hand tomorrow, and see if my opinion changes. – Harrison Brown Jan 13 2010 at 7:41 1 It is very hard to explicitely compute those invariants. For $n<=4$ you can do it by hand; for $n=5$ it was intractable last time I checked. See, for example, portal.acm.org/citation.cfm?id=377612 – Mariano Suárez-Alvarez Jan 13 2010 at 7:54 show 5 more comments nauty provides a canonical labelling of a graph. Here's a link: http://cs.anu.edu.au/~bdm/nauty/ - In some ways, provably, no (assuming the graphs are infinite). See MR1011177 (91f:03062) Friedman, H; Stanley, L; "A Borel reducibility theory for classes of countable structures." J. Symbolic Logic 54 (1989), no. 3, 894–914. This paper shows (although the argument is terse, and at least some is older folklore) that any Borel (in an appropriate sense) function f mapping graphs to any thing else with an equivalence relation E in such a way that G is isomorphic to H iff f(g) E f(H) must be at least as complicated as the graphs themselves. For a similar result on finite graphs, see MR2135387 (2006e:03049) Calvert, Cummins, Knight, and Miller, Comparing classes of finite structures. (Russian) Algebra Logika 43 (2004), no. 6, 666--701, 759; translation in Algebra Logic 43 (2004), no. 6, 374–392. - The sequence of homomorphism numbers $|Hom(F_i,G)|$ for all (isomorphism types of) graphs $F_i$ is an invariant of $G$ (see Lovász, Operations with structures). (Does this fit your bill? Or do you want finite invariants only?) -
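As a purely brute-force illustration of the homomorphism-count invariant mentioned in the last answer (feasible only for tiny graphs; the graph encoding and helper names are my own, not from the answer):

```python
from itertools import product

def hom_count(F_edges, F_vertices, G_edges, G_vertices):
    """Count graph homomorphisms F -> G by exhaustive search over all vertex maps."""
    G_adj = set()
    for u, v in G_edges:
        G_adj.add((u, v))
        G_adj.add((v, u))
    count = 0
    for phi in product(G_vertices, repeat=len(F_vertices)):
        assign = dict(zip(F_vertices, phi))
        # A map is a homomorphism iff every edge of F lands on an edge of G.
        if all((assign[u], assign[v]) in G_adj for u, v in F_edges):
            count += 1
    return count

# |Hom(K_2, G)| is twice the number of edges of G; here G is a path on 3 vertices.
path3_edges, path3_vertices = [(0, 1), (1, 2)], [0, 1, 2]
k2_edges, k2_vertices = [("a", "b")], ["a", "b"]
print(hom_count(k2_edges, k2_vertices, path3_edges, path3_vertices))   # 4
```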
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429327249526978, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18847/homotopy-first-courses-in-algebraic-topology/18880
## “Homotopy-first” courses in algebraic topology ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) A first course in algebraic topology, at least the ones I'm familiar with, generally gets students to a point where they can calculate homology right away. Building the theory behind it is generally then left for the bulk of the course, in terms of defining singular homology, proof of the harder Eilenberg-Steenrod axioms, cellular chains, and everything else necessary to show that the result is essentially independent of the definitions. A second course then usually takes up the subject of homotopy theory itself, which is harder to learn and often harder to motivate. This has some disadvantages, e.g. it leaves a discussion of Eilenberg-Maclane spaces and the corresponding study of cohomology operations far in the distance. However, it gets useful machinery directly to people who are consumers of the theory rather than looking to research it long-term. Many of the more recent references (e.g. tom Dieck's new text) seem to take the point of view that from a strictly logical standpoint a solid foundation in homotopy theory comes first. I've never seen a course taught this way and I'm not really sure if I know anyone who has, but I've often wondered. So the question is: Has anyone taught, or been taught, a graduate course in algebraic topology that studied homotopy theory first? What parts of it have been successful or unsuccessful? - 4 In Germany it seems to be common to teach the first homotopy ($\pi_1$) before homology. I don't know whether this is good or bad; I am missing any particularly fascinating applications of $\pi_1$. But I guess you don't want $\pi_1$ only. As for $\pi_n$, the way I have been taught it, it is very hard to calculate, and the few things that can be said about it require experience with CW complexes (CW approximation, already necessary to show that small homotopy groups of large spheres are zero) and/or homology (to use the Hurewicz theorem), so it doesn't look like a natural candidate for ... – darij grinberg Mar 20 2010 at 15:51 5 I think fundamental group before homology is pretty much standard everywhere. But, as Jose states below, fundamental group just by itself is a far cry from the rest of what is called homotopy theory. – Kevin Lin Mar 20 2010 at 17:22 1 I have also heard of the homotopy only approach. I once heard that everything (including homology) is just homotopy, but I don't even pretend to understand what this means. – Tony Huynh Mar 20 2010 at 17:37 2 My first pass through algebraic topology was homotopy-first. They were lecture notes, not from a book. The closest book in the literature would be Peter May's, I suppose. – Ryan Budney Mar 20 2010 at 19:20 4 My first course in algebraic topology was about both homotopy and homology at the same time! It was taught by two lecturers, one of them focussing on homology, and the other of them (tom Dieck) introducing homotopy. I liked it - it was like two simultaneous storylines meeting eventually. – Rasmus Mar 20 2010 at 21:12 show 5 more comments ## 12 Answers I was a heavily involved TA for such a graduate course in 2006 at UC Berkeley. We started with a little bit of point-set topology introducing the category of compactly generated spaces. Then we moved into homotopy theory proper. We covered CW-complexes and all the fundamental groups, Van-Kampen's Theorem, etc. 
From this you can prove some nice classical theorems, like the Fundamental Theorem of Algebra, the Brauwer Fixed Point Theorem, the Borsuk-Ulam Theorem, and that $R^n \neq R^m$ for $n \neq m$. I felt like this part of the course went fairly well and is sufficiently geometric to be suitable for a first level graduate course (you can draw lots of pictures!). At this point you can take the course in a couple different directions which all seem to have their own disadvantages and problems. The main problem is lack of time. A very natural direction is to discuss obstruction theory, since it is based off of the same ideas and constructions covered so far. However this is not really possible since the students haven't seen homology or cohomology at this point! Instead, for a bit we discussed the long exact sequences you get from fibrations and cofibrations. You could then try to lead into the definition of cohomology as homotopy classes of maps into a $K(A,n)$. But this definition is fairly abstract and doesn't show one of the main feature of homology/cohomology: It is extremely computable. Still, I could imagine a course trying to develop homology and cohomology from this point of view and leading into CW homology and the Eilenberg-Steenrod axioms. Another direction you can go is into the theory of fiber bundles (this is what we tried). The part on covering space theory works fairly well and you have all the tools at your disposal. However when you want to do general fiber bundle theory it can be difficult. A natural goal is the construction of classifying spaces and Brown's representability theorem. The problem is that the homotopy invariance of fiber bundles is non-trivial to prove. You should expect to have to spend fair amount of time on this. It is really more suited for a second course on algebraic topology. The main problem with all of these approaches is that it is difficult to cover the homotopy theory section and still have enough time to cover homology/cohomology properly. You know this has to be the case since it is hard to do the reverse: cover homology and cohomology, and still have enough time to cover homotopy theory properly. What this means is that you'll be in the slightly distasteful situation of having bunch of students who have taken a first course on algebraic topology, but don't really know about homology or cohomology. This is fine if you know that these students will be taking a second semester of algebraic topology. Then any gaps can be fixed. However, in my experience this is not a realistic expectation. As you well know, you will typically have some students who end up not being interested in algebraic topology and go into analysis or algebraic geometry or some such. Or you might have some students who are second or third year students in other math fields and are taking your course to learn more about homology and cohomology. They would be done a particular disservice by a course focusing on "homotopy first". - 4 Great answer, I feel the same way. I am guessing that people in Russia and Germany have the luxury to follow homotopy-first approach exactly because they are teaching a 2-course-sequence, and maybe the sequence is even mandatory there. – Igor Belegradek Mar 21 2010 at 22:51 4 I have to admit that as a topologist a "homotopy-first" approach is very appealing/tempting. Somehow we understand that the "real meat" of algebraic topology is homotopy theory. If I had students locked into a full year's worth of courses I would absolutely teach homotopy first. 
I think there are a lot of cool results and ideas that can be expressed from that perspective. I also think that it likely leads to a better understanding of homology/cohomology and how it is just a partial reflection of a deeper and larger world. However I think practical matters usually prohibit this approach. – Chris Schommer-Pries Mar 22 2010 at 2:28 2 Why don't you try to make it a full-year course? – Harry Gindi Mar 22 2010 at 7:48 Just to answer Igor: No, in Germany, topology is not mandatory, except of a basic (and partly rather stupid) course in set-theoretical topology. – darij grinberg Mar 22 2010 at 13:22 1 I was in said course; it was indeed a nice course, but it was indeed slightly distasteful to come out of a first semester of algebraic topology without a strong grasp of singular cohomology. Yes, there was also a second semester course, but it was taught by a different professor; it would have been better if it had been taught by the same professor. – Kevin Lin Mar 29 2010 at 2:19 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I don't think that "homotopy-first" is a special feature of the recent references. The following classical textbooks begin by introducing the general notions from the homotopy theory: • Algebraic Topology by E. H. Spanier. • Algebraic topology - homotopy and homology by R. M. Switzer. In my opinion, these books provide a basis for a good graduate course. A Course of Homotopic Topology by D. B. Fuchs and A. T. Fomenko, which is another great textbook, also begins with the homotopy theory. - 1 This is a great answer that I can fully get behind (it's also something that I could never get away with, but that's a different story). – Harry Gindi Mar 20 2010 at 19:47 11 I would say that Spanier is only partially a homotopy-first textbook. Things like the homotopy extension and lifting properties, fiber bundles, and fibrations are introduced before homology, but higher homotopy groups don't appear until much later. A big problem with the book is of course that it's now quite outdated. For example, CW complexes don't appear until page 400. The book of tom Dieck is much more a homotopy-first textbook, and is certainly much more up to date, having the benefit of 40 years of hindsight. One could say it's Spanier done right. – Allen Hatcher Mar 21 2010 at 14:36 Mind if I quote you on that last part in my review next week of tom Dieck for the MAA online,Allen?Had to ask,,,,,,,,, Personally,I consider Joseph Rotman's book to be Spainer done right. Also very beautiful,if you've ever seen them,is Spainer's original lecture notes from Berkeley in the early 1960's. They are much more limited in scope and therefore focused. – Andrew L Mar 22 2010 at 5:12 I think Spanier's problem with his book is that he didn't heed Otto von Fredrich's warning to his students which Peter Lax so often quotes:"It's easy to write a book on something if you make the mistake of trying to put everything you know about it into it." 
– Andrew L Mar 22 2010 at 5:12 For better or worse,the geometric/formalistic camps of teaching this subject have hardened thier stances and widened the divide.For the former,your book (and the forthcoming completed versions of the sequels,which hopefully will see the light of day one of these years) will probably be the bibles of the former camp and tom Dieck's and May's books will be those for the latter.tom Dieck's book,in fact,has already been adopted by Clark Barwick at Harvard for his course. – Andrew L Mar 22 2010 at 5:17 show 2 more comments There is the Aguilar-Gitler-Prieto book on algebraic topology: Algebraic Topology from a Homotopical Viewpoint. As I recall from browsing it, the book is meant to be a graduate course in algebraic topology, and it introduces both homology and cohomology eventually. - 1 This book introduces homology using the Dold-Thom theorem, which makes it sound like exactly the sort of thing Tyler is thinking about: you build many tools of homology without ever mentioning simplices (singular or otherwise). However, it requires the machinery of quasifibrations. It would be interesting to hear from someone who's used this approach in a first course. – Dan Ramras Mar 22 2010 at 5:31 Novikov (apparently) taught this way: see the 3-volume set Modern Geometry (link to vol.1) with Dubrovin and Fomenko. Volume 2 covers homotopy (among other things) and volume 3 covers homology. - If the "first course" is meant to be taken by all students in pure math, then homotopy theory does not belong there I think; I do not see how learning about homotopy groups of spheres, Eilenberg-MacLane spaces, or obstruction theory could benefit those not interested in topology per se. If on the other hand, the audience consists of students in geometry/topology, then substantial homotopy theory may (and should) be taught. My personal favorites are texts by Fuchs-Fomenko, and May. - 3 While I largely agree with you, I think a small amount of homotopy theory belongs in the education of all pure mathematics students. For example, I think it's good for every mathematician to know that $\pi_3(S^2)$ is nontrivial. – Timothy Chow Sep 7 2010 at 20:55 I was taught algebraic topology from Brayton Gray's "Homotopy Theory" (Academic Press) and the approach was wholly homotopical: Homology and Cohomology are defined using spectra and the constructions are natural and clear. The transition to advanced topics is easy and natural (generalized cohomology theories, for example, including algebraic K-theory). - As a graduate student I was taught homotopy first (including higher homotopy groups), then singular homology, and then cohomology. The instructor was quite good, but now I feel that the order of presentation was backwards. I think starting with homotopy is fine as long as you stay in low dimensions, but degenerates into algebraic nonsense otherwise. I highly recommend Stillwell's book Classical Topology and Combinatorial Group Theory where he takes this approach. Edit: I am not a topologist. I am probably further from being a topologist than people who have left similar disclaimers. - 1 Ronald Brown's TOPOLOGY AND GROUPOIDS is also excellent for this approach,Tony.It has the added benefit of making point-set topology geometric rather then analytic,as it is usually presented. – Andrew L Mar 20 2010 at 21:24 Thanks Andrew. I will check it out. – Tony Huynh Mar 23 2010 at 19:39 Disclaimer: I am not a topologist. 
I was taught basic homotopy theory (fundamental group, van Kampen, but not sure about higher homotopy groups, that might have been elsewhere) at the end of a point-set topology graduate course based on Munkres's Topology: a first course. As Mikael comments above, $\pi_1$ being so geometric means it can be taught without the need of the standard algebraic topology machinery. Of course, $\pi_1$ is a far cry from homotopy theory, which requires a lot more technology. - @Tyler Lawson: I just saw this question. Our book published in 2011 and advertised on http://pages.bangor.ac.uk/~mas010/nonab-a-t.html does exactly that. No (or little) singular homology, no simplicial approximation. It gives many calculations of nonabelian second relative homotopy groups not available by traditional methods. It also gets to the Relative Hurewicz Theorem and the calculation of certain homotopy classes of maps, including the non simply connected case. It is in a sense a rewrite of algebraic topology on the border between homotopy and homology, using functors defined in terms of homotopy classes of maps, and establishing their main properties directly. Of course there is a lot of homotopy and homology theory it does not do, for example Poincare duality: I've put that as one of a number problems to solve in the style/techniques of the book! - This isn't quite what you mean, but I took Igor Frenkel's algebraic topology course as an undergrad. He taught out of Massey's book, A Basic Course in Algebraic Topology. It starts with the classification of 2-manifolds, does the fundamental group and the Seifert-von Kampen theorem, and then does singular homology and cohomology. De Rham cohomology is only there as an appendix. I think the fundamental group is a little bit easier to grasp early on in a first course than singular homology. For cohomology first, you could do something like Bott & Tu, I suppose, but I think this way is a bit more useful because de Rham cohomology is a little too nice for its own good. - As an undergraduate, I took a semester of point-set topology that used Munkres's book Topology, and we studied the fundamental group towards the end of the course. Following that, I took a semester of algebraic topology that used Greenberg and Harper's book Algebraic Topology: A First Course. Greenberg and Harper start off with homotopy theory and introduce higher homotopy groups. However, they don't go very far with homotopy theory before turning their attention to singular homology. Although there are various things I don't like about Greenberg and Harper's book (for example, I didn't learn about simplicial homology until much later, and I think I would have understood singular homology better if I had first learned simplicial homology), I think that the approach of giving a brief introduction to homotopy groups before proceeding to homology theory works pretty well. It's good to emerge from a one-semester course at least knowing what higher homotopy groups are. - J. P. May's superb book, "A Concise Course in Algebraic Topology," starts with a great deal on homotopy theory, and doesn't really get to homology until nearly half way through. I learned a great deal from this approach, and think that it is the best way to teach algebraic topology. But May's book is probably too difficult for a "first course" in algebraic topology. - It sadly also omits too many important topics,Daniel-like the classification of compact surfaces.But it is definitely a must read for anyone interested in the subject. 
– Andrew L Sep 7 2010 at 17:33 The classification of compact surfaces isn't really part of algebraic topology (and I say that as someone who in some sense specializes in 2-dimensional topology). While May's book has its faults and is probably too brisk for a first pass through the subject (though I took such a class from Peter back when I was a grad student and seemingly turned out ok), I think it covers everything that belongs in a first year grad course in algebraic topology. – Andy Putman Sep 7 2010 at 22:23 I don't think May wrote the book for beginners. Though I should admit the book is beautifully written. One should know enough examples before reading it. However this book is the most up-to-date algebraic topology textbook and the best book prepared for a future researcher. – Yan Zou May 8 2012 at 17:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9618104100227356, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/71118/subspaces-of-duals
## Subspaces of duals ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is an easy undergraduate exercise to show that (finite) direct sums are preserved under dualisation. Thus, it is natural to ask if we the following holds: is it true that if $X$ is a subspace of $Y$, then $X^*$ is a subspace of $Y^*$? In many cases this is certainly not true (one can construct relevant subspaces of $Y=\ell^\infty$) but... let me say that $Y$ has property (D) if the above hypothesis holds for $Y$. Is it true that the only spaces with property (D) are Hilbert spaces? - ## 2 Answers $Y=L_1[0,1]$ has the property (D) since it is separable and the dual of any separable space embeds into $Y^\ast = L_\infty[0,1]$. Of course, any separable space with a complemented subspace whose dual is isomorphic to $L_\infty[0,1]$ will have the property too. If I think of other examples that are fundamentally different I'll add them later. - ok, it was easy, thanks! – Tomek Kania Jul 24 2011 at 13:39 1 I probably should have just answered in a comment, but in any case my initial attempts to find a counterexample (before I thought of $L_1$) involved trying to construct a space $Y$ with the formally stronger property $(E)$ (say) that if $X$ is a subspace of $Y$, then $X$ is isomorphic to $X^\ast$... this seems to be not so easy if one excludes Hilbert spaces I wonder if having the property (E) is equivalent to being isomorphic to Hilbert space? Maybe the answer to this also follows easily from known results, but off the top of my head I can't think of a good argument or counterexample. – Philip Brooker Jul 24 2011 at 14:04 Nice question, Phil. It is related to an old unsolved problem of Pelczynski's: If $X$ is separable and every subspace of $X$ that has a basis is isomorphic to $\ell_2$, must $X$ be isomorphic to $\ell_2$? – Bill Johnson Jul 24 2011 at 15:34 Phil, what about a local version of your problem? Suppose you have a family of finite dimensional spaces that is closed under taking of subspaces. Assume that every space in the collection is $C$-isomorphic to its dual. Is every space in the collection $D$-isomorphic to a Hilbert space for some $D=D(C)$? This looks to me easier than your question. – Bill Johnson Jul 24 2011 at 15:39 Pelczynski's question is quite intriguing, Bill. I guess one could write down many properties for which the only known spaces with the property are those isomorphic to $\ell_2$; e.g., I wonder if the only minimal Banach space isomorphic to its dual is $\ell_2$? In a sort of opposite direction, I think it is (still) an open question whether an indecomposable space can be isomorphic to its dual. The local question you pose above seems like it is probably true... Is it known to be true for $C=1$? – Philip Brooker Jul 26 2011 at 14:24 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. There are separable reflexive examples also, such as the $\ell_2$ sum $X$ of a sequence of finite dimensional spaces that is dense (in the sense of the Banach-Mazur distance) in the collection of all finite dimensional spaces. See my 1974 paper with Zippin in the Israel Journal of Mathematics. Another example is a separable reflexive space $X$ s.t. every subspace of every quotient of $X$ is isomorphic to a complemented subspace of $X$, yet $X$ is not isomorphic to a Hilbert space. 
This is in a recent paper with Szankowski and can be downloaded from http://www.math.tamu.edu/~bill.johnson/selpubs.html (no. 117). The example with Zippin lacks this property since it has subspaces that fail the approximation property (we proved in an earlier paper that each of its subspaces that has the approximation property is isomorphic to a complemented subspace). - Awesome examples, Bill. It's a shame that my answer was already accepted when you posted yours, because the content of mine was really only comment-worthy. – Philip Brooker Jul 26 2011 at 14:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371233582496643, "perplexity_flag": "head"}
http://mathoverflow.net/questions/1677?sort=newest
## Number of metric spaces on N points ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Given X = {x1, ..., xn}, how many collections C of subsets of X are there such that C is the listing of all open balls of some metric space? The first nontrivial example is n=3; let's call the points x, y, and z. Also, let a = d(x, y), b = d(y, z), and c = d(z, x). For any collection C to be a listing of all the open balls, it must contain all the singleton sets and the whole set X. Let C0 = {{x},{y},{z},X}. If x, y, and z are equidistant, these are the only open balls. If, say, a < b < c, then we get C = C0 \union {{x,y},{y,z}}. Through careful case enumeration, we can answer this for small n, but the process quickly becomes unwieldy. Has anyone ever looked at this before, and is there a recursive formula or a generating function for this? What about if the points are unlabeled? For n=3, I count 7 possibilities for C if the points are labeled, and 3 if the points are unlabeled. It's already somewhat time-consuming to count when n=4. - ## 5 Answers My analysis leads to different answers from the above, and to some references. Edit: I get different answers because I changed the question without realizing it. I'm working from the collection of pointed metric balls, i.e., metric balls with a distinguished center. Gabe asked the question about the collection of unpointed metric balls, noting that the same set can be a metric ball with two different centers. That seems more complicated, although I would suggest working from the pointed solution. Let's let ${1,2,\ldots,n}$ be the points in $X$. The distances from $i$ to the other points in the set $X$ induce a strict weak ordering of those points by their distance from $i$. This has the same information as the set of metric balls with center $i$. Thus the information in the metric is given by all of the comparisons between the distances $x_{(i,j)} = d(i,j)$ and $x_{(i,k)} = d(i,k)$. You can express these relations by a hyperplane arrangement in $\mathbb{R}^{n(n-1)/2}$, where the hyperplanes are given by the equations $x_{(i,j)} = x_{(i,k)}$. You might also think about the triangle inequalities satisfied by all of the distances, and the fact that the distances are all positive numbers. The set of feasible distance vectors is called the "metric cone". Although the metric has an interesting combinatorial structure, it looks like it matters for nothing in this particular question: The hyperplanes all meet at an interior point of the cone in which the distances are all equal. In other words, you can always add a constant distance $h \gg 0$ to all of the distances without changing any of the metric balls, so that the triangle inequality and positivity of distance become irrelevant. If a hyperplane arrangement has the property that all hyperplanes are given by setting two coordinates equal, then the arrangement is called "graphical". The coordinates correspond to the vertices of a graph $G$, and the hyperplanes correspond to the edges. The hyperplane arrangement gives you a partially ordered set of chambes and other faces, ordered by inclusion. This poset has a lot of properties and there are techniques to compute the number of faces from $G$. In our case, $G$ is the line graph $T_n$ of the complete graph $K_n$, which is sometimes confusingly called a triangular graph. 
For $n=3$, my answer is that there are 13 different types of metrics, not 7, corresponding to the hyperplane arrangement $x=y$, $y=z$, $x=z$ in $\mathbb{R}^3$. In other words, I count 13 types of triangles: 6 scalene, 3 short-base isosceles, 3 long-base isosceles, and 1 equilateral. Some of the types of metrics lie in chambers, meaning generic metrics in which $d(i,j) \ne d(i,k)$ for all $i$, $j$, and $k$. It is a theorem that the number of chambers of the graphical arrangement $A(G)$ of a graph $G$ is $|\chi_G(-1)|$, where $\chi_G$ is the chromatic polynomial of $G$. (See this excellent review by Richard Stanley.) So I asked Maple to compute `ChromaticPolynomial(LineGraph(CompleteGraph(n)),q)` for small values of $n$, and I got the following answers: $$\chi_{T_3}(q) = q(q-1)(q-2)$$ $$\chi_{T_4}(q) = q(q-1)(q-2)(q^3-9q^2+29q-32)$$ $$\chi_{T_5}(q) = q(q-1)(q-2)(q-3)(q-4)(q^5-20q^4+170q^3-765q^2+1804q-1764).$$ Evaluating at $q=-1$, I get that there are 1, 6, 426, and 542880 generic types of metrics on 2, 3, 4, and 5 points. This sequence is not in the Encyclopedia of Integer Sequences, although possibly it should be. I think that you can obtain the total number of faces of a graphical arrangement $A(G)$ from the Tutte polynomial of $G$, and therefore the total number of types of metrics, but I did not do the calculation. - Interesting analysis! However, I still think there are 7 types of metrics according to my properties above. While there are 6 different scalene triangles, these give only 3 different collections of open balls. If d(x,y) = 3, d(y,z) = 4, and d(x,z) = 5, then the nontrivial open balls are {x,y} and {y,z}. The same is true if we interchange the labels x and z. My view is to treat these as the same. Nevertheless, this viewpoint might help count (or at least get an upper bound on) the number I want. – Gabe Cunningham Dec 22 2009 at 15:30 I see. I misinterpreted the question, although my interpretation can also be pursued. – Greg Kuperberg Dec 22 2009 at 17:41 I published a paper a few years ago that studies the question for n=3 and n=4 points, and maybe there are connections with what you are discussing here. Vania Mascioni: On the probability that finite spaces with random distances are metric spaces, Discrete Mathematics 300 (2005) 129-138. My approach was to first count all the metric spaces arising from distances chosen from the pool of integer values in {1,2,...,n}, and then take the limit to infinity to obtain the ratio of metric (real) distances versus all possible distance choices. So, for n=3 points, if the distances are real and set at random, the probability that the triangle is metric will be 1/2 (this can also be handled as an easy calculus exercise, if you like integrals). This is a very easy result. For n=4 points the counting task was much more involved, but the real limit is easily stated: with four points and with the six (real) distances among them chosen at random, the probability that they form a metric space is exactly 17/120. I lost patience and gave up when it came to five points, though the paper above contains an estimate (weak) of what to expect when the number of points is allowed to get large. [Edit: I had mistyped, sorry; the probability I had typed in, 103/120, was for a 4-point set with random distances not to be metric.
The metric probability decreases sharply with the size of the space, roughly with the order of c^{M^2}, where c is a constant (between .7 and .9?) and M is the number of points in the space. Also, as some comments below state, indeed the problem has similar nature and motivation to the one of counting finite topologies (see the work by Kleitman and Rothschild to get started), but the nuts and bolts are necessarily different.] - Well, here's a start. Suppose we have n points, and let k = n(n-1)/2. Thus there are k distances we have to pick. Let's take all of our distances to lie in the set {k+1, k+2, ..., 2k}, so that we don't have to worry about the triangle inequality. Now, the collection of open balls depends on how many times each distance is repeated. However, translating the set by an integer doesn't affect the collection of open balls. That is, if D is the multiset of distances, and C_D is the collection of open balls induced by D, then C_{D+1} = C_D, where by D+1 I mean add 1 to each element of D. This is true because of what javier says: to find the collection of open balls at a point, we just start at that point and increase the radius of the ball by 1 at each step, writing down each open ball we get and stopping when we get the whole set. The upshot of this is that if k+1 is not the smallest element of D, we can translate D such that k+1 is the smallest element, without affecting the open ball structure. In fact, I'm pretty sure a stronger statement is true: if D has a gap, we can slide down the upper part of D to close that gap without affecting the open ball structure. That is, if r and s are elements of D such that r < s-1 and there are no elements of D strictly between r and s, then we can translate s and everything above it down by 1. If D has r distinct values, then by doing such translations, we can get D to be a multiset with values in {k+1, ..., k+r}. Thus, if we have a particular open ball structure C on n points, we can find a multiset D with the above properties such that C = C_D. So the number of such multisets provides an upper bound on the number of (unlabeled) metric spaces. I have no idea how good this bound is, but let's calculate it. Let f_r(k) = # of multisets with k elements taking values from {k+1, ..., k+r}, and taking each value at least once. So we essentially have k-r free slots in D, and r different values, so the number of such multisets is the binomial coefficient B((k-r)+(r-1), r-1) = B(k-1, r-1). Then the total number of multisets is f_1(k) + ... + f_k(k) = 2^{k-1}. Now, that looks pretty huge: 2^{(n+1)(n-2)/2}. On the other hand, there are 2^{2^n} collections of subsets of X, and even when you account for the fact that you have to include all the singletons and the whole set, 2^{2^n - (n+1)} is hardly an improvement. For n=3, our new upper bound is 4, and the true value is 3, since the multisets {3, 4, 5} and {3, 3, 4} give the same open ball structure. Can anyone expand on this? Edit: On further reflection, there are multiple different open ball structures induced by a multiset. For example, if n=4 and D = {7, 7, 7, 7, 8, 8}, then we get different structures if we assign the two distances of 8 to the same vertex or to different vertices. So it seems we should look at ordered k-tuples instead of multisets, which makes our upper bound much larger (bigger than k factorial). So maybe this is less useful than I thought.
But at least after thinking about it like this, I might have simplified the problem enough that I can write a program to calculate the next few values of the sequence. - The metric condition actually implies that every point is closed, and since any finite union of closed is closed, every subset is closed, and thus also open, so there is only one metric topology (the discrete topology), the problem is not really about topology but about picking the distances between the points, which are just n(n-1)/2 - uples of positive numbers (you can even force them to be natural numbers) satisfying the triangular inequality and then finding the sets of balls. A random idea: every point x will be contained in a smallest ball, say B, that must contain at least another point y, that ball will be contained in another one and so on until one reach the whole space, so it looks like enumerating sets of balls is equivalent to enumerating (rooted) trees with n leafs for which each vertex which is not a leaf has at least two children. - 1 Ah, interesting. So in other words, you're looking at the lattice of subsets of X and piecing together the paths you describe as a tree in the lattice. However, it is not actually necessary for each nonleaf to have at least 2 children. For example, if x and z are very far apart, and both are an equal distance to y, then when we enlarge from x we get {x}->{x,y}->{x,y,z}, and similarly when we enlarge from z. But enlarging from y, we get {y}->{x,y,z}. So the nodes {x,y} and {y,z} both only have a single child. Is there another way to characterize the trees that come up this way? – Gabe Cunningham Oct 21 2009 at 16:38 There is a theorem which might be inspiring here: It says that associating real numbers w_ij to the edges of a tree gives a metric on the nodes (with those numbers as distances) iff for each quadruple of nodes there exists a unique way of associating to the nodes the names i,j,k,l such that wij + wkl = wik + wjl ≤ wil + wjk This comes from the paper P. Buneman: A note on metric properties of trees, Journal of Combinatorial Theory, Ser. B 17 (1974) 48–50 – Peter Arndt Oct 21 2009 at 17:26 It's not clear to me that that's helpful, since with the trees you mention, the nodes are the elements of X and not the subsets. With 3 elements, there's only 1 unlabeled tree (3 labeled ones), so the tree structure can't distinguish between different collections of open balls. – Gabe Cunningham Oct 22 2009 at 14:41 This reminds me of finite topological spaces, specifically the number of topologies on a finite set. This isn't exactly the same, though, because of your metric condition. - I wouldn't expect it to be similar in any important way, since the topology being generated is always discrete. – Eric Wofsey Oct 21 2009 at 15:54 Open balls have more restricted structure than open sets in a metrizable topology, though. – Qiaochu Yuan Oct 21 2009 at 15:58
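The labeled count of 7 for n = 3 claimed in the question and in the comments above can be checked mechanically. Below is a small brute-force sketch, not from the thread itself (C++ is used to match the other code in this collection). It assumes that sampling integer distances from {4, 5, 6} is enough for three points: the open balls depend only on how the three distances compare with one another, this range realizes every possible ordering pattern, and the triangle inequality holds automatically, the same trick used in the thread.

```cpp
#include <iostream>
#include <set>

int main() {
    std::set<unsigned> collections;  // each collection of balls encoded as a bitmask over the 7 nonempty subsets of {x,y,z}
    for (int a = 4; a <= 6; ++a)     // d(x,y)
    for (int b = 4; b <= 6; ++b)     // d(y,z)
    for (int c = 4; c <= 6; ++c) {   // d(x,z)
        const int d[3][3] = { {0, a, c}, {a, 0, b}, {c, b, 0} };
        unsigned collection = 0;
        for (int p = 0; p < 3; ++p) {        // center of the ball
            for (int r = 1; r <= 7; ++r) {   // integer radii suffice when all distances lie in {4,5,6}
                unsigned ball = 0;
                for (int q = 0; q < 3; ++q)
                    if (d[p][q] < r) ball |= 1u << q;
                collection |= 1u << ball;    // 'ball' is a 3-bit subset mask, so this sets one of bits 1..7
            }
        }
        collections.insert(collection);
    }
    std::cout << collections.size() << '\n'; // prints 7, the labeled count for n = 3
    return 0;
}
```

Running it prints 7. In principle the same enumeration with the six distances drawn from {7, ..., 12} would handle n = 4, still a tiny search space.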
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406365752220154, "perplexity_flag": "head"}
http://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming
# C++ Programming/Templates/Template Meta-Programming ## Contents ### Template Meta-programming Overview Template meta-programming (TMP) refers to uses of the C++ template system to perform computation at compile-time within the code. It can, for the most part, be considered to be "programming with types" — in that, largely, the "values" that TMP works with are specific C++ types. Using types as the basic objects of calculation allows the full power of the type-inference rules to be used for general-purpose computing. #### Compile-time programming The preprocessor allows certain calculations to be carried out at compile time, meaning that by the time the code has finished compiling the decision has already been taken, and can be left out of the compiled executable. The following is a very contrived example: ```#define myvar 17 #if myvar % 2 cout << "Constant is odd" << endl; #else cout << "Constant is even" << endl; #endif ``` This kind of construction does not have much application beyond conditional inclusion of platform-specific code. In particular there's no way to iterate, so it can not be used for general computing. Compile-time programming with templates works in a similar way but is much more powerful, indeed it is actually Turing complete. Traits classes are a familiar example of a simple form of template meta-programming: given input of a type, they compute as output properties associated with that type (for example, std::iterator_traits<> takes an iterator type as input, and computes properties such as the iterator's difference_type, value_type and so on). #### The nature of template meta-programming Template meta-programming is much closer to functional programming than ordinary idiomatic C++ is. This is because 'variables' are all immutable, and hence it is necessary to use recursion rather than iteration to process elements of a set. This adds another layer of challenge for C++ programmers learning TMP: as well as learning the mechanics of it, they must learn to think in a different way. #### Limitations of Template Meta-programming Because template meta-programming evolved from an unintended use of the template system, it is frequently cumbersome. Often it is very hard to make the intent of the code clear to a maintainer, since the natural meaning of the code being used is very different from the purpose to which it is being put. The most effective way to deal with this is through reliance on idiom; if you want to be a productive template meta-programmer you will have to learn to recognize the common idioms. It also challenges the capabilities of older compilers; generally speaking, compilers from around the year 2000 and later are able to deal with much practical TMP code. Even when the compiler supports it, the compile times can be extremely large and in the case of a compile failure the error messages are frequently impenetrable. Some coding standards may even forbid template meta-programming, at least outside of third-party libraries like Boost. #### History of TMP Historically TMP is something of an accident; it was discovered during the process of standardizing the C++ language that its template system happens to be Turing-complete, i.e., capable in principle of computing anything that is computable. 
The first concrete demonstration of this was a program written by Erwin Unruh which computed prime numbers although it did not actually finish compiling: the list of prime numbers was part of an error message generated by the compiler on attempting to compile the code.[1] TMP has since advanced considerably, and is now a practical tool for library builders in C++, though its complexities mean that it is not generally appropriate for the majority of applications or systems programming contexts. ```#include <iostream> template <int p, int i> class is_prime { public: enum { prim = (p == 2) || ( (p % i) && is_prime<(i>2 ? p : 0), i - 1>::prim ) }; }; template<> class is_prime<0, 0> { public: enum { prim = 1 }; }; template<> class is_prime<0, 1> { public: enum { prim = 1 }; }; template <int i> class Prime_print { // primary template for loop to print prime numbers public: Prime_print<i - 1> a; enum { prim = is_prime<i, i - 1>::prim }; void f() { a.f(); if (prim) { std::cout << "prime number:" << i << std::endl; } } }; template<> class Prime_print<1> { // full specialization to end the loop public: enum { prim = 0 }; void f() {} }; #ifndef LAST #define LAST 18 #endif int main() { Prime_print<LAST> a; a.f(); } ``` #### Building Blocks ##### Values The 'variables' in TMP are not really variables since their values can not be altered, but you can have named values that you use rather like you would variables in ordinary programming. When programming with types, named values are typedefs: ```struct ValueHolder { typedef int value; }; ``` You can think of this as 'storing' the `int` type so that it can be accessed under the `value` name. Integer values are usually stored as members in an enum: ```struct ValueHolder { enum { value = 2 }; }; ``` This again stores the value so that it can be accessed under the name `value`. Neither of these examples is any use on its own, but they form the basis of most other TMP, so they are vital patterns to be aware of. ##### Functions A function maps one or more input parameters into an output value. The TMP analogue to this is a template class: ```template<int X, int Y> struct Adder { enum { result = X + Y }; }; ``` This is a function that adds its two parameters and stores the result in the `result` enum member. You can call this at compile time with something like `Adder<1, 2>::result`, which will be expanded at compile time and act exactly like a literal `3` in your program. ##### Branching A conditional branch can be constructed by writing two alternative specialisations of a template class. The compiler will choose the one that fits the types provided, and a value defined in the instantiated class can then be accessed. For example, consider the following partial specialisation: ```template<typename X, typename Y> struct SameType { enum { result = 0 }; }; template<typename T> struct SameType<T, T> { enum { result = 1 }; }; ``` This tells us if the two types it is instantiated with are the same. This might not seem very useful, but it can see through typedefs that might otherwise obscure whether types are the same, and it can be used on template arguments in template code. You can use it like this: ```if (SameType<SomeThirdPartyType, int>::result) { // ... Use some optimised code that can assume the type is an int } else { // ... 
Use defensive code that doesn't make any assumptions about the type } ``` The above code isn't very idiomatic: since the types can be identified at compile-time, the `if()` block will always have a trivial condition (it'll always resolve to either `if (1) { ... }` or `if (0) { ... }`). However, this does illustrate the kind of thing that can be achieved. ##### Recursion Since you don't have mutable variables available when you're programming with templates, it's impossible to iterate over a sequence of values. Tasks that might be achieved with iteration in standard C++ have to be redefined in terms of recursion, i.e. a function that calls itself. This usually takes the shape of a template class whose output value recursively refers to itself, and one or more specialisations that give fixed values to prevent infinite recursion. You can think of this as a combination of the function and conditional branch ideas described above. Calculating factorials is naturally done recursively: $0! = 1$, and for $n>0$, $n! = n*(n-1)!$. In TMP, this corresponds to a class template "factorial" whose general form uses the recurrence relation, and a specialization of which terminates the recursion. First, the general (unspecialized) template says that `factorial<n>::value` is given by n*factorial<n-1>::value: ```template <unsigned n> struct factorial { enum { value = n * factorial<n-1>::value }; }; ``` Next, the specialization for zero says that `factorial<0>::value` evaluates to 1: ```template <> struct factorial<0> { enum { value = 1 }; }; ``` And now some code that "calls" the factorial template at compile-time: ``` int main() { // Because calculations are done at compile-time, they can be // used for things such as array sizes. int array[ factorial<7>::value ]; } ``` Observe that the `factorial<N>::value` member is expressed in terms of the `factorial<N>` template, but this can't continue infinitely: each time it is evaluated, it calls itself with a progressively smaller (but non-negative) number. This must eventually hit zero, at which point the specialisation kicks in and evaluation doesn't recurse any further. #### Example: Compile-time "If" The following code defines a meta-function called "if_"; this is a class template that can be used to choose between two types based on a compile-time constant, as demonstrated in main below: 1. ```template <bool Condition, typename TrueResult, typename FalseResult> ``` 2. ```class if_; ``` 3. ``` ``` 4. ```template <typename TrueResult, typename FalseResult> ``` 5. ```struct if_<true, TrueResult, FalseResult> ``` 6. ```{ ``` 7. ``` typedef TrueResult result; ``` 8. ```}; ``` 9. ``` ``` 10. ```template <typename TrueResult, typename FalseResult> ``` 11. ```struct if_<false, TrueResult, FalseResult> ``` 12. ```{ ``` 13. ``` typedef FalseResult result; ``` 14. ```}; ``` 15. ``` ``` 16. ```int main() ``` 17. ```{ ``` 18. ``` typename if_<true, int, void*>::result number(3); ``` 19. ``` typename if_<false, int, void*>::result pointer(&number); ``` 20. ``` ``` 21. ``` typedef typename if_<(sizeof(void *) > sizeof(uint32_t)), uint64_t, uint32_t>::result ``` 22. ``` integral_ptr_t; ``` 23. ``` ``` 24. ``` integral_ptr_t converted_pointer = reinterpret_cast<integral_ptr_t>(pointer); ``` 25. ```} ``` On line 18, we evaluate the `if_` template with a true value, so the type used is the first of the provided values. Thus the entire expression `if_<true, int, void*>::result` evaluates to `int`. Similarly, on line 19 the template code evaluates to `void *`. 
These expressions act exactly the same as if the types had been written as literal values in the source code. Line 21 is where it starts to get clever: we define a type that depends on the value of a platform-dependent sizeof expression. On platforms where pointers are either 32 or 64 bits, this will choose the correct type at compile time without any modification, and without preprocessor macros. Once the type has been chosen, it can then be used like any other type. Note: This code is just an illustration of the power of template meta-programming; it is not meant to illustrate good cross-platform practice with pointers. For comparison, this problem is best attacked in C90 as follows 1. ```# include <stddef.h> ``` 2. ```typedef size_t integral_ptr_t; ``` 3. ```typedef int the_correct_size_was_chosen [sizeof (integral_ptr_t) >= sizeof (void *)? 1: -1]; ``` As it happens, the library-defined type `size_t` should be the correct choice for this particular problem on any platform. To ensure this, line 3 is used as a compile time check to see if the selected type is actually large enough; if not, the array type `the_correct_size_was_chosen` will be defined with a negative length, causing a compile-time error. In C99, `<stdint.h>` may define the types `intptr_t` and `uintptr_t`. #### Conventions for "Structured" TMP To do: Describe some conventions for "structured" TMP.
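In the meantime, as a capstone for the building blocks described earlier (values held in enums, functions as class templates, branching via specialization, and recursion with a terminating specialization), here is one more small example that is not part of the original article: a compile-time greatest common divisor. The `static_assert` check assumes a C++11 compiler; on older compilers the negative-array-size trick from the C90 comparison above could be used instead.

```cpp
// Compile-time GCD via the recursion idiom: the primary template recurses,
// and a partial specialization on a zero second argument terminates it.
template <unsigned A, unsigned B>
struct gcd
{
    enum { value = gcd<B, A % B>::value };
};

template <unsigned A>
struct gcd<A, 0>
{
    enum { value = A };
};

// gcd<12, 18> expands entirely at compile time: (12,18) -> (18,12) -> (12,6) -> (6,0).
static_assert(gcd<12, 18>::value == 6, "evaluated by the compiler, not at run time");

int main() {}
```

Like `factorial`, the result can be used anywhere a constant expression is expected, for example as an array bound.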
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9036073684692383, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/268840/does-the-following-condition-characterise-convexity-of-a-set
Does the following condition characterise convexity of a set? Conjecture: A set $X \subseteq \mathbb{R}^n$ is convex if and only if the following holds. For any $x \in X$ and any vector $v \in \mathbb{R}^n$ such that $x+v \notin X$, it holds that for any scalar $a$ with $1 \leq a$ we have that $x+av \notin X$. - 5 Try to restate your condition in contrapositive way, and see what it says. – user53153 Jan 1 at 22:26 Well the contrapositive basically says $\forall x,v,a \, (x \in X, v \in \mathbb{R}^n, 1 \leq a, x+av \in X \rightarrow x+v \in X)$. I don't know how to manipulate it any further.... – user18921 Jan 1 at 22:49 Introduce a letter $y=x+av$ and write the vector $x+v$ in terms of $x$ and $y$. – user53153 Jan 1 at 22:50 Draw a picture. – copper.hat Jan 1 at 23:11 @copper.hat: can I borrow your $n$-dimensional graph paper? – robjohn♦ Jan 1 at 23:40 1 Answer Suppose $X$ is convex. If $x\in X$ and $x+v\notin X$, then $x+av\notin X$ for all $a\geq 1$: otherwise $x+av\in X$ for some $a\geq 1$, and since $x+v$ lies on the segment between $x$ and $x+av$, convexity would force $x+v\in X$. Conversely, suppose the condition holds and take two points $x$ and $y$ in $X$. If $y+t_{0}(x-y)\notin X$ for some $t_{0}\in(0,1)$, then the condition (applied with base point $y$, vector $v=t_{0}(x-y)$ and scalar $a=t/t_{0}\geq 1$) gives $y+t(x-y)\notin X$ for all $t\geq t_{0}$; in particular, taking $t=1$ we must have $x\notin X$. Contradiction! Hence the segment from $y$ to $x$ lies in $X$, and $X$ is convex. -
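To spell out the step hinted at in the comments (this is not part of the original answer): for $a \geq 1$ write $y = x + av$, so that

$$x + v = x + \tfrac{1}{a}(y - x) = \left(1 - \tfrac{1}{a}\right)x + \tfrac{1}{a}\,y,$$

which is a convex combination of $x$ and $y$ because $\tfrac{1}{a} \in (0,1]$. Hence if $X$ is convex and both $x$ and $x+av$ lie in $X$, then $x+v \in X$ as well; this is exactly the contrapositive form of the condition written out in the second comment.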
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914584219455719, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=656131
## Statically indeterminate beam (fixed ends) and the moments at the supports I'm working through my professor's solution for this problem, and I don't understand how he comes up with the reaction force at B without taking into account the moment at B. Any help would be greatly appreciated. I think I've been able to reason it out: The moment at the right support is a reaction to the bending moment at that support, and the two are equal and opposite, therefore having no effect at the left end. Compare this to the moments caused by the forces acting between the supports. These forces are supported by both ends of the beam, therefore their moments affect both ends of the beam. I'm sure I could have worded that better, but I think I got the gist of it. Well I make MA = 8820 kN-m and MB = 9420 kN-m, so the end moments are not equal. Can you tell us more about these notes? When it comes to statically indeterminate beams, it's harder. Solving the differential equations in Mathematica, I get: $$R_a=6095/72=84.6528...$$ $$R_b=6145/72=85.3472...$$ $$M_a=2165/12=180.417...$$ $$M_b=-2215/12=-184.583...$$ The force equation is seen to be solved since $Ra+Rb-20-30-10\times12=0$ and the torque equation is seen to be solved because $Ma+Mb-3\times 20-8\times 30+Rb\times 12-10\times 12\times 6=0$. Here I use counterclockwise torques as positive, and upward forces as positive, and I calculate the torque about x=0. Since the problem is statically indeterminate, you cannot use the force and torque equations to solve for the four unknowns. Looking at the solutions in the notes, the Ra and Rb given (85) are close to the actual solutions, so I tend to think that some approximations have been made in the solution, but I don't know where. I agree with the OP, the exact solution cannot be obtained without taking into account the moment at point B. Since this is important and not homework I have written it out at length in the attachments. I think that there was more to the professor's presentation, which is why I asked about this. First Fig 1 shows a fully restrained general beam with loads P1 and P2 and length L. As a result of the loads it is subject to shears VA and moments MA at end A and similarly at end B. In general MA ≠ MB and VA ≠ VB. It is vital to realise that these shears are different from the reactions attributable to a simply supported beam. Fig 2 breaks the loading and support down to two simply supported beams which are added by superposition to yield the original conditions. The first beam in Fig 2 offers two simple reactions R1 and R2 that resist loads P1 and P2. They can thus be calculated by the ordinary rules of mechanical equilibrium as done at the end of Fig 6 in the second attachment. The second beam in Fig 2 offers end moments MA and MB. The difference is balanced by a couple formed from the extra reactions R and the lever arm of the length of the beam. Fig 3 shows the equations that result. Now there are standard tables of 'fixed end moments' and Figs 4 and 5 show two extracts relevant to the OP beam loadings. In Fig 6 I have addressed the OP loading case and used the standard fixed end moments to calculate MA and MB. Using the difference I can then calculate R.
The final calculation is to calculate R1 and R2 and add and subtract R as appropriate to obtain the end shears. It is interesting to note that R1 = R2 = 85, the figure from the professor's calculation, so I suspect that what was presented in post #1 was this calculation only and the rest is missing. Oh dear, this is what happens when you rush something to post without proper editorial checks. Some arithmetical errors crept in. The method is sound, however. I have uploaded a corrected Fig 6 and calculations. These confirm the results by Rap, as did working the differential equations by hand. I have also just re-enrolled in kindergarten arithmetic class - sums.
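A quick arithmetic check, not part of the original thread, that the closed-form values quoted by Rap hang together. Substituting them into the two equilibrium equations stated above gives

$$R_a + R_b = \frac{6095}{72} + \frac{6145}{72} = 170 = 20 + 30 + 10\times 12,$$

$$M_a + M_b - 3\times 20 - 8\times 30 + 12 R_b - 10\times 12\times 6 = \frac{2165-2215}{12} + \frac{6145}{6} - 1020 = \frac{6120}{6} - 1020 = 0.$$

Reading off from that torque equation that the point loads sit 3 m and 8 m from end A on a 12 m span carrying a 10 kN/m distributed load, the standard fixed-end-moment formulas that the tables in Figs 4 and 5 tabulate ($Pab^2/L^2$ and $Pa^2b/L^2$ for a point load, $wL^2/12$ for a uniform load) give, up to the stated sign convention,

$$M_A = \frac{20\times 3\times 9^2}{12^2} + \frac{30\times 8\times 4^2}{12^2} + \frac{10\times 12^2}{12} = \frac{405 + 320 + 1440}{12} = \frac{2165}{12},$$

$$M_B = \frac{20\times 3^2\times 9}{12^2} + \frac{30\times 8^2\times 4}{12^2} + \frac{10\times 12^2}{12} = \frac{135 + 640 + 1440}{12} = \frac{2215}{12},$$

consistent with Rap's values and with Studiot's fixed-end-moment approach.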
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619444608688354, "perplexity_flag": "middle"}
http://topologicalmusings.wordpress.com/2008/05/22/free-boolean-algebras-truth-tables-disjunctive-normal-form-and-completeness-theorems/
Todd and Vishal’s blog Topological Musings # Free Boolean algebras, truth tables, disjunctive normal form, and completeness theorems May 22, 2008 in Boolean Algebra, Propositional Calculus | Tags: Boolean Algebra, completeness theorem, disjunctive normal form, truth tables Last time in this series on Stone duality, we observed a perfect duality between finite Boolean algebras and finite sets, which we called “baby Stone duality”: 1. Every finite Boolean algebra $B$ is obtained from a finite set $X$ by taking its power set (or set of functions $\hom(X, \mathbf{2})$ from $X$ to $\mathbf{2}$, with the Boolean algebra structure it inherits “pointwise” from $\mathbf{2} = \{0, 1\}$). The set $X$ may be defined to be $\mbox{Bool}(B, \mathbf{2})$, the set of Boolean algebra homomorphisms from $B$ to $\mathbf{2}$. 2. Conversely, every finite set $X$ is obtained from the Boolean algebra $B = \hom(X, \mathbf{2})$ by taking its “hom-set” $\mbox{Bool}(B, \mathbf{2})$. More precisely, there are natural isomorphisms $i_B: B \stackrel{\sim}{\to} \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}),$ $j_X: X \stackrel{\sim}{\to} \mbox{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$ in the categories of finite Boolean algebras and of finite sets, respectively. In the language of category theory, this says that these categories are (equivalent to) one another’s opposite — something I’ve been meaning to explain in more detail, and I promise to get to that, soon! In any case, this duality says (among other things) that finite Boolean algebras, no matter how abstractly presented, can be represented concretely as power sets. Today I’d like to apply this representation to free Boolean algebras (on finitely many generators). What is a free Boolean algebra? Again, the proper context for discussing this is category theory, but we can at least convey the idea: given a finite set $S$ of letters $x, y, z, \ldots$, consider the Boolean algebra $\mathbf{B}(S)$ whose elements are logical equivalence classes of formulas you can build up from the letters using the Boolean connectives $\wedge, \vee, \neg$ (and the Boolean constants $0, 1$), where two formulas $\phi, \phi'$ are defined to be logically equivalent if $\phi \leq \phi'$ and $\phi' \leq \phi$ can be inferred purely on the basis of the Boolean algebra axioms. This is an excellent example of a very abstract description of a Boolean algebra: syntactically, there are infinitely many formulas you can build up, and the logical equivalence classes are also infinite and somewhat hard to visualize, but the mess can be brought under control using Stone duality, as we now show. First let me cut to the chase, and describe the key property of free Boolean algebras. Let $A$ be any Boolean algebra; it could be a power set, the lattice of regular open sets in a topology, or whatever, and think of a function $f: S \to A$ from the set of letters to $A$ as modeling or interpreting the atomic formulas $x, y, z, \ldots$ as elements $f(x), f(y), f(z), \ldots$ of $A$. The essential property of the free Boolean algebra is that we can extend this interpretation $f$ in a unique way to a Boolean algebra map $\mathbf{B}(S) \to A$. The way this works is that we map a formula like $(x \wedge \neg y) \vee z$ to the obvious formula $(f(x) \wedge \neg f(y)) \vee f(z)$. 
This is well-defined on logical equivalence classes of formulas because if $p = q$ in $\mathbf{B}(S)$, i.e., if the equality is derivable just from the Boolean algebra axioms, then of course $f(p) = f(q)$ holds in $A$ as the Boolean algebra axioms hold in $A$. Thus, there is a natural bijective correspondence between functions $S \to A$ and Boolean algebra maps $\mathbf{B}(S) \to A$; to get back from a Boolean algebra map $\mathbf{B}(S) \to A$ to the function $S \to A$, simply compose the Boolean algebra map with the function $S \to \mathbf{B}(S)$ which interprets elements of $S$ as equivalence classes of atomic formulas in $\mathbf{B}(S)$. To get a better grip on $\mathbf{B}(S)$, let me pass to the Boolean ring picture (which, as we saw last time, is equivalent to the Boolean algebra picture). Here the primitive operations are addition and multiplication, so in this picture we build up “formulas” from letters using these operations (e.g., $(x + y) \cdot z$ and the like). In other words, the elements of $\mathbf{B}(S)$ can be considered as “polynomials” in the variables $x, y, z, \ldots$. Actually, there are some simplifying features of this polynomial algebra; for one thing, in Boolean rings we have idempotence. This means that $p^n = p$ for $n \geq 1$, and so a monomial term like $x^3 y^2$ reduces to its support $x y$. Since each letter appears in a support with exponent 0 or 1, it follows that there are $2^{|S|}$ possible supports or Boolean monomials, where $|S|$ denotes the cardinality of $S$. Idempotence also implies, as we saw last time, that $b + b = 0$ for all elements $b \in \mathbf{B}(S)$, so that our polynomials = $\mathbb{Z}$-linear combinations of monomials are really $\mathbb{Z}_2$-linear combinations of Boolean monomials or supports. In other words, each element of $\mathbf{B}(S)$ is uniquely a linear combination $\sum_{\sigma \in \mbox{supp}(S)} a_\sigma \sigma$ where $a_\sigma \in \{0, 1\},$ i.e., the set of supports $\mbox{supp}(S)$ forms a basis of $\mathbf{B}(S)$ as a $\mathbb{Z}_2$-vector space. Hence the cardinality of the free Boolean ring is $2^{|\mbox{supp}(S)|} = 2^{2^{|S|}}$. • Remark: This gives an algorithm for checking logical equivalence of two Boolean algebra formulas: convert the formulas into Boolean ring expressions, and using distributivity, idempotence, etc., write out these expressions as Boolean polynomials = $\mathbb{Z}_2$-linear combinations of supports. The Boolean algebra formulas are equivalent if and only if the corresponding Boolean polynomials are equal. But there is another way of understanding free Boolean algebras, via baby Stone duality. Namely, we have the power set representation $i: \mathbf{B}(S) \stackrel{\sim}{\to} \hom(\mbox{Bool}(\mathbf{B}(S), \mathbf{2}), \mathbf{2})$ where $\mbox{Bool}(\mathbf{B}(S), \mathbf{2})$ is the set of Boolean algebra maps $\mathbf{B}(S) \to \mathbf{2}$. However, the freeness property says that these maps are in bijection with functions $S \to \mathbf{2}$. What are these functions? They are just truth-value assignments for the elements (atomic formulas, or variables) $x, y, z, \ldots \in S$; there are again $2^{|S|}$ many of these. 
This leads to the method of truth tables: each formula $b \in \mathbf{B}(S)$ induces (in one-one fashion) a function $i(b): \mbox{Bool}(\mathbf{B}(S), \mathbf{2}) \to \mathbf{2}$ which takes a Boolean algebra map $\phi: \mathbf{B}(S) \to \mathbf{2}$, aka a truth-value assignment for the variables $x, y, z, \ldots$, to the element of $\{0, 1\}$ obtained by instantiating the assigned truth values $0, 1$ for the variables and evaluating the resulting Boolean expression for $b$ in $\mathbf{2}$. (In terms of power sets, $\mathbf{B}(S) \cong P(\mbox{Bool}(\mathbf{B}(S), \mathbf{2}))$ identifies each equivalence class of formulas $b \in \mathbf{B}(S)$ with the set of truth-value assignments of variables which render the formula $b$ “true” in $\{0, 1\}$.) The fact that the representation $b \mapsto i(b)$ is injective means precisely that if formulas $b, c$ are inequivalent, then there is a truth-value assignment which renders one of them “true” and the other “false”, hence that they are distinguishable by truth tables. • Remark: This is an instance of what is known as a completeness theorem in logic. On the syntactic side, we have a notion of provability of formulas (that $b$ is logically equivalent to $\top$, or $b = \top$ in $\mathbf{B}(S)$ if this is derivable from the Boolean algebra axioms). On the semantic side, each Boolean algebra homomorphism $\phi: \mathbf{B}(S) \to \mathbf{2}$ can be regarded as a model of $\mathbf{B}(S)$ in which each formula becomes true or false under $\phi$. The method of truth tables then says that there are enough models or truth-value assignments to detect provability of formulas, i.e., $b$ is provable if it is true when interpreted in any model $\phi$. This is precisely what is meant by a completeness theorem. There are still other ways of thinking about this. Let $\phi: B \to \mathbf{2}$ be a Boolean algebra map, aka a model of $B$. This model is completely determined by • The maximal ideal $\phi^{-1}(0)$ in the Boolean ring $B$, or • The maximal filter or ultrafilter $\phi^{-1}(1)$ in $B$. Now, as we saw last time, in the case of finite Boolean algebras, each (maximal) ideal is principal: is of the form $\{x \in B: x \leq b\}$ for some $b \in B$. Dually, each (ultra)filter is principal: is of the form $\{x \in B: c \leq x\}$ for some $c = \neg b \in B$. The maximality of the ultrafilter means that there is no nonzero element in $B$ smaller than $c$; we say that $c$ is an atom in $B$ (NB: not to be confused with atomic formula!). So, we can also say • A model of a finite Boolean algebra $B$ is specified by a unique atom of $B$. Thus, baby Stone duality asserts a Boolean algebra isomorphism $i: B \to P(\mbox{Atoms}(B)).$ Let’s give an example: consider the free Boolean algebra on three elements $x, y, z$. If you like, draw a Venn diagram generated by three planar regions labeled by $x, y, z$. The atoms or smallest nonzero elements of the free Boolean algebra are then represented by the $2^3 = 8$ regions demarcated by the Venn diagram. That is, the disjoint regions are labeled by the eight atoms $x \wedge y \wedge z, x \wedge y \wedge \neg z, x \wedge \neg y \wedge z, x \wedge \neg y \wedge \neg z,$ $\neg x \wedge y \wedge z, \neg x \wedge y \wedge \neg z, \neg x \wedge \neg y \wedge z, \neg x \wedge \neg y \wedge \neg z.$ According to baby Stone duality, any element in the free Boolean algebra (with $2^8 = 256$ elements) is uniquely expressible as a disjoint union of these atoms. 
Another way of saying this is that the atoms form a basis (alternative to Boolean monomials) of the free Boolean algebra as $\mathbb{Z}_2$-vector space. For example, as an exercise one may calculate $(x \Rightarrow y) \wedge z = x \wedge y \wedge z + \neg x \wedge y \wedge z + \neg x \wedge \neg y \wedge z.$ The unique expression of an element $b \in \mathbf{B}(S)$ (where $b$ is given by a Boolean formula) as a $\mathbb{Z}_2$-linear combination of atoms is called the disjunctive normal form of the formula. So yet another way of deciding when two Boolean formulas are logically equivalent is to put them both in disjunctive normal form and check whether the resulting expressions are the same. (It’s basically the same idea as checking equality of Boolean polynomials, except we are using a different vector space basis.) All of the above applies not just to free (finite) Boolean algebras, but to general finite Boolean algebras. So, suppose you have a Boolean algebra $B$ which is generated by finitely many elements $x_1, x_2, \ldots, x_n \in B$. Generated means that every element in $B$ can be expressed as a Boolean combination of the generating elements. In other words, “generated” means that if we consider the inclusion function $S = \{x_1, \ldots, x_n\} \hookrightarrow B$, then the unique Boolean algebra map $\phi: \mathbf{B}(S) \to B$ which extends the inclusion is a surjection. Thinking of $\phi$ as a Boolean ring map, we have an ideal $I = \phi^{-1}(0)$, and because $\phi$ is a surjection, it induces a ring isomorphism $B \cong \mathbf{B}(S)/I.$ The elements of $I$ can be thought of as equivalence classes of formulas which become false in $B$ under the interpretation $\phi$. Or, we could just as well (and it may be more natural to) consider instead the filter $F = \phi^{-1}(1)$ of formulas in $\mathbf{B}(S)$ which become true under the interpretation $\phi$. In any event, what we have is a propositional language $\mathbf{B}(S)$ consisting of classes of formulas, and a filter $F \subseteq \mathbf{B}(S)$ consisting of formulas, which can be thought of as theorems of $B$. Often one may find a filter $F$ described as the smallest filter which contains certain chosen elements, which one could then call axioms of $B$. In summary, any propositional theory (which by definition consists of a set $S$ of propositional variables together with a filter $F \subseteq \mathbf{B}(S)$ of the free Boolean algebra, whose elements are called theorems of the theory) yields a Boolean algebra $B = \mathbf{B}(S)/F$, where dividing out by $F$ means we take equivalence classes of elements of $\mathbf{B}(S)$ under the equivalence relation $b \sim c$ defined by the condition “$b \Leftrightarrow c$ belongs to $F$“. The partial order on equivalence classes [$b$] is defined by [$b$] $\leq$ [$c$] iff $b \Rightarrow c$ belongs to $F$. The Boolean algebra $B$ defined in this way is called the Lindenbaum algebra of the propositional theory. Conversely, any Boolean algebra $B$ with a specified set of generators $x_1, \ldots x_n$ can be thought of as the Lindenbaum algebra of the propositional theory obtained by taking the $x_i$ as propositional variables, together with the filter $\phi^{-1}(1)$ obtained from the induced Boolean algebra map $\phi: \mathbf{B}(S) \to B$. A model of the theory should be a Boolean algebra map $\mathbf{B}(S) \to \mathbf{2}$ which interprets the formulas of $\mathbf{B}(S)$ as true or false, but in such a way that the theorems of the theory (the elements of the filter) are all interpreted as “true”. 
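As a companion to the disjunctive-normal-form discussion, here is a short Python sketch (my own illustration, not the blog's) that computes the set of atoms below a formula by brute-force enumeration of the $2^{|S|}$ truth-value assignments; it reproduces the three atoms in the worked example $(x \Rightarrow y)\wedge z$ above.

```python
from itertools import product

variables = ['x', 'y', 'z']

def atoms(formula):
    """Return the truth assignments satisfying `formula`, i.e. the atoms below it."""
    result = []
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if formula(assignment):
            result.append(assignment)
    return result

def pretty(assignment):
    return " ^ ".join(v if assignment[v] else "~" + v for v in variables)

implies = lambda p, q: (not p) or q
example = lambda a: implies(a['x'], a['y']) and a['z']      # (x => y) ^ z

for a in atoms(example):
    print(pretty(a))
# prints the same three atoms as in the text's example (in some order)
```

Putting a formula in disjunctive normal form then amounts to listing its atoms, and two formulas are equivalent exactly when those lists agree: the same test as the truth-table method, just read off in a different basis.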
In other words, a model is the same thing as a Boolean algebra map $B \cong \mathbf{B}(S)/F \to \mathbf{2}.$ i.e., we may identify a model of a propositional theory with a Boolean algebra map $f: B \to \mathbf{2}$ out of its Lindenbaum algebra. So the set of models is the set $\mbox{Bool}(B, \mathbf{2})$, and now baby Stone duality, which gives a canonical isomorphism $i: B \cong \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}),$ implies the following Completeness theorem: If a formula of a finite propositional theory is “true” when interpreted under any model $\phi$ of the theory, then the formula is provable (is a theorem of the theory). Proof: Let $B$ be the Lindenbaum algebra of the theory, and let $b = [p] \in B$ be the class of formulas provably equivalent to a given formula $p$ under the theory. The Boolean algebra isomorphism $i$ takes an element $b \in B$ to the map $\phi \mapsto \phi(b)$. If $\phi(b) = 1$ for all models $\phi$, i.e., if $i(b) = 1$, then $b = 1$. But then [$p$] $= 1$, i.e., $p \in F$, the filter of provable formulas. $\Box$ In summary, we have developed a rich vocabulary in which Boolean algebras are essentially the same things as propositional theories, and where models are in natural bijection with maximal ideals in the Boolean ring, or ultrafilters in the Boolean algebra, or [in the finite case] atoms in the Boolean algebra. But as we will soon see, ultrafilters have a significance far beyond their application in the realm of Boolean algebras; in particular, they crop up in general studies of topology and convergence. This is in fact a vital clue; the key point is that the set of models or ultrafilters $\mbox{Bool}(B, \mathbf{2})$ carries a canonical topology, and the interaction between Boolean algebras and topological spaces is what Stone duality is all about.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 187, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9188070893287659, "perplexity_flag": "head"}
http://nrich.maths.org/5038/index
# Coordinate Challenge ##### Stage: 2 Challenge Level: Here is a grid: Can you position these ten letters in their correct places according to the eight clues below? Clues: The letters at $(1, 1),$ $(1, 2)$ and $(1, 3)$ are all symmetrical about a vertical line. The letter at $(4, 2)$ is not symmetrical in any way. The letters at $(1, 1),$ $(2, 1)$ and $(3, 1)$ are symmetrical about a horizontal line. The letters at $(0, 2),$ $(2, 0)$ have rotational symmetry. The letter at $(3, 1)$ consists of just straight lines. The letters at $(3, 3)$ and $(2, 0)$ consist of just curved lines. The letters at $(3, 3),$ $(3, 2)$ and $(3, 1)$ are consecutive in the alphabet. The letters at $(0, 2)$ and $(1, 2)$ are at the two ends of the alphabet.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9070565104484558, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/105320/the-equality-of-mixed-partial-derivatives
# The equality of mixed partial derivatives Let $f$ be a function that has continuous partial derivatives of order $m+1$ in an open ball $B(a,r)$ in $R^n$. Show that for $k=1, \dots, m$ the equation is valid: $$\frac1{k!}\sum_{j_1=1}^n \cdots \sum_{j_k=1}^n \frac{\partial^k f}{\partial x_{j_1} \cdots \partial x_{j_k}}(\mathbf a)x_{j_1}\cdots x_{j_k}=\sum_{|\mathbf{\alpha}|=k}\frac1{\alpha!}D_\alpha f(\mathbf a)\mathbf x^\alpha$$ where $\mathbf \alpha$ is an $n$-dimensional multi-index. - ## 1 Answer When you are told to compute all $k$-th partial derivatives of a function $f$ of $n$ variables $x_j$ then formally there are $n^k$ of them: For each word $(j_1,j_2,\ldots, j_k)\in [n]^k$ you differentiate first with respect to $x_{j_1}$, then with respect to $x_{j_2}$, $\ldots$, and finally with respect to $x_{j_k}$. On the left side of your equation you have the sum of all these $n^k$ derivatives (at the point ${\bf a}$), each multiplied by the corresponding monomial $x_{j_1}\ x_{j_2}\cdots\ x_{j_k}$. Any two mixed partial derivatives in which each variable has been affected the same number of times coincide, and at the same time the corresponding monomials $x_{j_1}\ x_{j_2}\cdots\ x_{j_k}$ coincide. Therefore, among the $n^k$ terms on the left of your equation, many are equal. On the right side equal terms have been collected. The multi-index $\alpha=(\alpha_1,\alpha_2,\ldots, \alpha_n)$ of weight $|\alpha|=k$ encodes how often each of the individual variables $x_i$ is affected: for each $i\in[n]$ you differentiate $\alpha_i$ times with respect to $x_i$ and, correspondingly, take the factor $x_i^{\alpha_i}$ into the accompanying monomial. Now it is an elementary combinatorial fact that for each multi-index $\alpha$ of weight $k$ there are exactly $k!/\alpha!$ terms on the left side which correspond to this $\alpha$. By the way: the large expressions considered here are generated when you develop the auxiliary function $$\phi(t)\ :=\ f({\bf a} + t{\bf x})\qquad (t\in{\mathbb R})$$ into a Taylor series with respect to the real variable $t$. In order to compute $\phi^{(k)}(t)$ you have to apply the chain rule $k$ times in succession, and each time the formal number of terms is multiplied by $n$. -
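For readers who want to see the bookkeeping in a concrete case, here is a small SymPy check of the identity for $n=2$ variables and $k=2$; the choice of $f$ and of the point $\mathbf a$ below is arbitrary, and the script is only a sanity check, not a proof.

```python
import itertools
from math import factorial
import sympy as sp

x1, x2, t1, t2 = sp.symbols('x1 x2 t1 t2')
xs, ts = [x1, x2], [t1, t2]
f = sp.exp(x1) * sp.sin(x2) + x1**3 * x2**2          # any smooth f will do
a = {x1: sp.Rational(1, 3), x2: sp.Rational(1, 2)}   # the point 'a'
k = 2

# Left side: sum over all n^k words (j_1, ..., j_k), then divide by k!
lhs = 0
for word in itertools.product(range(2), repeat=k):
    deriv = f
    for j in word:
        deriv = sp.diff(deriv, xs[j])                # differentiate in the order the word dictates
    term = deriv.subs(a)
    for j in word:
        term = term * ts[j]                          # the accompanying monomial x_{j_1}...x_{j_k}
    lhs += term
lhs = lhs / factorial(k)

# Right side: sum over multi-indices alpha = (a1, a2) with |alpha| = k
rhs = 0
for a1 in range(k + 1):
    alpha = (a1, k - a1)
    deriv = f
    for sym, cnt in zip(xs, alpha):
        for _ in range(cnt):
            deriv = sp.diff(deriv, sym)
    monomial = ts[0]**alpha[0] * ts[1]**alpha[1]
    rhs += deriv.subs(a) * monomial / (factorial(alpha[0]) * factorial(alpha[1]))

print(sp.simplify(lhs - rhs))                        # expected: 0
```

The left loop visits $2^2 = 4$ words while the right loop visits only the $3$ multi-indices, and the factor $k!/\alpha!$ is exactly the number of words collapsing to a given multi-index.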
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279111623764038, "perplexity_flag": "head"}
http://nrich.maths.org/6718
# Weekly Problem 36 - 2009 ##### Stage: 2 and 3 Short Challenge Level: The mean of a sequence of $64$ numbers is $64$. The mean of the first $36$ numbers is $36$. What is the mean of the last $28$ numbers? This problem is taken from the UKMT Mathematical Challenges.
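If you want to check an answer afterwards (spoiler), one possible route, though not necessarily the intended one, is to work with totals rather than means: the $64$ numbers sum to $64 \times 64 = 4096$ and the first $36$ of them sum to $36 \times 36 = 1296$, so the last $28$ numbers have mean $$\frac{4096 - 1296}{28} = \frac{2800}{28} = 100.$$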
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303107261657715, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/48056-least-upper-bound-property-proof.html
# Thread: 1. ## Least Upper Bound Property Proof Prove that a nonempty set S that is bounded above has a least upper bound. Proof. For this problem, we must use the following way to show the proof: Let $u_0 \in S$ and $N_0$ be an upper bound of S. Let $v_{0} = \frac {u_0 + N_0 }{2}$. If $v_0$ is an upper bound, let $N_1 = v_0$ and $u_1 = u_0$. If $v_0$ is not an upper bound, then let $N_1 = N_0$ and choose $u_1 \in S$ with $u_1 > v_0$. Repeat the process to obtain sequences $N_n$ and $u_n$; we have to show that they both converge to the LUB (S). To be honest, I'm a bit lost on how to generate $N_n$ and $u_n$, so if $v_0$ is not an upper bound, that means $N_0$ is still the least upper bound since $v_0 \in S$, right? Suppose it is not true, then we let $v_1$ be something lower, or closer to the sequence, but how should the sequences $N_n$ and $u_n$ look like? Thank you very much! 2. Originally Posted by tttcomrader Prove that a nonempty set S that is bounded above has a least upper bound. There is a real problem in offering any help in this case. This statement is usually known as the Completeness Axiom. If you are asked to prove what is an axiom in one development then there must be an axiom in the other development that helps prove the statement. Can you tell us what axioms you have been given that deal with bounded sets? What are your axioms to this point? 3. So far we know that: A convergent sequence is bounded. A monotone sequence that is bounded converges. And for the set S in the problem, S is a nonempty set in the set of real numbers. 4. Originally Posted by tttcomrader So far we know that: A convergent sequence is bounded. A monotone sequence that is bounded converges. And for the set S in the problem, S is a nonempty set in the set of real numbers. It is clear that you will use “A monotone sequence that is bounded converges.” However, I still do not know what else you have proven about the structure of the real numbers. Here is a fact that may help recall what you have proved about the real numbers. If S contains a point of U (the set of upper bounds of S), then that point is the LUB of S. So if there is no LUB of S both S & U can be shown to be open. Have you done anything with connectivity? 5. We haven't done anything with connected sets yet (at least not in this course). We went over field axioms and order axioms, well-ordered property, completeness property, and we talked about dense sets and equivalence classes. Haven't talked about open, closed, or connected. 6. Originally Posted by tttcomrader We haven't done anything with connected sets yet (at least not in this course). We went over field axioms and order axioms, well-ordered property, completeness property, and we talked about dense sets and equivalence classes. Haven't talked about open, closed, or connected. Well, I think that is what I asked to begin with: "What is the statement of the completeness property?" Because this is equivalent to that. 7. The statement is: "An ordered field is said to be complete if it obeys the monotone sequence property". I was working on this problem, and I understand the idea of the proof now. So if a is an upper bound, then M would change; if a is not an upper bound, then x would change. In other words, $M_n$ is a sequence that is approaching sup(S) from the positive side while $X_n$ does the same from the negative side. Since both sequences are monotonic, M being decreasing while X increasing, and they are both bounded by the sup(S), since if they move outside of their respective area they stop moving, sup(S) is the GLB of M and the LUB of x.
It is just that, I don't know how to write this correctly, since I don't know exactly what $M_n$ and $x_n$ are equal to. But here is what I think: So I have $M_n = \frac { M_{n-1} + x_0 }{2}$ On the other hand, I have: $x_n > \frac { x_{n-1} +M_0 }{2}$ Now, by construction, M has to be decreasing and x has to be increasing, and they are both bounded by sup(S), but how do I show that? Any hints would be appreciated, thank you!
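A numerical sketch of the bisection dynamics discussed in this thread may help to see why both sequences squeeze onto sup(S); the concrete set, the starting values, and the simplification of taking the new lower estimate to be the midpoint itself (rather than some element of S above it) are my own choices for illustration, not part of the intended proof.

```python
# S = {x in R : x > 0 and x^2 < 2} has supremum sqrt(2), and for this particular S
# "v is an upper bound of S" is a decidable test: v > 0 and v*v >= 2.
def is_upper_bound(v):
    return v > 0 and v * v >= 2

u = 1.0     # u_0: an element of S       (1^2 < 2)
N = 2.0     # N_0: an upper bound of S   (2^2 >= 2)

for n in range(60):
    v = (u + N) / 2.0
    if is_upper_bound(v):
        N = v        # the upper bounds decrease toward sup(S)
    else:
        u = v        # v < sup(S), so the lower estimates increase toward sup(S)

print(u, N)          # both are approximately 1.41421356..., i.e. sqrt(2) = sup(S)
```

One way to finish the argument (a sketch): each $u_n$ lies below every upper bound and each $N_n$ lies above every element of S, so both sequences are monotone and bounded; moreover $N_n - u_n \le (N_0 - u_0)/2^n$, so the monotone sequence property gives them a common limit, which is then checked to be the least upper bound.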
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9646329283714294, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/185990/find-all-ways-to-factor-a-number
# Find all ways to factor a number An example of what I'm looking for will probably explain the question best. 24 can be written as: • 12 · 2 • 6 · 2 · 2 • 3 · 2 · 2 · 2 • 8 · 3 • 4 · 2 · 3 • 6 · 4 I'm familiar with finding all the prime factors of a number ($24 = 3 · 2^3$), as well as all the factor pairs (24 = 12·2, 8·3, 6·4). I'm assuming one or both will form the basis of the answer, but I can't figure out an algorithm to find all the possible ways to represent a number as a product of 2 or more other numbers. So, what is a (preferably efficient) way to accomplish this? Note: this is not homework, it's just for my own knowledge. - Hint: Suppose $n=p_1^{r_1}p_2^{r_2} ... p_k^{r_k}$ - you need to pick a power of $p_1$ between 0 and $r_1$ ... – Mark Bennet Aug 23 '12 at 19:21 Already for $p^n$ the problem is difficult! We are then looking for the partitions of $n$ (see Wikipedia). – André Nicolas Aug 23 '12 at 19:28 @MarkBennet - I'm not getting the hint. Can you please expound a bit, either in a comment or an answer? – dj18 Aug 24 '12 at 14:40 ## 2 Answers That's called multiplicative partitions, and there is a generating function discovered by Oppenheim and MacMahon. You could use it. The list of the number of multiplicative partitions is on http://oeis.org/A001055 - Does the function generate all the partitions, or the number of such partitions? I am interested in generating all the partitions. – dj18 Aug 24 '12 at 14:38 @dj18: It only generates the number of partitions. Anyway, you could do some sort of "backtracking" with the divisors, e.g. 12=2.6=3.4→6=2.3,4=2.2 then doing the same with those numbers. – dot dot Sep 2 '12 at 10:59 Well if you have the prime factorization for a number (let's use your example of 24), then any combination of its prime factors must be a factor. $$24 = 3\times 2^3$$ So any combination of {3, 2, 2, 2} is a factor. The way you go about taking all subsets of a set in an efficient manner is more of a CS problem. But just to drive home the point: {{3}=3, {3, 2}=6, {3,2,2}=12,{3,2,2,2}=24, {2}=2, {2,2}=4, {2,2,2}=8} And then remember 1. - I see from here how to easily generate all the factors of a number, but I'm not sure how to apply this to my question, as I consider 2 * 2 to be distinct from 4. – dj18 Aug 23 '12 at 19:40 Oh, silly me! The set can be written: {{3}, {3, 2}, {6}, {3,2,2}, {12}, {3,2,2,2}, {24}, {2}, {2,2}, {4}, {2,2,2}, {8}}. So, to make sure I understand what you're saying: find all combinations of the prime factorization of the number, and then find subsets (from the set of combinations) whose product equals the original number - is that correct? – dj18 Aug 23 '12 at 20:40 @dj18 "Finding all combinations of the prime factorization of a number" is the same thing as "finding all (UNIQUE) subsets - the union of subsets - of the set of its prime factors." – VF1 Aug 24 '12 at 1:03 As to your consideration of 2*2 as distinct from 4, I guess that's true if you're looking for ALL representations of a number. But that being said, if you're looking into quantifying how many partitions a number has, then look to the answer below. I was talking about an approach to explicitly writing out all of these possible factors. – VF1 Aug 24 '12 at 1:07 Yes, I'm looking for all representations - not just the factors, and not just the number of representations. – dj18 Aug 24 '12 at 14:42
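Here is a small recursive Python sketch of the divisor-backtracking idea mentioned in the comments; the function name and the non-increasing-factor convention are my own choices, and efficiency is not the point.

```python
def multiplicative_partitions(n, max_factor=None):
    """List every way to write n as a product of integer factors >= 2,
    with factors in non-increasing order so each partition appears once."""
    if max_factor is None:
        max_factor = n
    partitions = []
    d = min(n, max_factor)
    while d >= 2:
        if n % d == 0:
            rest = n // d
            if rest == 1:
                partitions.append([d])
            else:
                for tail in multiplicative_partitions(rest, d):
                    partitions.append([d] + tail)
        d -= 1
    return partitions

for p in multiplicative_partitions(24):
    print(p)
# [24], [12, 2], [8, 3], [6, 4], [6, 2, 2], [4, 3, 2], [3, 2, 2, 2]
```

Dropping the one-factor entry `[24]` leaves exactly the six representations listed in the question, and the count of 7 (including the trivial one) matches the OEIS A001055 value for 24.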
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9102421402931213, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/26914/random-matrix-ensembles-from-bmn-model?answertab=oldest
# random matrix ensembles from BMN model My friends working on Thermalization of Black Holes explained that the solutions of their matrix-valued differential equations (from a numerical implementation of the Berenstein-Maldacena-Nastase matrix model) are chaotic. They are literally getting random matrices. For the eigenvalue spectrum, one would expect a semicircle distribution, but for finite N one gets something slightly different. The proof of the Wigner Semicircle Law comes from studying the GUE kernel $$K_N(\mu, \nu)=e^{-\frac{1}{2}(\mu^2+\nu^2)} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\mu)H_j(\nu)}{2^j j!}$$ The eigenvalue density comes from setting $\mu = \nu$. The Wigner semicircle law is then a Hermite polynomial identity $$\rho(\lambda)=e^{-\lambda^2} \cdot \frac{1}{\sqrt{\pi}} \sum_{j=0}^{N-1}\frac{H_j(\lambda)^2}{2^j j!} \approx \left\{\begin{array}{cc} \frac{\sqrt{2N}}{\pi} \sqrt{1 - \lambda^2/2N} & \text{if }|\lambda|< \sqrt{2N} \\ 0 & \text{if }|\lambda| > \sqrt{2N} \end{array} \right.$$ The asymptotics come from calculus identities like the Christoffel-Darboux formula. For finite-size matrices the eigenvalue distribution is not yet a semicircle. Plotting the eigenvalues of a random $4 \times 4$ matrix, the deviations from the semicircle law are noticeable with 100,000 trials and 0.05 bin size. GUE is in brown, GUE|trace=0 is in orange. Axes not scaled, sorry! Mathematica Code:

```
(* standard normal sample *)
num[] := RandomReal[NormalDistribution[0, 1]]
(* random N x N GUE matrix: complex Gaussian entries, then Hermitize *)
herm[N_] := (h = Table[(num[] + I num[])/Sqrt[2], {i, 1, N}, {j, 1, N}]; (h + Conjugate[Transpose[h]])/2)
n = 4; trials = 100000;
(* traceless ensemble: subtract Tr[mat]/n times the identity before taking eigenvalues *)
eigen = {};
Do[eigen = Join[(mat = herm[n]; mat = mat - Tr[mat] IdentityMatrix[n]/n ; Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
BinCounts[eigen, {-5, 5, 0.05}];
a = ListPlot[%, Joined -> True, PlotStyle -> Orange]
(* plain GUE ensemble for comparison *)
eigen = {};
Do[eigen = Join[(mat = herm[n]; mat = mat; Re[Eigenvalues[mat]]), eigen], {k, 1, trials}];
Histogram[eigen, {-5, 5, 0.05}]
BinCounts[eigen, {-5, 5, 0.05}];
b = ListPlot[%, Joined -> True, PlotStyle -> Brown]
Show[a, b]
```

My friend asks if the traceless GUE ensemble $H - \frac{1}{N} \mathrm{tr}(H)\,\mathbf{1}$ can be analyzed. The charts suggest we should still get a semicircle in the large $N$ limit. For finite $N$, the oscillations (relative to the semicircle) are very large. Maybe this has something to do with the related harmonic oscillator eigenstates. The trace divided by $N$ is the average eigenvalue, so the eigenvalues are being "recentered". We could imagine 4 perfectly centered fermions - they will repel each other. The joint distribution is: $$e^{-\lambda_1^2 -\lambda_2^2 - \lambda_3^2 - \lambda_4^2} \prod_{1 \leq i<j \leq 4} |\lambda_i - \lambda_j|^2$$ On average, the fermions will sit where the humps are. Their locations should be more pronounced now that their "center of mass" is fixed. - Interesting. Of course in the large dimension limit one expects no difference. However I am quite surprised to see such big differences for N=4. Sorry I have no answer for the time being, but I will follow this post. – user667 Jan 30 '12 at 14:46 Don't anybody close or migrate this just because some Mathematica code is displayed ... ! – Dilaton Dec 30 '12 at 17:37 ## 1 Answer The Christoffel–Darboux formula is not an asymptotic (in the sense of $N$ going to infinity) result, whereas the semicircle is valid for random matrices of infinite size. For finite matrices you obtain the oscillations you've got.
To see this check out and plot formula (97) in http://arxiv.org/abs/math-ph/0412017 As to the traceless GUE, I'm no expert but here is sth. I dug out, http://arxiv.org/abs/math/9909104 maybe start there. -
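A quick way to see those finite-$N$ oscillations without rerunning the Monte Carlo is to evaluate the Hermite sum from the question directly. The following Python sketch (not Mathematica, and with an arbitrary plotting range) compares the exact $N=4$ density $\rho_N(\lambda)=e^{-\lambda^2}\pi^{-1/2}\sum_{j<N} H_j(\lambda)^2/(2^j j!)$ with the semicircle $\frac{1}{\pi}\sqrt{2N-\lambda^2}$, in the normalization used in the question's kernel formula.

```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial.hermite import hermval      # physicists' Hermite polynomials H_j
from math import factorial, pi, sqrt

N = 4
lam = np.linspace(-4, 4, 801)

rho = np.zeros_like(lam)
for j in range(N):
    coeffs = np.zeros(j + 1)
    coeffs[j] = 1.0                               # coefficient vector picking out H_j
    Hj = hermval(lam, coeffs)
    rho += Hj**2 / (2.0**j * factorial(j))
rho *= np.exp(-lam**2) / sqrt(pi)

semicircle = np.sqrt(np.maximum(2 * N - lam**2, 0.0)) / pi

plt.plot(lam, rho, label="exact N = 4 density (Hermite sum)")
plt.plot(lam, semicircle, "--", label="semicircle approximation")
plt.xlabel("eigenvalue"); plt.legend(); plt.show()
```

The exact curve shows $N$ humps, in line with the "fermions sitting where the humps are" picture in the question, and approaches the semicircle only as $N$ grows; the traceless ensemble would need a modified kernel rather than this one.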
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8753582835197449, "perplexity_flag": "middle"}
http://explainingmaths.wordpress.com/2012/04/01/is-composition-of-functions-associative/
How to explain mathematical concepts in a way that students can understand # Is composition of functions associative? Posted on April 1, 2012 Is composition of functions associative? Well, of course it is. [Notes in italics added 30/7/12: In spite of the date of this post, it is not intended to be a joke (except in as much as my concerns here may appear amusing!). However, the title of the post might be somewhat misleading. I am not suggesting that there is actually any doubt about the truth of these standard results, namely that composition of functions is associative, and that evaluation of expressions with nested brackets is unambiguous. What I am really asking is whether the standard proof of the fact that composition of functions is associative is a fully convincing one.] I’m sure you have seen the standard proof that composition of functions is associative, but let me remind you how it goes. Let $W,X,Y$ and $Z$ be sets, and suppose that we are given functions $h:W \to X$, $g:X \to Y$ and $f:Y \to Z$ We show that $(f \circ g) \circ h = f \circ (g \circ h)$ as follows. Let $w \in W$. Then $((f \circ g) \circ h)(w) = (f \circ g)(h(w)) = f(g(h(w)))$ and $(f \circ (g \circ h))(w) = f( (g\circ h)(w)) = f(g(h(w)))$. Since $((f \circ g) \circ h)(w) = (f \circ (g \circ h))(w)$ for all $w\in W$, we have $(f \circ g) \circ h = f \circ (g \circ h)\,.$ Now perhaps it is just me, but I suspect that seeing so many brackets around here might make for some initial confusion for the students. But is there more to it than that? As mathematicians, we know how to evaluate expressions involving lots of brackets. In fact we all learned how to prioritize brackets over the other mathematical operations when we were at school. But somewhere implicit in this is a confidence that we know how to evaluate expressions involving nested brackets and that these expressions are unambiguous. But how do we know that evaluating expressions with nested brackets is unambiguous? Are we assuming here that composition of functions is associative? [Or maybe we need, at some earlier stage, to perform a check that is essentially equivalent to proving that composition of functions is associative?] That would then lead to circularity in the argument above. [Or at least it might render the standard proof less satisfactory?] Here is my suggestion for breaking out of this possible problem. First of all, how else could we define the function $f\circ g$. Perhaps we are happy at least with the definition $(f \circ g)(x) = f(g(x))$: after all, there doesn’t seem to be any ambiguity there. Still, let’s see this another way. Given $x \in X$, set $y = g(x) \in Y$, and then set $z = f(y) \in Z$. Then the function $f \circ g$ is the function $x \mapsto z$, where $z$ is defined in terms of $x$ as above. How can this possibly help? Well, if we return to the composition of functions, let’s see what the new definitions of $(f \circ g) \circ h$ and $f \circ (g \circ h)$ look like. Let $w \in W$. Set $x=h(w) \in X$, set $y=g(x) \in Y$ and set $z=f(y) \in Z$. Examining our definitions above, we discover that $z= (f \circ g)(x)$ where $x=h(w)$, and so $z = ((f\circ g)\circ h)(w)$ (and you could rewrite this in longhand in English if you want to avoid some of the brackets round the functions!) Similarly $y = (g\circ h)(w)$ and $z = f(y)$, giving us $z= (f \circ (g\circ h))(w)$ (and again, we can rewrite everything in English to avoid some of the brackets round the functions). The rest of the proof is as before. 
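Before moving on, here is a toy computational sanity check of the pointwise argument just given: for arbitrary finite "lookup-table" functions, the two bracketings agree at every point. This is evidence in a small finite setting, not a proof, and the set sizes below are arbitrary.

```python
import random

def compose(f, g):
    """Return the composite f o g, i.e. the function w -> f(g(w))."""
    return lambda w: f(g(w))

W, X, Y, Z = range(5), range(4), range(3), range(6)
for trial in range(100):
    h = {w: random.choice(X) for w in W}      # h : W -> X
    g = {x: random.choice(Y) for x in X}      # g : X -> Y
    f = {y: random.choice(Z) for y in Y}      # f : Y -> Z
    left = compose(compose(f.get, g.get), h.get)     # (f o g) o h
    right = compose(f.get, compose(g.get, h.get))    # f o (g o h)
    assert all(left(w) == right(w) for w in W)
print("the two bracketings agree pointwise in every trial")
```

None of this replaces the argument above, of course; it only spot-checks it on random finite examples.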
I’m not really sure whether this helps, or hinders in general! But perhaps there are a few more people like me out there who might be glad to see that there isn’t really any circularity in the proof after all. That is, if I haven’t introduced some new problems above … [Feedback so far suggests that I may be in a minority of one with my concerns about the standard proof!] This entry was posted in Mathematics, Teaching. ### 8 Responses to Is composition of functions associative? 1. Pingback: Composition of Functions is Associative | New Math Done Right 2. Toby Bailey There is no question of associativity in interpreting an expression formed from brackets, and therefore, as far as I can see no possibility of circularity. More to the point, *if* you seriously think that there is something to say about interpreting multiply bracketed expressions that your students do not already know, then surely you need to say it far earlier since this can hardly be the only place that they occur. Indeed, a similar complication of brackets is easily obtained in basic algebra. • Toby Bailey Just noticed the date of posting of the original article. As a joke, it’s a bit too similar to a lot of maths teaching. 3. Joel Toby, it wasn’t a joke. It was a personal response to the issue, and one which made me feel a bit happier about it! Some might find my comments helpful, others irrelevant. Let me ask: why do you think that there is no issue involved in evaluating nested brackets? I don’t claim to have resolved that issue here, except perhaps in the very special case which corresponds to function composition. Presumably the general case is discussed elsewhere? Joel • Toby Bailey I still don’t see what you are worrying about. Brackets are a means of writing unambiguous instructions to carry out multiple binary operations (or function evaluations of course). Associativity is about whether two different bracketed expressions are in fact equal as a consequence of the properties of the particular binary operation. There is not even the scope for circularity. I quite see that there is an issue to think about as to whether bracketed expressions do define a calculation to be done unambiguously, and doubtless somebody has written on this although it strikes me as obvious that they do. And, as I said before, if we are in any doubt about this then we should sort it out far earlier since it would affect every reasonably complicated calculation we ever do. I certainly have not experienced a failure to understand brackets as being an issue with UG maths teaching – although of course they are often used (or more often, not used) rather carelessly. But students will certainly find the associativity of function composition statement and proof difficult, but not, I would claim, because of imagining an ambiguity with brackets. I would suggest that the issue is probably in understanding the statement and the need for a proof. On the first, we are probably dealing with what has often been referred to as a “process-object” difficulty: the huge cognitive step from thinking of a given function as a process to be applied to an argument to thinking of a function as an object itself to which operations can be applied. One sees this at a more elementary level when students have difficulty reasoning about an abstract function about which only some properties are known, compared with a concrete one which can be evaluated on a calculator.
Secondly, there is the related difficulty with students not realising that the “rules of algebra” change according to the properties of the objects and operations. Having dealt at High School almost entirely with numbers, they are inclined to think of a(bc)=(ab)c as an absolute truth of mathematics rather than as a law that may or may not hold. I only realised that this may be a related problem as I wrote this: I suspect that the process-object difficulty causes students to fall back on using the “usual rules of algebra” because they are not yet equipped with the necessary schema to work with the functions as objects. • Joel The mathematics required to explain what it actually means to evaluate a mathematical expression is standard enough, but involves an inductive construction which I suspect would be somewhat tricky for a typical first-year mathematics undergraduate. So I doubt that this would be covered rigorously before students have to meet composition of functions. That means that students are taking certain things on trust at this point. When you take standard facts on trust (like the fact that you can evaluate mathematical expressions unambiguously), you don’t necessarily know which results are used/assumed when people actually give rigorous proofs of those facts. What I hope my post does is to give a short cut for those who, like me, might otherwise need to look at the full details involved in the inductive construction, and check that there is no hidden use of the associativity of composition of functions there. I suspect that I am, and will remain, in a vanishingly small minority in having had any concerns at all! I have certainly never come across a student with this concern, and I may never meet one. 4. Joel Somewhat related to my concerns here is a more advanced fact. If a Banach space $E$ is reflexive then so is its topological dual $E^*$. Some of my students asked me whether the following argument was acceptable: “Since $E$ is reflexive, identification allows us to write $E = E^{**}$ and so, taking $*$ of both sides gives $E^* = E^{***}\,,$ from which $E^*$ is also reflexive.” That attempted argument is definitely dubious! It is this type of concern that brought me back to wondering whether there is an issue. Perhaps there really is nothing to it. If you are not worried then please feel free to ignore this post. Part of the point of the post is to convince the rare concerned reader (someone like myself) that there really is no circularity after all here. 5. Neil Hey. Thanks for the post. I was confusing myself with this today. It was one of those ‘I know this is stupid but something doesn’t feel right’ moments. I also thought I would be the only one with an issue but apparently not. My confusion was also due to me wondering whether the fact that fg(x) = f(g(x)) comes before or after associativity. I’ve worked on it a bit and this is my present understanding. Which may be of some help. Take functions to be defined by their source, target and graph. I.e., ordered pairs with elements from given sets. Then this definition implies that composition is associative and it implies that fg(x) = f(g(x)). But now apparently fg(x) = f(g(x)) also implies associativity. Which is sort of weird:

Definition implies associativity.
Definition implies fg(x) = f(g(x)).
fg(x) = f(g(x)) implies associativity.

Maybe I should just reproduce the set theoretic proof of associativity. Which is:

(x,y) in f(gh)
= There exists z such that (x,z) in gh and (z,y) in f
= There exists z and w such that (x,w) in h and (w,z) in g and (z,y) in f
= There exists w and z such that (x,w) in h and (w,z) in g and (z,y) in f
= There exists w such that (x,w) in h and (w,y) in fg
= (x,y) in (fg)h

And the proof that fg(x) = f(g(x)):

Recall that fg(x) is the unique element y such that (x,y) is in fg. Or, formally stated:
{ fg(x) } = { y | (x,y) is in fg }
so then
{ fg(x) } = { y | there is a z such that (x,z) is in g and (z,y) is in f }
But since { g(x) } = { y | (x,y) is in g } we have { g(x) } = { z }, so z = g(x), and hence
{ fg(x) } = { y | (x,g(x)) is in g and (g(x),y) is in f }
but again we find that if (g(x), y) is in f then y = f(g(x)). So
{ fg(x) } = { f(g(x)) }
and finally fg(x) = f(g(x)).

So really the question is how best to define function composition and how best to prove associativity. It seems we can keep both associativity and that fg(x) = f(g(x)) by dropping the set theoretic definition and just defining fg to be all pairs (x, f(g(x))). This will lose a lot in other places though. As for the proof: Is it possible that we can have an operation * on functions where f*g(x) = f(g(x)) but * is not associative? Seems the answer is no. Is it possible that we can have an operation * on functions such that * is associative but f*g(x) is not f(g(x))? We have not shown otherwise. (Normal addition of functions an example?) How should we prove associativity? Personally I prefer the proof straight from the set theoretic definition. I don't think you should use theorems where definitions and axioms will do.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 40, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.959630012512207, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/133134-backgammon-match.html
# Thread: 1. ## Backgammon match I don't know how to solve this probability question, at first I thought it was a Bernoulli Random Variable question, but it's not. I mean, this is more like counting... but I don't know how to solve it... Can anyone help me? Here's my question: Consider a backgammon match with 17 games, each of which can have one of two outcomes: win (1 point), or loss (0 points). Find the number of all possible distinct score sequences under the following alternative assumptions. All 17 games are played. The match is stopped when one player wins more than half the games. 2. If all 17 games are played, all possible score sequences are determined by which games of the 17 player 1 wins. So either player 1 wins the first game or he loses it, and the same goes for the other 16 games. Thus, there are two scoring possibilities for each game. Since the result of any one game is independent of the others, we can multiply the number of possibilities for each game together, and determine that there are 2^17 score sequences. The other problem is a bit trickier, because we have to stop counting when one player reaches 9. Do you have any idea how to count the possibilities here? 3. Originally Posted by essedra Consider a backgammon match with 17 games, each of which can have one of two outcomes: win (1 point), or loss (0 points). Find the number of all possible distinct score sequences under the following alternative assumption: The match is stopped when one player wins more than half the games. Suppose that player A wins. That can be done in $\sum\limits_{k = 0}^8 {\binom{8+k}{k}}$ ways.
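A short brute-force check (my own, just to confirm the counts, and requiring Python 3.8+ for math.comb) agrees with both parts: $2^{17}$ sequences when all games are played, and $2\sum_{k=0}^{8}\binom{8+k}{k}$ when the match stops as soon as one player reaches 9 wins.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stopped_sequences(a=0, b=0):
    """Count win/loss sequences from the score (a, b), stopping when someone has 9 wins."""
    if a == 9 or b == 9:
        return 1                      # the match has just ended: one completed sequence
    return stopped_sequences(a + 1, b) + stopped_sequences(a, b + 1)

print(2 ** 17)                                      # 131072 sequences if all 17 games are played
print(stopped_sequences())                          # 48620 sequences for the stopped match
print(2 * sum(comb(8 + k, k) for k in range(9)))    # 48620 again, matching the posted sum
```

The factor of 2 appears because the sum in the answer counts only the sequences in which player A wins.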
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9626067876815796, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/electrostatics+resistors
# Tagged Questions 1answer 59 views ### how to find the diameter of this fuse? [closed] A fuse blows if the current exceeds 1.0 A. It is made of material that melts at a current density of 620A/cm2. What is the diameter of the wire, assumed to have a circular profile, in the fuse? 1answer 350 views ### Resistance between two points in an infinite metal sphere/cube Let's imagine that we have a tridimensional metal object of infinite size, and decide to calculate the resistance between two arbitrary points. How would we go about doing this? I have thought of two ... 1answer 871 views ### As ISA Practical - Resistors in Parallel [duplicate] Possible Duplicate: Current against the inverse of resistance graph, I = V/R +c How would you set up a circuit with a fixed resistor in parallel with a variable one. We are told to measure ... 4answers 974 views ### Resistance between two points on a conducting surface Suppose we have a cylindrical resistor, with resistance given by $R=\rho\cdot l/(\pi r^2)$ Let $d$ be the distance between two points in the interior of the resistor and let $r\gg d\gg l$. Ie. it is ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950026273727417, "perplexity_flag": "middle"}
http://medlibrary.org/medwiki/Particular_point_topology
# Particular point topology

In mathematics, the particular point topology (or included point topology) is a topology where sets are considered open if they are empty or contain a particular, arbitrarily chosen, point of the topological space. Formally, let X be any set and p ∈ X. The collection T = {S ⊆ X: p ∈ S or S = ∅} of subsets of X is then the particular point topology on X. There are a variety of cases which are individually named:

• If X = {0,1} we call X the Sierpiński space. This case is somewhat special and is handled separately.
• If X is finite (with at least 3 points) we call the topology on X the finite particular point topology.
• If X is countably infinite we call the topology on X the countable particular point topology.
• If X is uncountable we call the topology on X the uncountable particular point topology.

A generalization of the particular point topology is the closed extension topology. In the case when X \ {p} has the discrete topology, the closed extension topology is the same as the particular point topology. This topology is used to provide interesting examples and counterexamples.

## Properties

Closed sets have empty interior: Given an open set $A \subset X$, every $x \ne p$ is a limit point of A. So the closure of any open set other than $\emptyset$ is $X$. No closed set other than $X$ contains p, so the interior of every closed set other than $X$ is $\emptyset$.

### Connectedness Properties

Path and locally connected but not arc connected: The map $f(t) = \begin{cases} x & t=0 \\ p & t\in(0,1) \\ y & t=1 \end{cases}$ is a path for all x,y ∈ X. However, since {p} is open, the preimage of p under a continuous injection from [0,1] would be an open single point of [0,1], which is a contradiction.

Dispersion point (an example of a space with a dispersion point): p is a dispersion point for X. That is, X\{p} is totally disconnected.

Hyperconnected but not ultraconnected: Every open set contains p, hence X is hyperconnected. But if a and b are in X such that p, a, and b are three distinct points, then {a} and {b} are disjoint closed sets and thus X is not ultraconnected. Note that if X is the Sierpinski space then no such a and b exist and X is in fact ultraconnected.

### Compactness Properties

Closure of compact not compact: The set {p} is compact. However its closure (the closure of a compact set) is the entire space X, and if X is infinite this is not compact (since any set {t,p} is open). For similar reasons, if X is uncountable then we have an example where the closure of a compact set is not a Lindelöf space.

Pseudocompact but not weakly countably compact: First, there are no disjoint non-empty open sets (since all non-empty open sets contain p). Hence every continuous function to the real line must be constant, and hence bounded, proving that X is a pseudocompact space. Any set not containing p does not have a limit point; thus if X is infinite it is not weakly countably compact.

Locally compact but not strongly locally compact; both possibilities regarding global compactness: If x ∈ X then the set $\{x,p\}$ is a compact neighborhood of x. However, the closure of this neighborhood is all of X and hence X is not strongly locally compact.
In terms of global compactness, X is compact if and only if X is finite. That a finite space is compact is immediate; conversely, if X is infinite then $\bigcup_{x\in X} \{p,x\}$ is an open cover with no finite subcover.

### Limit related

Accumulation point but not an ω-accumulation point: If Y is some subset containing p, then any x different from p is an accumulation point of Y. However, x is not an ω-accumulation point, as {x,p} is a neighbourhood of x which does not contain infinitely many points from Y. Because this makes no use of the properties of Y, it leads to often-cited counterexamples.

Accumulation point as a set but not as a sequence: Take a sequence {a_i} of distinct elements that also contains p. As in the example above, the underlying set has any x different from p as an accumulation point. However, the sequence itself cannot possess an accumulation point y, for the neighbourhood {y,p} of y would then have to contain infinitely many of the distinct a_i.

### Separation related

T0: X is T0 (since {x, p} is open for each x) but satisfies no higher separation axioms (because all non-empty open sets must contain p).

Not regular: Since every nonempty open set contains p, no closed set not containing p (such as X\{p}) can be separated by neighbourhoods from {p}, and thus X is not regular. Since complete regularity implies regularity, X is not completely regular.

Not normal: Since every nonempty open set contains p, no two nonempty closed sets can be separated by neighbourhoods from each other, and thus X is not normal. Exception: the Sierpinski topology is normal, and even completely normal, since it contains no nontrivial separated sets.

Separability: {p} is dense and hence X is a separable space. However, if X is uncountable then X\{p} is not separable. This is an example of a subspace of a separable space not being separable.

Countability (first but not second): If X is uncountable then X is first countable but not second countable.

Comparable (homeomorphic topologies on the same set that are not comparable): Let $p,q\in X$ with $p\ne q$. Let $t_p = \{S\subset X \,|\, p\in S\}$ and $t_q = \{S\subset X \,|\, q\in S\}$. That is, t_q is the particular point topology on X with q being the distinguished point. Then (X,t_p) and (X,t_q) are homeomorphic incomparable topologies on the same set.

Density (no nonempty subset is dense in itself): Let S be a subset of X. If S contains p, then p is not a limit point of S, since {p} is open. If S does not contain p, then no point x of S is a limit point of S, since the neighbourhood {x,p} meets S only in x. Hence no nonempty subset of X is dense in itself.

Not first category: Any set containing p is dense in X and therefore not nowhere dense, so a union of nowhere dense subsets omits p and cannot be all of X. Hence X is not of the first category.

Subspaces: Every subspace of a space with the particular point topology that doesn't contain the particular point inherits the discrete topology.

Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Particular point topology", available in its original form here: http://en.wikipedia.org/w/index.php?title=Particular_point_topology
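As an aside, the definition at the top of this article is easy to sanity-check on a small finite set. The following Python sketch (the element names are arbitrary) simply verifies the open-set axioms for the particular point topology on a three-element set:

```python
from itertools import combinations

X = frozenset({'p', 'a', 'b'})
p = 'p'

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# T = { S subset of X : p in S } together with the empty set
T = [S for S in powerset(X) if p in S or not S]

assert frozenset() in T and X in T                 # contains the empty set and X
for A in T:
    for B in T:
        assert (A | B) in T                        # closed under unions (pairwise suffices on a finite set)
        assert (A & B) in T                        # closed under intersections
print(len(T), "open sets on a 3-point set")        # 5 = 2^2 + 1
```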
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113407135009766, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Parabola
# Parabola

For other uses, see Parabola (disambiguation).

Parabola, showing various features

A parabola (plural parabolas or parabolae, adjective parabolic, from Greek: παραβολή) is a two-dimensional, mirror-symmetrical curve, which is approximately U-shaped when oriented as shown in the diagram, but which can be in any orientation in its plane. It fits any of several superficially different mathematical descriptions which can all be proved to define curves of exactly the same shape.

One description of a parabola involves a point (the focus) and a line (the directrix). The focus does not lie on the directrix. The locus of points in that plane that are equidistant from both the directrix and the focus is the parabola. Another description of a parabola is as a conic section, created from the intersection of a right circular conical surface and a plane, which is parallel to a straight line on the conical surface and perpendicular to another plane which includes both the axis of the cone and also the same straight line on its surface.[1] A third description is algebraic. A parabola is a graph of a quadratic function, such as $y=x^2.$

The line perpendicular to the directrix and passing through the focus (that is, the line that splits the parabola through the middle) is called the "axis of symmetry". The point on the axis of symmetry that intersects the parabola is called the "vertex", and it is the point where the curvature is greatest. The distance between the vertex and the focus, measured along the axis of symmetry, is the "focal length". The "latus rectum" is the chord of the parabola which is parallel to the directrix and passes through the focus. Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola; that is, all parabolas are geometrically similar.

Parabolas have the property that, if they are made of material that reflects light, then light which enters a parabola travelling parallel to its axis of symmetry is reflected to its focus, regardless of where on the parabola the reflection occurs. Conversely, light that originates from a point source at the focus is reflected ("collimated") into a parallel beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other forms of energy. This reflective property is the basis of many practical uses of parabolas.

The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors to the design of ballistic missiles. They are frequently used in physics, engineering, and many other areas.

Strictly, the adjective parabolic should be applied only to things that are shaped as a parabola, which is a two-dimensional shape. However, as shown in the last paragraph, the same adjective is commonly used for three-dimensional objects, such as parabolic reflectors, which are really paraboloids. Sometimes, the noun parabola is also used to refer to these objects. Though not perfectly correct, this usage is generally understood.

Parabolic curve showing directrix (L) and focus (F). The distance from any point on the parabola to the focus (PnF) equals the perpendicular distance from the same point on the parabola to the directrix (PnQn).

Parabolic curve showing chord (L), focus (F), and vertex (V). L is an arbitrary chord of the parabola perpendicular to its axis of symmetry, which passes through V and F.
(The ends of the chord are not shown here.) The lengths of all paths Qn - Pn - F are the same, equalling the distance between the chord L and the directrix. (See previous diagram above.) This is similar to saying that a parabola is an ellipse, but with one focal point at infinity. It also directly implies, by the wave nature of light, that parallel light arriving along the lines Qn - Pn will be reflected to converge at F. A linear wavefront along L is concentrated, after reflection, to the one point where all parts of it have travelled equal distances and are in phase, namely F. No consideration of angles is required. Parabolic graph of quadratic function y=6x2+4x-8 A parabola obtained as the intersection of a cone with a plane parallel to a straight line on its surface. The plane of the lower diagram includes the axis of the cone and the straight line on its surface, which runs down the right-hand edge of the triangular section. The plane that intersects the cone to form the parabola is perpendicular to the plane of the diagram, and therefore appears as a line. The parabola is a member of the family of conic sections ## History Parabolic compass designed by Leonardo da Vinci The earliest known work on conic sections was by Menaechmus in the fourth century BC. He discovered a way to solve the problem of doubling the cube using parabolae. (The solution, however, does not meet the requirements imposed by compass and straightedge construction.) The area enclosed by a parabola and a line segment, the so-called "parabola segment", was computed by Archimedes via the method of exhaustion in the third century BC, in his The Quadrature of the Parabola. The name "parabola" is due to Apollonius, who discovered many properties of conic sections. It means "application", referring to "application of areas" concept, that has a connection with this curve, as Apollonius had proved.[2] The focus–directrix property of the parabola and other conics is due to Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope.[3] Designs were proposed in the early to mid seventeenth century by many mathematicians including René Descartes, Marin Mersenne,[4] and James Gregory.[5] When Isaac Newton built the first reflecting telescope in 1668 he skipped using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror. Parabolic mirrors are used in most modern reflecting telescopes and in satellite dishes and radar receivers.[6] ## Equation in Cartesian coordinates Let the directrix be the line x = −p and let the focus be the point (p, 0). If (x, y) is a point on the parabola then, by Pappus' definition of a parabola, it is the same distance from the directrix as the focus; in other words: $x+p=\sqrt{(x-p)^2+y^2}.$ Squaring both sides and simplifying produces $y^2 = 4px\$ as the equation of the parabola. By interchanging the roles of x and y one obtains the corresponding equation of a parabola with a vertical axis as $x^2 = 4py.\$ The equation can be generalized to allow the vertex to be at a point other than the origin by defining the vertex as the point (h, k). 
The equation of a parabola with a vertical axis then becomes $(x-h)^{2}=4p(y-k).\,$ The last equation can be rewritten $y=ax^2+bx+c\,$ so the graph of any function which is a polynomial of degree 2 in x is a parabola with a vertical axis. More generally, a parabola is a curve in the Cartesian plane defined by an irreducible equation — one that does not factor as a product of two not necessarily distinct linear equations — of the general conic form $A x^{2} + B xy + C y^{2} + D x + E y + F = 0 \,$ with the parabola restriction that $B^{2} = 4 AC,\,$ where all of the coefficients are real and where A and C are not both zero. The equation is irreducible if and only if the determinant of the 3×3 matrix $\begin{bmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{bmatrix}.$ is non-zero: that is, if (AC − B2/4)F + BED/4 − CD2/4 − AE2/4 ≠ 0. The reducible case, also called the degenerate case, gives a pair of parallel lines, possibly real, possibly imaginary, and possibly coinciding with each other.[7] ## Conic section and quadratic form Cone with cross-sections The diagram represents a cone with its axis vertical.[8] The point A is its apex. A horizontal cross-section of the cone passes through the points B, E, C, and D. This cross-section is circular, but appears elliptical when viewed obliquely, as is shown in the diagram. An inclined cross-section of the cone, shown in pink, is inclined from the vertical by the same angle, θ, as the side of the cone. According to the definition of a parabola as a conic section, the boundary of this pink cross-section, EPD, is a parabola. The cone also has another horizontal cross-section, which passes through the vertex, P, of the parabola, and is also circular, with a radius which we will call r. Its centre is V, and PK is a diameter. The chord BC is a diameter of the lower circle, and passes through the point M, which is the midpoint of the chord ED. Let us call the lengths of the line segments EM and DM x, and the length of PM y. Thus: $BM=2y\sin{\theta}.$   (The triangle BPM is isosceles.) $CM=2r.$   (PMCK is a parallelogram.) Using the intersecting chords theorem on the chords BC and DE, we get: $EM \cdot DM=BM \cdot CM$ Substituting: $x^2=4ry\sin{\theta}$ Rearranging: $y=\frac{x^2}{4r\sin{\theta}}$ For any given cone and parabola, r and θ are constants, but x and y are variables which depend on the arbitrary height at which the horizontal cross-section BECD is made. This last equation is a simple quadratic one which describes how x and y are related to each other, and therefore defines the shape of the parabolic curve. This shows that the definition of a parabola as a conic section implies its definition as the graph of a quadratic function. Both definitions produce curves of exactly the same shape. ### Focal length It is proved below that if a parabola has an equation of the form y = ax2, where a is a constant, then $a=\frac{1}{4f},$ where f is its focal length. Comparing this with the last equation above shows that the focal length of the above parabola is r sin θ. ### Position of the focus If a line is perpendicular to the plane of the parabola and passes through the centre, V, of the horizontal cross-section of the cone passing through P, then the point where this line intersects the plane of the parabola is the focus of the parabola, which is marked F on the diagram. Angle VPF is complementary to θ, and angle PVF is complementary to angle VPF, therefore angle PVF is θ. 
Since the length of PV is r, this construction correctly places the focus on the axis of symmetry of the parabola, at a distance r sin θ from its vertex. ## Other geometric definitions A parabola may also be characterized as a conic section with an eccentricity of 1. As a consequence of this, all parabolae are similar, meaning that while they can be different sizes, they are all the same shape. A parabola can also be obtained as the limit of a sequence of ellipses where one focus is kept fixed as the other is allowed to move arbitrarily far away in one direction. In this sense, a parabola may be considered an ellipse that has one focus at infinity. The parabola is an inverse transform of a cardioid. A parabola has a single axis of reflective symmetry, which passes through its focus and is perpendicular to its directrix. The point of intersection of this axis and the parabola is called the vertex. A parabola spun about this axis in three dimensions traces out a shape known as a paraboloid of revolution. The parabola is found in numerous situations in the physical world (see below). ## Equations ### Cartesian In the following equations $h$ and $k$ are the coordinates of the vertex, $(h,k)$, of the parabola and $p$ is the distance from the vertex to the focus and the vertex to the directrix. #### Vertical axis of symmetry $(x - h)^2 = 4p(y - k) \,$ $y =\frac{(x-h)^2}{4p}+k\,$ $y = ax^2 + bx + c \,$ where $a = \frac{1}{4p}; \ \ b = \frac{-h}{2p}; \ \ c = \frac{h^2}{4p} + k; \ \$ $h = \frac{-b}{2a}; \ \ k = \frac{4ac - b^2}{4a}$. Parametric form: $x(t) = 2pt + h; \ \ y(t) = pt^2 + k \,$ #### Horizontal axis of symmetry $(y - k)^2 = 4p(x - h) \,$ $x =\frac{(y - k)^2}{4p} + h;\ \,$ $x = ay^2 + by + c \,$ where $a = \frac{1}{4p}; \ \ b = \frac{-k}{2p}; \ \ c = \frac{k^2}{4p} + h; \ \$ $h = \frac{4ac - b^2}{4a}; \ \ k = \frac{-b}{2a}$. Parametric form: $x(t) = pt^2 + h; \ \ y(t) = 2pt + k \,$ #### General parabola The general form for a parabola is $(\alpha x+\beta y)^2 + \gamma x + \delta y + \epsilon = 0 \,$ This result is derived from the general conic equation given below: $Ax^2 +Bxy + Cy^2 + Dx + Ey + F = 0 \,$ and the fact that, for a parabola, $B^2=4AC \,$. The equation for a general parabola with a focus point F(u, v), and a directrix in the form $ax+by+c=0 \,$ is $\frac{\left(ax+by+c\right)^2}{{a}^{2}+{b}^{2}}=\left(x-u\right)^2+\left(y-v\right)^2 \,$ ### Latus rectum, semilatus rectum, and polar coordinates In polar coordinates, a parabola with the focus at the origin and the directrix parallel to the y-axis, is given by the equation $r (1 + \cos \theta) = l \,$ where l is the semilatus rectum: the distance from the focus to the parabola itself, measured along a line perpendicular to the axis of symmetry. Note that this equals the perpendicular distance from the focus to the directrix, and is twice the focal length, which is the distance from the focus to the vertex of the parabola. The latus rectum is the chord that passes through the focus and is perpendicular to the axis of symmetry. It has a length of 2l. ### Gauss-mapped form A Gauss-mapped form: $(\tan^2\phi,2\tan\phi)$ has normal $(\cos\phi,\sin\phi)$. ## Proof of the reflective property Reflective property of a parabola The reflective property states that, if a parabola can reflect light, then light which enters it travelling parallel to the axis of symmetry is reflected to the focus. This is derived from the wave nature of light in the caption to a diagram near the top of this article. 
This derivation is valid, but may not be satisfying to readers who would prefer a mathematical approach. In the following proof, the fact that every point on the parabola is equidistant from the focus and from the directrix is taken as axiomatic. Consider the parabola $y=x^2.$ Since all parabolas are similar, this simple case represents all others. The right-hand side of the diagram shows part of this parabola. Construction and definitions The point E is an arbitrary point on the parabola, with coordinates $(x,x^2).$ The focus is F, the vertex is A (the origin), and the line FA (the y-axis) is the axis of symmetry. The line EC is parallel to the axis of symmetry, and intersects the x-axis at D. The point C is located on the directrix (which is not shown, to minimize clutter). The point B is the midpoint of the line segment FC. Deductions Measured along the axis of symmetry, the vertex, A, is equidistant from the focus, F, and from the directrix. Correspondingly, since C is on the directrix, the y-coordinates of F and C are equal in absolute value and opposite in sign. B is the midpoint of FC, so its y-coordinate is zero, so it lies on the x-axis. Its x-coordinate is half that of E, D, and C, i.e. $\frac{{x}}{{2}}.$ The slope of the line BE is the quotient of the lengths of ED and BD, which is $\frac{x^2}{\left(\frac{x}{2}\right)},$ which comes to $2x.$ But $2x$ is also the slope (first derivative) of the parabola at E. Therefore the line BE is the tangent to the parabola at E. The distances EF and EC are equal because E is on the parabola, F is the focus and C is on the directrix. Therefore, since B is the midpoint of FC, triangles FEB and CEB are congruent (three sides), which implies that the angles marked $\alpha$ are equal. (The angle above E is vertically opposite angle BEC.) This means that a ray of light which enters the parabola and arrives at E travelling parallel to the axis of symmetry will be reflected by the line BE so it travels along the line EF, as shown in red in the diagram (assuming that the lines can somehow reflect light). Since BE is the tangent to the parabola at E, the same reflection will be done by an infinitessimal arc of the parabola at E. Therefore, light that enters the parabola and arrives at E travelling parallel to the axis of symmetry of the parabola is reflected by the parabola toward its focus. The point E has no special characteristics. This conclusion about reflected light applies to all points on the parabola, as is shown on the left side of the diagram. This is the reflective property. ### Tangent bisection property The above proof, and the accompanying diagram, show that the tangent BE bisects the angle FEC. In other words, the tangent to the parabola at any point bisects the angle between the lines joining the point to the focus, and perpendicularly to the directrix. ### Alternative proofs Parabola and tangent The above proofs of the reflective and tangent bisection properties use a line of calculus. For readers who are not comfortable with calculus, the following alternative is presented. In this diagram, F is the focus of the parabola, and T and U lie on its directrix. P is an arbitrary point on the parabola. PT is perpendicular to the directrix, and the line MP bisects angle FPT. Q is another point on the parabola, with QU perpendicular to the directrix. We know that FP=PT and FQ=QU. Clearly, QT>QU, so QT>FQ. All points on the bisector MP are equidistant from F and T, but Q is closer to F than to T. This means that Q is to the "left" of MP, i.e. 
on the same side of it as the focus. The same would be true if Q were located anywhere else on the parabola (except at the point P), so the entire parabola, except the point P, is on the focus side of MP. Therefore MP is the tangent to the parabola at P. Since it bisects the angle FPT, this proves the tangent bisection property. The logic of the last paragraph can be applied to modify the above proof of the reflective property. It effectively proves the line BE to be the tangent to the parabola at E if the angles $\alpha$ are equal. The reflective property follows as shown previously. ## Two tangent properties Let the line of symmetry intersect the parabola at point Q, and denote the focus as point F and its distance from point Q as f. Let the perpendicular to the line of symmetry, through the focus, intersect the parabola at a point T. Then (1) the distance from F to T is 2f, and (2) a tangent to the parabola at point T intersects the line of symmetry at a 45° angle.[9]:p.26 ## Orthoptic property Perpendicular tangents intersect on the directrix Main article: Isoptic If two tangents to a parabola are perpendicular to each other, then they intersect on the directrix. Conversely, two tangents which intersect on the directrix are perpendicular. Proof Without loss of generality, consider the parabola $y=x^2.$ Suppose that two tangents contact this parabola at the points $(p,p^2)$ and $(q,q^2).$ Their slopes are $2p$ and $2q,$ respectively. Thus the equation of the first tangent is of the form $y=2px+C,$ where $C$ is a constant. In order to make the line pass through $(p,p^2),$ the value of $C$ must be $-p^2,$ so the equation of this tangent is $y=2px-p^2.$ Likewise, the equation of the other tangent is $y=2qx-q^2.$ At the intersection point of the two tangents, $2px-p^2=2qx-q^2.$ Thus $2x(p-q)=p^2-q^2.$ Factoring the difference of squares, cancelling, and dividing by 2 gives $x=\frac{p+q}{2}.$ Substituting this into one of the equations of the tangents gives an expression for the y-coordinate of the intersection point: $y=2p\left(\frac{p+q}{2}\right)-p^2.$ Simplifying this gives $y=pq.$ We now use the fact that these tangents are perpendicular. The product of the slopes of perpendicular lines is −1, assuming that both of the slopes are finite. The slopes of our tangents are $2p$ and $2q,$, so $(2p)(2q)=-1,$ so $pq=-\frac{1}{4}.$ Thus the y-coordinate of the intersection point of the tangents is given by $y=-\frac{1}{4}.$ This is also the equation of the directrix of this parabola, so the two perpendicular tangents intersect on the directrix. ## Dimensions of parabolas with axes of symmetry parallel to the y-axis These parabolas have equations of the form $y=ax^2+bx+c.$ By interchanging $x$ and $y,$ the parabolas' axes of symmetry become parallel to the x-axis. Some features of a parabola ### Coordinates of the vertex The x-coordinate at the vertex is $x=-\frac{b}{2a}$, which is found by differentiating the original equation $y=ax^2+bx+c$, setting the resulting $dy/dx=2ax+b$ equal to zero (a critical point), and solving for $x$. 
Substitute this x-coordinate into the original equation to yield: $y=a\left (-\frac{b}{2a}\right )^2 + b \left ( -\frac{b}{2a} \right ) + c.$ Simplifying: $=\frac{ab^2}{4a^2} -\frac{b^2}{2a} + c$ Put terms over a common denominator $=\frac{b^2}{4a} -\frac{2\cdot b^2}{2\cdot 2a} + c\cdot\frac{4a}{4a}$ $=\frac{-b^2+4ac}{4a}$ $=-\frac{b^2-4ac}{4a}=-\frac{D}{4a}$ where $D$ is the discriminant, $(b^2-4ac).$ Thus, the vertex is at point $\left (-\frac{b}{2a},-\frac{D}{4a}\right ).$ ### Coordinates of the focus Since the axis of symmetry of this parabola is parallel with the y-axis, the x-coordinates of the focus and the vertex are equal. The coordinates of the vertex are calculated in the preceding section. The x-coordinate of the focus is therefore also $-\frac{b}{2a}.$ To find the y-coordinate of the focus, consider the point, P, located on the parabola where the slope is 1, so the tangent to the parabola at P is inclined at 45 degrees to the axis of symmetry. Using the reflective property of a parabola, we know that light which is initially travelling parallel to the axis of symmetry is reflected at P toward the focus. The 45-degree inclination causes the light to be turned 90 degrees by the reflection, so it travels from P to the focus along a line that is perpendicular to the axis of symmetry and to the y-axis. This means that the y-coordinate of P must equal that of the focus. By differentiating the equation of the parabola and setting the slope to 1, we find the x-coordinate of P: $y=ax^2+bx+c,$ $\frac{dy}{dx}=2ax+b=1$ $\therefore x=\frac{1-b}{2a}$ Substituting this value of $x$ in the equation of the parabola, we find the y-coordinate of P, and also of the focus: $y=a\left(\frac{1-b}{2a}\right)^2+b\left(\frac{1-b}{2a}\right)+c$ $=a\left(\frac{1-2b+b^2}{4a^2}\right)+\left(\frac{b-b^2}{2a}\right)+c$ $=\left(\frac{1-2b+b^2}{4a}\right)+\left(\frac{2b-2b^2}{4a}\right)+c$ $=\frac{1-b^2}{4a}+c=\frac{1-(b^2-4ac)}{4a}=\frac{1-D}{4a}$ where $D$ is the discriminant, $(b^2-4ac),$ as is used in the "Coordinates of the vertex" section. The focus is therefore the point: $\left(-\frac{b}{2a},\frac{1-D}{4a}\right)$ ### Axis of symmetry, focal length, and directrix The above coordinates of the focus of a parabola of the form: $y=ax^2+bx+c$ can be compared with the coordinates of its vertex, which are derived in the section "Coordinates of the vertex", above, and are: $\left(\frac{-b}{2a},\frac{-D}{4a}\right)$ where $D=b^2-4ac.$ The axis of symmetry is the line which passes through both the focus and the vertex. In this case, it is vertical, with equation: $x=-\frac{b}{2a}$. The focal length of the parabola is the difference between the y-coordinates of the focus and the vertex: $f=\left(\frac{1-D}{4a}\right)-\left(\frac{-D}{4a}\right)$ $=\frac{1}{4a}$ It is sometimes useful to invert this equation and use it in the form: $a=\frac{1}{4f}.$ See the section "Conic section and quadratic form", above. Measured along the axis of symmetry, the vertex is the midpoint between the focus and the directrix. Therefore, the equation of the directrix is: $y=-\frac{D}{4a}-\frac{1}{4a}=-\frac{1+D}{4a}$ ## Length of an arc of a parabola If a point X is located on a parabola which has focal length $f,$ and if $p$ is the perpendicular distance from X to the axis of symmetry of the parabola, then the lengths of arcs of the parabola which terminate at X can be calculated from $f$ and $p$ as follows, assuming they are all expressed in the same units. 
$h=\frac{p}{2}$ $q=\sqrt{f^2+h^2}$ $s=\frac{hq}{f}+f\ln\left(\frac{h+q}{f}\right)$ This quantity, $s$, is the length of the arc between X and the vertex of the parabola. The length of the arc between X and the symmetrically opposite point on the other side of the parabola is $2s.$ The perpendicular distance, $p$, can be given a positive or negative sign to indicate on which side of the axis of symmetry X is situated. Reversing the sign of $p$ reverses the signs of $h$ and $s$ without changing their absolute values. If these quantities are signed, the length of the arc between any two points on the parabola is always shown by the difference between their values of $s.$ The calculation can be simplified by using the properties of logarithms: $s_1 - s_2 = \frac{h_1 q_1 - h_2 q_2}{f} +f \ln \left(\frac{h_1 + q_1}{h_2 + q_2}\right)$ This can be useful, for example, in calculating the size of the material needed to make a parabolic reflector or parabolic trough. This calculation can be used for a parabola in any orientation. It is not restricted to the situation where the axis of symmetry is parallel to the y-axis. (Note: In the above calculation, the square-root, $q$, must be positive. The quantity ln(a), sometimes written as loge(a), is the natural logarithm of a, i.e. its logarithm to base "e".) ## Parabolae in the physical world In nature, approximations of parabolae and paraboloids (such as catenary curves) are found in many diverse situations. The best-known instance of the parabola in the history of physics is the trajectory of a particle or body in motion under the influence of a uniform gravitational field without air resistance (for instance, a baseball flying through the air, neglecting air friction). The parabolic trajectory of projectiles was discovered experimentally by Galileo in the early 17th century, who performed experiments with balls rolling on inclined planes. He also later proved this mathematically in his book Dialogue Concerning Two New Sciences.[10][11] For objects extended in space, such as a diver jumping from a diving board, the object itself follows a complex motion as it rotates, but the center of mass of the object nevertheless forms a parabola. As in all cases in the physical world, the trajectory is always an approximation of a parabola. The presence of air resistance, for example, always distorts the shape, although at low speeds, the shape is a good approximation of a parabola. At higher speeds, such as in ballistics, the shape is highly distorted and does not resemble a parabola. Another hypothetical situation in which parabolae might arise, according to the theories of physics described in the 17th and 18th Centuries by Sir Isaac Newton, is in two-body orbits; for example the path of a small planetoid or other object under the influence of the gravitation of the Sun. Parabolic orbits do not occur in nature; simple orbits most commonly resemble hyperbolas or ellipses. The parabolic orbit is the degenerate intermediate case between those two types of ideal orbit. An object following a parabolic orbit would travel at the exact escape velocity of the object it orbits; objects in elliptical or hyperbolic orbits travel at less or greater than escape velocity, respectively. Long-period comets travel close to the Sun's escape velocity while they are moving through the inner solar system, so their paths are close to being parabolic. Approximations of parabolae are also found in the shape of the main cables on a simple suspension bridge. 
The curve of the chains of a suspension bridge is always an intermediate curve between a parabola and a catenary, but in practice the curve is generally nearer to a parabola, and in calculations the second degree parabola is used.[12][13] Under the influence of a uniform load (such as a horizontal suspended deck), the otherwise catenary-shaped cable is deformed toward a parabola. Unlike an inelastic chain, a freely hanging spring of zero unstressed length takes the shape of a parabola. Suspension-bridge cables are, ideally, purely in tension, without having to carry other, e.g. bending, forces. Similarly, the structures of parabolic arches are purely in compression. Paraboloids arise in several physical situations as well. The best-known instance is the parabolic reflector, which is a mirror or similar reflective device that concentrates light or other forms of electromagnetic radiation to a common focal point, or conversely, collimates light from a point source at the focus into a parallel beam. The principle of the parabolic reflector may have been discovered in the 3rd century BC by the geometer Archimedes, who, according to a legend of debatable veracity,[14] constructed parabolic mirrors to defend Syracuse against the Roman fleet, by concentrating the sun's rays to set fire to the decks of the Roman ships. The principle was applied to telescopes in the 17th century. Today, paraboloid reflectors can be commonly observed throughout much of the world in microwave and satellite-dish receiving and transmitting antennas. In parabolic microphones, a parabolic reflector that reflects sound, but not necessarily electromagnetic radiation, is used to focus sound onto a microphone, giving it highly directional performance. Paraboloids are also observed in the surface of a liquid confined to a container and rotated around the central axis. In this case, the centrifugal force causes the liquid to climb the walls of the container, forming a parabolic surface. This is the principle behind the liquid mirror telescope. Aircraft used to create a weightless state for purposes of experimentation, such as NASA's "Vomit Comet," follow a vertically parabolic trajectory for brief periods in order to trace the course of an object in free fall, which produces the same effect as zero gravity for most purposes. In the United States, vertical curves in roads are usually parabolic by design. ### Gallery Click on any image to enlarge it. • A bouncing ball captured with a stroboscopic flash at 25 images per second. Note that the ball becomes significantly non-spherical after each bounce, especially after the first. That, along with spin and air resistance, causes the curve swept out to deviate slightly from the expected perfect parabola. • Parabolic trajectories of water in a fountain. • The path (in red) of Comet Kohoutek as it passed through the inner solar system, showing its nearly parabolic shape. The blue orbit is the Earth's • Hercilio Luz Bridge, Florianópolis, Brazil. The supporting cables of suspension bridges follow a curve which is intermediate between a parabola and a catenary.[12] • The Rainbow Bridge across the Niagara River, connecting Canada (left) to the United States (right). The parabolic arch is in compression, and carries the weight of the road. • Parabolic arches used in architecture • Parabolic shape formed by a liquid surface under rotation. Two liquids of different densities completely fill a narrow space between two sheets of transparent plastic. 
The gap between the sheets is closed at the bottom, sides and top. The whole assembly is rotating around a vertical axis passing through the centre. (See Rotating furnace) • Parabolic microphone with optically transparent plastic reflector, used to overhear referee conversations at an American college football game. • Array of parabolic troughs to collect solar energy • Edison's searchlight, mounted on a cart. The light had a parabolic reflector. • Physicist Stephen Hawking in an aircraft flying a parabolic trajectory to produce zero-gravity ## Generalizations In algebraic geometry, the parabola is generalized by the rational normal curves, which have coordinates $(x,x^2,x^3,\dots,x^n);$ the standard parabola is the case $n=2,$ and the case $n=3$ is known as the twisted cubic. A further generalization is given by the Veronese variety, when there is more than one input variable. In the theory of quadratic forms, the parabola is the graph of the quadratic form $x^2$ (or other scalings), while the elliptic paraboloid is the graph of the positive-definite quadratic form $x^2+y^2$ (or scalings) and the hyperbolic paraboloid is the graph of the indefinite quadratic form $x^2-y^2.$ Generalizations to more variables yield further such objects. The curves $y=x^p$ for other values of p are traditionally referred to as the higher parabolas, and were originally treated implicitly, in the form $x^p=ky^q$ for p and q both positive integers, in which form they are seen to be algebraic curves. These correspond to the explicit formula $y=x^{p/q}$ for a positive fractional power of x. Negative fractional powers correspond to the implicit equation $x^py^q=k,$ and are traditionally referred to as higher hyperbolas. Analytically, x can also be raised to an irrational power (for positive values of x); the analytic properties are analogous to when x is raised to rational powers, but the resulting curve is no longer algebraic, and cannot be analyzed via algebraic geometry. ## Notes 1. The only way to draw a straight line on the surface of a circular cone is to make it pass through the apex of the cone, where it will intersect the cone's axis. The line and the axis must therefore be coplanar. 2. Wilson, Ray N. (2004). Reflecting Telescope Optics: Basic design theory and its historical development (2 ed.). Springer. p. 3. ISBN 3-540-40106-7. , Extract of page 3 3. Stargazer, p. 115. 4. Stargazer, pp. 123 and 132 5. Fitzpatrick, Richard (July 14, 2007), "Spherical Mirrors", Electromagnetism and Optics, lectures, University of Texas at Austin, Paraxial Optics, retrieved October 5, 2011. 6. Lawrence, J. Dennis, A Catalog of Special Plane Curves, Dover Publ., 1972. 7. In the diagram, the axis is not exactly vertical. This is the result of a technical problem that occurs when a 3-dimensional model is converted into a 2-dimensional image. Readers should imagine the cone rotated slightly clockwise, so the axis, AV, is vertical. 8. Downs, J. W., Practical Conic Sections, Dover Publ., 2003. 9. However, this parabolic shape, as Newton recognized, is only an approximation of the actual elliptical shape of the trajectory, and is obtained by assuming that the gravitational force is constant (not pointing toward the center of the earth) in the area of interest. Often, this difference is negligible, and leads to a simpler formula for tracking motion. 10. ^ a b Troyano, Leonardo Fernández (2003). Bridge engineering: a global perspective. Thomas Telford. p. 536. ISBN 0-7277-3215-3. , Chapter 8 page 536 11. Middleton, W. E. 
Knowles (December 1961). "Archimedes, Kircher, Buffon, and the Burning-Mirrors". Isis (Published by: The University of Chicago Press on behalf of The History of Science Society) 52 (4): 533–543. doi:10.1086/349498. JSTOR 228646. ## References • Lockwood, E. H. (1961): A Book of Curves, Cambridge University Press
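The formulas in the "Dimensions of parabolas" and "Length of an arc of a parabola" sections above lend themselves to a quick numerical sanity check. The following Python sketch is illustrative only; the sample parabola y = 2x^2 - 3x + 1, the sample points, and the tolerances are arbitrary choices, not taken from the article.

```python
import math

def parabola_features(a, b, c):
    """Vertex, focus, focal length and directrix of y = a*x^2 + b*x + c,
    using the formulas from the sections above."""
    D = b * b - 4 * a * c                       # discriminant
    vertex = (-b / (2 * a), -D / (4 * a))
    focus = (-b / (2 * a), (1 - D) / (4 * a))
    focal_length = 1 / (4 * a)
    directrix_y = -(1 + D) / (4 * a)
    return vertex, focus, focal_length, directrix_y

def arc_length_from_vertex(f, p):
    """Arc length from the vertex to the point whose perpendicular distance
    from the axis of symmetry is p, for a parabola of focal length f."""
    h = p / 2
    q = math.sqrt(f * f + h * h)
    return h * q / f + f * math.log((h + q) / f)

# Focus-directrix check on an arbitrary example, y = 2x^2 - 3x + 1.
a, b, c = 2.0, -3.0, 1.0
(vx, vy), (fx, fy), flen, dir_y = parabola_features(a, b, c)
for x in (-2.0, -0.5, 0.0, 0.75, 3.0):
    y = a * x * x + b * x + c
    assert abs(math.hypot(x - fx, y - fy) - abs(y - dir_y)) < 1e-9

# Arc-length check for y = x^2 (focal length f = 1/4): compare the closed form
# with a midpoint-rule integration of sqrt(1 + (dy/dx)^2) from x = 0 to x = p.
f, p, n = 0.25, 1.5, 20000
numeric = sum(math.sqrt(1.0 + (x / (2 * f)) ** 2) * (p / n)
              for x in ((i + 0.5) * p / n for i in range(n)))
assert abs(numeric - arc_length_from_vertex(f, p)) < 1e-6
print("vertex", (vx, vy), "focus", (fx, fy), "focal length", flen)
print("arc length check:", numeric, arc_length_from_vertex(f, p))
```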
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 130, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256243705749512, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/162841-shortest-chord.html
# Thread:

1. ## shortest chord

What normal to the curve y = x^2 forms the shortest chord?

This is what I did: I used the parametric form of the parabola, substituted it into the distance formula, and set d/dt = 0, but I'm getting a complicated (7th-degree) polynomial after doing that.

2. Originally Posted by ice_syncer
What normal to the curve y = x^2 forms the shortest chord?

This is what I did: I used the parametric form of the parabola, substituted it into the distance formula, and set d/dt = 0, but I'm getting a complicated (7th-degree) polynomial after doing that.

The answer is neater than that polynomial suggests. I tried it for the parabola $y^2=4ax$, where the general point has parametric form $(at^2,2at)$. The normal at this point has equation $y-2at = -t(x-at^2)$, and it meets the parabola again at the point with parameter $s = -t-\frac2t$. So if d is the length of the chord then $d^2 = (at^2-as^2)^2 + (2at-2as)^2$, which works out as $d^2 = \dfrac{16a^2(t^2+1)^3}{t^4}$. Differentiate that to find that the turning point (which must be a minimum) occurs when $t^6 - 3t^2 - 2 = 0$. You can factorise that as $t^6 - 3t^2 - 2 = (t^2+1)^2(t^2-2)$, so the only positive root is $t^2 = 2$. Therefore the minimum occurs when $t = \sqrt{2} \approx 1.414$. If you now substitute $t^2 = 2$ into the formula for $d^2$, you get $d^2 = 108a^2$, so the shortest chord has length $d = 6\sqrt{3}\,a$. For your parabola y = x^2, which is $y^2=4ax$ with the roles of x and y interchanged and a = 1/4, the shortest normal chord has length $\frac{3\sqrt{3}}{2}\approx 2.598$, cut off by the normal at $x = \pm\frac{1}{\sqrt{2}}$.

3. I was hoping for some answer that uses geometry and trigonometry to find the length, rather than using the distance formula.
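For what it is worth, the value above is easy to confirm numerically. This is a standalone Python sketch (not from the thread; the grid resolution is an arbitrary choice):

```python
import math

def normal_chord_length(u):
    """Length of the chord that the normal at (u, u^2) cuts off on y = x^2."""
    # The normal at (u, u^2) has slope -1/(2u) and meets the parabola again
    # at x = -u - 1/(2u)  (the same computation as s = -t - 2/t above).
    v = -u - 1.0 / (2.0 * u)
    return math.hypot(v - u, v * v - u * u)

# Crude grid search over the point of normalcy.
best_u, best_len = min(((u, normal_chord_length(u))
                        for u in (0.001 * k for k in range(1, 5000))),
                       key=lambda pair: pair[1])

print(best_u, best_len)                          # about 0.707 and 2.598
print(1 / math.sqrt(2), 3 * math.sqrt(3) / 2)    # the values claimed above
```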
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385318756103516, "perplexity_flag": "head"}
http://mathoverflow.net/questions/20225/why-do-littlewood-paley-projections-behave-like-iid-random-variables
## Why do Littlewood-Paley projections behave like iid random variables ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I have read more than once that the Littlewood-Paley (LP) projections of a function (i.e. decomposing a function into parts with frequency localization in different octaves) behave in some sense like iid random variables. I am also aware of some facts (like inequalities for square functions vs. Khinchine Inequality) which "look similar". Is there any precise way of stating this similarity? And why do we have this similarity? Can we somehow interpret the LP projections as something like independent random variables? A related question concerns systems of functions of the form $$f_k(\cdot):=f(n_k \cdot )\quad {k\geq 1} ,$$ with $(n_k)_{k\geq 1}$ a lacunary sequence. Also in this case (under suitable assumptions) the functions $f_k$ behave like iid random variables in the sense that they satisfy the Central Limit Theorem and the LIL. - ## 5 Answers It is much better to replace 'iid random variables' above by 'martingale differences.' The usual Littlewood-Paley square function is closely related to the Haar square function. And the Haar square function is exactly a martingale square function, namely a sum of squares of martingale differences. One can pass back and forth, from martingale to continuous analogs. A striking method to do this was found by Stefanie Petermichl, when she found a simple way to obtain the Hilbert transform from a modification of a martingale multiplier. - 1 Thank you for your answer! I can see why LP projections can be considered as martingale differences. Could you please point me to some literature which treats these connections? Still, the question remains why all this is the case; what is the precise connection between a system of functions oscillating at very different speeds and martingale differences? – Philipp Apr 10 2010 at 9:26 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. If one replaces the real line with the Walsh ring $F_2t$ (or equivalently, replaces the Fourier transform by the Fourier-Walsh transform), then Littlewood-Paley projections become precisely the same thing as martingale differences. See for instance the lecture notes of Pererya and Ward at http://www.math.unm.edu/~crisp/papers/princeton.pdf or my own lecture notes at http://www.math.ucla.edu/~tao/254a.1.01w/notes5.ps Very roughly speaking, the difference between the two is the difference between a sine wave and a square wave - and the latter, when viewed in binary, depicts the fluctuation of a random bit. In contrast, a sine wave of frequency comparable to $2^k$ (and more generally, a Littlewood-Paley projection to that range of frequencies) depends primarily, but not exclusively, of the $k^{th}$ bit in the binary expansion of the domain variable - and one can view the binary bits of the domain variable as independent random variables. - I am not completely sure of the connection. Eli Stein mentioned it briefly in his class one time, which is why I know of a reference, but I cannot expand on the answer. A good starting point is to look at Rademacher Functions. There is a nice way to prove the Littlewood-Paley Square Function estimate using Rademacher functions (see, e.g. Stein, Singular Integrals and Differentiability Properties of Functions, section 5.2). 
The Rademacher functions, on the other hand, finds use in probability theory, see, e.g. http://www.statslab.cam.ac.uk/~james/Lectures/pm6.pdf ). This is where, however, my really limited knowledge on this subject ends. Edit: You may also be interested in Stein's Topics in Harmonic Analysis related to Littlewood-Paley Theory. The theory is based upon diffusive semigroups, but has some connections (IIRC) in the spirit of Doob to martingales and ergodic theorems. I don't have my copy handy at the moment, though. - Thanks for the answer! I am aware of this; the Rademacher functions are Haar Wavelets (=connection to LP theory) and also the simplest example of a Martingale (=connection to probability). I am certainly interested in specific examples which illustrate this connection but mainly my question concerns a general view which illuminates the similarities between LP projections and iid random variables. – Philipp Apr 3 2010 at 12:45 There is a quantitative way to express the somewhat vague notion of "almost independence of the Littlewood-Paley projections". Let $\mathcal F_n$, $n\in\mathbb Z$, be the minimal $\sigma$-algebra generated by the set $\mathcal D_n$ of dyadic cubes in $\mathbb R^d$ $$\mathcal D_n=\left\{\prod\limits_{k=1}^{d}[m_k2^{-n},(m_k+1)2^{-n})|\quad (m_1,\dots,m_d)\in\mathbb Z^d\right\}.$$ Then for any locally integrable function $f$ on $\mathbb R^d$, one may define the conditional expectation $E_n(f)$ with respect to the filtration of $\sigma$-algebras $\{\mathcal F_k|\ k\in\mathbb Z \}$: $$E_n(f)=\sum\limits_{Q\in \mathcal D_n}\chi_Q\ \frac{1}{|Q|}\int_Q f(x)dx.$$ It is not hard to check that the differences $D_n(f)=E_n(f)-E_{n-1}(f)$, $n\in\mathbb Z$, define a martingale. This means that the family of Haar functions has the martingale property (and they indeed can be viewed as iid random variables). Now, the Littlewood-Paley projections $\Delta_n$ (and partial sums of Fourier series, in general) cannot be interpreted directly as conditional expectations. However, they do behave almost like the family of Haar functions. Roughly speaking, the families of projections $\{\Delta_k\}_{k\in\mathbb Z}$ and $\{D_j\}_{j\in\mathbb Z}$ are almost biorthogonal. Theorem. There exists a constant $C$ such that for every $k$, $j\in\mathbb Z$ the following estimate on the operator norm of $D_k\Delta_j:\ L^2(\mathbb R^n)\to L^2(\mathbb R^n)$ is valid $$\|D_k\Delta_j\|=\|\Delta_jD_k\|\leq C2^{-|j-k|}.$$ This result is relatively recent and is due to Grafakos and Kalton (see Chapter 5 of the book by Grafakos). - Maybe I am missing the point but isn't it just some kind of orthogonality (or the essentially disjoint support in frequency space)? - 1 The orthogonality property and the disjoint suppport property are not related to the question I asked. – Philipp Apr 8 2010 at 5:55
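To make the "lacunary systems behave like iid random variables" statement in the question concrete, here is a small numerical illustration (a sketch only, assuming numpy is available; the choices f(x) = cos(2πx), n_k = 2^k, N = 20 and the sample size are arbitrary). The frequencies 2^k sit in distinct dyadic blocks, one per Littlewood-Paley octave, and the normalised sums come out approximately standard normal, as the central limit theorem for lacunary trigonometric series (Salem-Zygmund) predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points x uniformly in [0, 1]; x plays the role of the random variable.
x = rng.random(200_000)

# Lacunary frequencies n_k = 2^k, k = 1..N.
N = 20
S = np.zeros_like(x)
for k in range(1, N + 1):
    S += np.cos(2 * np.pi * (2.0 ** k) * x)

# Each term has mean 0 and variance 1/2, and distinct dyadic frequencies are
# orthogonal, so S / sqrt(N/2) should be approximately standard normal.
Z = S / np.sqrt(N / 2)

# Compare with the standard normal values 0, 1, 3 and 0.6827.
print("mean     ", Z.mean())
print("variance ", Z.var())
print("kurtosis ", ((Z - Z.mean()) ** 4).mean() / Z.var() ** 2)
print("P(|Z| < 1)", (np.abs(Z) < 1).mean())
```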
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907842755317688, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/73436/an-arrow-is-monic-in-the-category-of-g-sets-if-and-only-if-its-monic-the-categor/74053
# An arrow is monic in the category of G-Sets if and only if it is monic in the category of sets

Let $G$ be a group, regarded as a category with one object $*$ in which each arrow is invertible. Then the category of $G$-Sets is just the category of functors from $G$ to $\mathbf{Set}$. Now I've read that an arrow $\varphi: \tau \to \tau'$ (where $\tau$ and $\tau'$ are functors from $G$ to $\mathbf{Set}$) in the category of $G$-Sets is monic if and only if $\varphi_*$ is monic in $\mathbf{Set}$. I'm having trouble seeing why the only if part of this statement is true.

- I'd like to know the answer. The naive attempt would be, if $\phi_*(x)=\phi_*(y)$, to consider the singleton set (with its unique group action) and two maps out of it mapping to $x,y$. But those maps need not respect the group action; in fact they do precisely if $x$ and $y$ are fixed points. – wildildildlife Oct 17 '11 at 23:27

## 3 Answers

As wildildildlife pointed out, the naive approach is to consider morphisms $\alpha_0, \alpha_1 : \sigma\to\tau$, where $\sigma$ is the "trivial" object. The mistake is that a singleton set is a "trivial" set; we instead need a "trivial" action of $G$ (= $G$-set). It is the canonical action of $G$ on itself given by the curried 2-ary operation of $G$ (as in Cayley's theorem)! Let us denote by $|\rho|$ the carrier and by $\triangleleft_\rho$ the operation of any action $\rho$ of $G$. Let $\sigma$ be the canonical action of $G$; then $|\sigma|=|G|$ and $\forall g_0 g_1 (g_1\triangleleft_\sigma g_0 = g_1 +_G g_0)$. For every $x\in|\tau|$ (you defined $\tau$ in your question) define a function $\psi(g):=g\triangleleft_\tau x$. By definition of the action, $g_1\triangleleft_\tau (g_0\triangleleft_\tau x) = (g_1 + g_0)\triangleleft_\tau x$, so $g_1\triangleleft_\tau (\psi(g_0)) = \psi(g_1 + g_0) = \psi(g_1\triangleleft_\sigma g_0)$, and therefore $\psi:\sigma\to\tau$ is a homomorphism of actions. Moreover $\psi(0)=0\triangleleft_\tau x=x$, i.e. we can have a homomorphism which maps $0$ to any $x$ we want. Proceed as in the naive approach. $\sigma$ is a separator in the category of actions of $G$.

- Isn't that called a generator? – Mariano Suárez-Alvarez♦ Oct 19 '11 at 16:26
- 1 @Mariano Suárez-Alvarez: generator=separator. – beroal Oct 19 '11 at 16:28

The following answer proves a more general result that shows an application of the Yoneda lemma, so I hope you'll like it. Let $\mathcal F, \mathcal G \colon A \to \textbf{Set}$ be two functors; then given a natural transformation $\tau \colon \mathcal F \to \mathcal G$, we have that $\tau$ is monic if and only if $\tau_a$ is monic for each $a \in A$. Let's prove this. If for each $a \in A$ the component $\tau_a$ is monic, then given a functor $\mathcal E \colon A \to \textbf{Set}$ and two natural transformations $\sigma^1, \sigma^2 \colon \mathcal E \to \mathcal F$ such that $\tau \circ \sigma^1 = \tau \circ \sigma^2$, we have that for each $a \in A$ the equality $\tau_a \circ {\sigma^1}_a = \tau_a \circ {\sigma^2}_a$ holds, and because $\tau_a$ is monic it follows that $\sigma^1_a= \sigma^2_a$. Because this equation holds for each $a \in A$, we have $\sigma^1=\sigma^2$.
On the other hand, given a $\tau \colon \mathcal F \to \mathcal G$ which is monic, by the Yoneda lemma for each $a \in A$ there is a natural isomorphism $\varphi \colon \text{Nat}(A(a,-),\bullet) \stackrel{\sim}{\longrightarrow} \bullet (a)$: the functors involved are the $\hom$-functor $\text{Nat}(A(a,-),\bullet) \colon \textbf{Cat}(A,\textbf{Set}) \to \textbf{Set}$, the argument being identified by the $\bullet$, and the evaluation functor $\bullet (a) \colon \textbf{Cat}(A,\textbf{Set}) \to \textbf{Set}$, which sends every functor to its value on $a$. Via naturality of $\varphi$ the equation $$\tau_a \circ \varphi_\mathcal{F} = \varphi_\mathcal{G} \circ \text{Nat}(A(a,-),\tau)$$ must hold for each $a \in A$. But because $\varphi$ is an isomorphism, $\varphi_\mathcal{H}$ is an isomorphism in $\textbf{Set}$ for each functor $\mathcal H \colon A \to \textbf{Set}$, and so $\tau_a = \varphi_\mathcal{G} \circ \text{Nat}(A(a,-),\tau) \circ \varphi_\mathcal{F}^{-1}$. By the properties of $\hom$-functors and because $\tau$ is monic by hypothesis, $\text{Nat}(A(a,-),\tau)$ is also monic, and so $\tau_a$ is monic, being a composition of monics.

Edit: oops, I've just noted that I've forgotten to solve your question: it's a corollary to this theorem in the case when $A=G$ is a group seen as a category.

- Thanks to beroal for his answer. May I add the following perspective: To prove that in a concrete category monic implies injective, one can sometimes use an identification between 'elements' and 'arrows', because 'monic' basically means injective on arrows. • In $\sf{Set}$, $\sf{Top}$: $Hom(\{\star\},X)\cong X$ via $f\mapsto f(\star)$. • In $\sf{Gr}$ and $\sf{Ab}$: $Hom(\mathbb{Z},G)\cong G$ via $f\mapsto f(1)$. • In $\sf{Ring}$: $Hom(\mathbb{Z}[x],R)\cong R$ via $f\mapsto f(x)$. In other words, the forgetful functor is represented by $\{\star\},\mathbb{Z},\mathbb{Z}[x]$ (the free object on the singleton set) respectively. The same works in $G$-$\sf{Set}$ by taking $G$ as the 'Cayley' $G$-set: $Hom(G,X)\cong X$ via $f\mapsto f(e)$.

- And naturality of the isomorphism between a forgetful functor and a covariant hom functor implies that monic implies injective. Due to your answer I see now that my answer is not complete. BTW, does a separator fit in this framework? – beroal Oct 26 '11 at 0:23
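The "Cayley G-set as separator" argument above is easy to check by brute force for a small group. The following Python sketch is illustrative only (the group Z/3 and the test G-set, one free orbit plus a fixed point, are arbitrary choices): it enumerates all equivariant maps from the regular G-set to X and confirms Hom(G, X) ≅ X via f ↦ f(e).

```python
from itertools import product

# The cyclic group G = Z/3, written additively.
G = [0, 1, 2]
def g_mul(g, h):                 # group operation
    return (g + h) % 3

# The regular ("Cayley") G-set: G acting on itself by left translation.
def act_regular(g, h):
    return g_mul(g, h)

# A hypothetical test G-set X: one free orbit {a0, a1, a2} and a fixed point b.
X = ["a0", "a1", "a2", "b"]
def act_X(g, x):
    if x == "b":
        return "b"
    return "a%d" % ((int(x[1]) + g) % 3)

def equivariant(f):
    """f: dict from G to X; check f(g.h) = g.f(h) for all g, h."""
    return all(f[act_regular(g, h)] == act_X(g, f[h]) for g in G for h in G)

# Enumerate all set maps G -> X and keep the equivariant ones.
hom = []
for values in product(X, repeat=len(G)):
    f = dict(zip(G, values))
    if equivariant(f):
        hom.append(f)

# Hom(G_regular, X) is in bijection with X via f |-> f(e), where e = 0 here.
assert len(hom) == len(X)
assert sorted(f[0] for f in hom) == sorted(X)
print("Hom(G_regular, X) has", len(hom), "elements, one for each point of X")
```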
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 96, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9424271583557129, "perplexity_flag": "head"}
http://mathoverflow.net/questions/117307?sort=oldest
## Permuting Racked Pool Balls with a Single Break

Given reasonable physical assumptions (on friction, collisions, etc.), would it be possible to "break" in a pool game such that when all the balls come to rest, the only difference is that the racked balls have been permuted non-trivially? Example: More generally, would any non-trivial permutation be possible?

- 2 (For a question in a similar spirit [but otherwise not relevant to this one], see "$\exists$ a shot in ideal billiards?" mathoverflow.net/questions/44296 ) – Joseph O'Rourke Dec 27 at 18:46
- 2 Maybe it would be worth considering various 2 and 3 ball initial "racks" first. – Aaron Meyerowitz Dec 27 at 22:54

## 2 Answers

You wanted an answer with realistic physics, but let's start with the case where there is no friction, all collisions are perfectly elastic, and the cushions are extended to fill in the pockets. You then have something similar to dynamical billiards, which has been studied for a long time. (The difference would be that you have multiple balls. Since there is no friction, I don't think the dynamics of the balls, e.g., moments of inertia, come into play.) This is a Hamiltonian system, so by the Poincaré recurrence theorem there will be infinitely many times when it revisits the racking triangle in the initial configuration to within, say, a precision of 0.1 mm. It won't stay in that configuration, just pass through it. Poincaré recurrence doesn't guarantee that it will visit other configurations, such as the permuted ones, but I think it's plausible that all such permutations are equally probable, i.e., that the system's behavior is ergodic in this sense.

Can we arrange for the configuration to be exact? This seems unlikely to me. The flow through phase space is volume-preserving, so for some ensemble of initial conditions with a small phase-space volume $v$, the Poincaré recurrence time goes like $1/v$. If you can control the shot of the cue ball to within a certain precision, then $v$ is a measure of that precision. If infinitely good precision is required, then $v$ approaches zero, in which case the recurrence time approaches infinity. Of course this argument is only statistical, so it's possible that there is some trick that evades it, e.g., involving some hidden symmetry. If you could shoot the cue ball exactly parallel to the short axis of the table, it would never hit the racked balls, which only solves your problem for the trivial case of the identity permutation. If you could shoot exactly parallel to the long axis, the whole system would have a permanent reflection symmetry in its motion. This would halve the number of dimensions in the phase space, greatly shortening the recurrence time, but it wouldn't affect the no-go argument about $1/v$ given above.

If there's friction, then in addition to the statistical argument above, you have the problem that the flow isn't volume-conserving. A given initial ensemble's volume $v$ in phase space will shrink and eventually become zero when the balls come to rest. The flow only "paints" some tiny fraction of the whole phase space, which is unlikely to include the desired final configuration. Putting the pockets back in makes things even worse. Thermodynamically, you're asking for a gas to cool down and condense into a crystal.
But since there is no attractive interaction between the balls, I think their melting point equals absolute zero, which the third law of thermodynamics prevents you from reaching. If you did introduce some small attraction, and set up the detailed behavior of friction such that balls didn't just end up sticking at one point on the felt, then presumably they would condense into a triangular lattice in the end. With some finite probability the shape of the whole crystal would be a triangle. If you were able to shoot the cue ball with extremely high precision along the long axis of the table, then symmetry would guarantee that the triangle would be centered on the axis and aligned with it. With 50% probability the triangle would point the right direction. By adjusting the speed of the initial shot, you could probably get the triangle to position itself in the right place as well. The trouble is that the angle of the initial shot has to be controlled to within a precision of $10^{-n}$ degrees, where $n$ is large, because the system has a Lyapunov exponent. I don't think this question can be addressed by brute-force simulations, because the Lyapunov behavior means that the number of initial conditions you'd have to test would be $10^n$, where $n$ is large. If the equations of motion have a closed-form solution for a smaller number of balls, that might give some insight into whether there is some trick or hidden symmetry that evades the $1/v$ argument. But the usual rule of thumb in physics is that two-body problems are easy, but three-body problems are impossible. - 4 Poincare recurrence doesn't guarantee that you ever come close to a permuted triangle of balls. This will happen for almost all initial conditions if the system is ergodic. I don't believe that is known for a system of more than two balls. – Robert Israel Dec 27 at 16:26 @Robert Israel: Good point. I'll modify my answer. – Ben Crowell Dec 27 at 18:42 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The question is not well posed because the initial position has triple contacts. These form singularities in the phase space of the pool table system: Imagine you have two balls touching and a third one approaches. Say it hits the left ball and almost immediately after it hits the right one. The directions and velocities of the three balls after the collisions are different, so it is not clear what is the behavior in the situation that the approaching ball hits the other two simultaneously. With this example in mind, say you have the initial configuration of the picture, with the rack balls all touching and the cue ball in the center. What are the directions of the balls after breaking? Not well defined! - 2 I think there are two ways to construe the question: (1) Given exact initial conditions, can the desired final conditions be achieved exactly? (2) If we want to achieve the desired final condition to within precision $\epsilon$, do there exist $\delta$ and initial conditions such that if we set the initial conditions with precision $\delta$, we get what we want? Your objection applies to 1 but not 2. Straus' illumination problem en.wikipedia.org/wiki/Illumination_problem is an example where the answer depends on this type of distinction. – Ben Crowell Dec 28 at 15:12 @Ben: Agree... Also, thanks for the link. 
I remember reading about the 1995 solution, but lost track and didn't know there were improvements. – Rodrigo A. Pérez Dec 28 at 18:22
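To put a rough figure on the sensitivity discussed in the first answer, here is a crude estimate; the amplification factor $\lambda$ below is an assumed model parameter rather than something computed from an actual table. If each ball-ball collision magnifies an angular error in a trajectory by a factor $\lambda>1$ (for hard-sphere scattering $\lambda$ is on the order of the distance travelled between collisions divided by the ball radius, so $\lambda\approx 10$ is a plausible guess), then after $k$ collisions an initial error $\delta\theta_0$ has grown to roughly $$\delta\theta_k\approx\lambda^{k}\,\delta\theta_0,$$ so reaching a prescribed final configuration to angular tolerance $\varepsilon$ requires $\delta\theta_0\lesssim\varepsilon\,\lambda^{-k}$. With $\lambda\approx 10$ and a few dozen collisions during a break this is already the "$10^{-n}$ degrees with $n$ large" of the first answer; it is simply the Lyapunov-exponent remark made quantitative.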
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481193423271179, "perplexity_flag": "head"}
http://mathoverflow.net/questions/67897/invariants-and-orbits-of-n-tensors/67904
## Invariants and orbits of $n$-tensors ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) My question may be absolutely elementary and is probably answered in 19th century. A reference or a short clear argument would be highly appreciated. Let $V_1, \ldots V_n$ be finite dimensional vector spaces over the same field (may assume complex numbers). What are $GL(V_1)\times \ldots \times GL(V_n)$-orbits on $V_1 \otimes \ldots \otimes V_n$? The only invariant of an orbit I can see is "a multirank" $(k_1, \ldots k_n)$ where $k_i$ is the dimension of support of an element in $V_i$. The multirank satisfies inequalities $k_i \leq \prod_{j\neq i} k_j$. Would it be too naive to suggest that orbits are in 1-1 correspondence with legal multiranks? - $n\leq 2$ has elementary answers and I am mostly interested in $n=3$ and $n=4$... – Bugs Bunny Jun 15 2011 at 21:23 One place to look is in some recent work of J. M. Landsburg, Z. Teitler, et al. As for the last question, I'm afraid it is indeed too naive, since already for 2-dimensional vector spaces, with n=5, the dimension of the tensor product is larger than the dimension of the group (so there will be infinitely many orbits). – Dave Anderson Jun 16 2011 at 1:01 I can't resist mentioning that when $n=0$ there are lots of orbits. (A trivial group is acting on a $1$-dimension vector space.) – Tom Goodwillie Jun 16 2011 at 4:08 More seriously, when $n=3$ and all $V_i$ are $2$-dimensional there is a homogeneous degree $4$ polynomial function $V_1\otimes V_2\otimes V_3\to \mathbb C$ that scales by square of determinant when any one of the $GL(V_i)$ acts. The set where it does not vanish is an orbit with multirank $(2,2,2)$ but there is also another orbit corresponding to multirank $(2,2,2)$. – Tom Goodwillie Jun 16 2011 at 4:28 ## 2 Answers Here is a start, suppose that $V_i$ is $\mathbb C^{k_i}$ (and restricting to $k_1,k_2,\dots,k_n, n\geq 2$). The tuples $(k_1,k_2,\dots,k_n)$ for which the action of $GL_{k_1}\times\cdots\times GL_{k_n}$ on $\mathbb{C}^{k_1}\otimes \cdots\otimes \mathbb{C}^{k_n}$ has only finitely many orbits are $(k,l),(2,2,k),(2,3,k)$, for positive integers $k,l$. This was proven in V. G. Kac, "Some remarks on nilpotent orbits", J. Algebra 64 (1980), 190–213. These orbits are classified in "Orbits and their closures in the spaces $\mathbb{C}^{k_1}\otimes \cdots\otimes \mathbb{C}^{k_r}$" by P.G. Parfenov (MR). This paper doesn't seem to be online, but I believe you can find a summary in section 5 here. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I don't think that's right, Bugs. Say that $V_1$ and $V_2$ are $d$-dimensional and $V_3=\mathbb C^2$. Then an element of $V_1\otimes V_2$ is a linear map $V_1^\star\to V_2$, generically an isomorphism; an element of $V_1\otimes V_2\otimes V_3$ is an ordered pair $(A,B)$ of these; the unordered $d$-tuple of eigenvalues of $B\circ A^{-1}$ is an invariant of the $GL(V_1)\times GL(V_2)$-action; and an element of $GL(V_3)$ will just perform some fractional linear transformation on all of these numbers, so that if $d\ge 4$ then there is a complex invariant here. - I might mention that the problem of classifying ordered pairs of linear transformations over $\mathbb{C}$ up to the mentioned equivalence was solved by Weierstrass and Kronecker way back when. 
This seems to be a forgotten thread of basic linear algebra, but can still be found in Chapter XII of Gantmacher's 'Theory of Matrices' (a book I would greatly recommend to everyone). – Keerthi Madapusi Pera Jun 16 2011 at 3:30 4 @Keerthi: that's not quite forgotten! It is called nowadays the representation theory of quivers---the Russians, characteristically romantic in their choice of words, call this problems of linear algebra. The result of Weierstrass and Kronecker is nowadays presented as the classification of indecomposable modules for the Kronecker quiver, the simplest non-trivial case of the extraordinarily elaborate representation theory of hereditary finite dimensional algebras. – Mariano Suárez-Alvarez Jun 16 2011 at 3:54 Mariano--Thanks for correcting my ignorant perception of its status! I remember looking for references about it a long time ago, and not having any luck until I stumbled upon Gantmacher's text. Clearly, I didn't have the right formulation. – Keerthi Madapusi Pera Jun 16 2011 at 4:09
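To make the $2\times2\times2$ example from the comments concrete: the degree-$4$ invariant mentioned there is Cayley's hyperdeterminant. Writing a tensor in $V_1\otimes V_2\otimes V_3$ as $\sum a_{ijk}\,e_i\otimes e_j\otimes e_k$ with $i,j,k\in\{0,1\}$, it is given (with one common normalization, up to sign conventions) by $$\begin{aligned}\operatorname{Det}(a)={}&a_{000}^2a_{111}^2+a_{001}^2a_{110}^2+a_{010}^2a_{101}^2+a_{011}^2a_{100}^2\\&-2\left(a_{000}a_{001}a_{110}a_{111}+a_{000}a_{010}a_{101}a_{111}+a_{000}a_{011}a_{100}a_{111}+a_{001}a_{010}a_{101}a_{110}+a_{001}a_{011}a_{100}a_{110}+a_{010}a_{011}a_{100}a_{101}\right)\\&+4\left(a_{000}a_{011}a_{101}a_{110}+a_{001}a_{010}a_{100}a_{111}\right).\end{aligned}$$ Under $g\in GL(V_i)$ acting in any one factor it rescales by $\det(g)^2$, its non-vanishing locus is a single dense orbit (that of $e_0\otimes e_0\otimes e_0+e_1\otimes e_1\otimes e_1$), and its zero locus contains a second orbit of multirank $(2,2,2)$, namely that of $e_0\otimes e_0\otimes e_1+e_0\otimes e_1\otimes e_0+e_1\otimes e_0\otimes e_0$. This is exactly the failure of the naive "orbits are classified by multirank" guess in the question.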
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201782941818237, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/291348/exact-sequence-involving-the-nabla-operator/291505
# Exact sequence involving the nabla operator Recently I noticed that $$0 \longrightarrow \Bbb R \overset{\text{const.}}\longrightarrow \mathcal{C}^\infty(\Bbb R^3,\Bbb R) \overset{\text{grad}}\longrightarrow \mathcal{C}^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{rot}}\longrightarrow \mathcal{C}^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{div}}\longrightarrow \mathcal{C}^\infty(\Bbb R^3,\Bbb R)\longrightarrow 0$$ is an exact sequence of $\Bbb R$-algebras, where the second arrow is given by $\text{const}:c \mapsto f(\vec x)\equiv c$ and grad, rot ,div are the gradient, rotation and divergence operators. Is the existence of such an exact sequence a mere curiosity or does it have its origins from deep results in homological algebra. If so, are there generelizations to $\Bbb R^n$ with higher $n$ or even to other smooth manifolds? - 1 You can find de Rham theory in Warner, Introduction to Differentiable Manifolds and Lie Groups – Neal Jan 31 at 17:58 ## 3 Answers This is a special case of the deRham complex on $\Bbb R^3$. Let $M$ be a smooth manifold. Then we get the cotangent bundle $T^\ast M$ of $M$ by letting the cotangent space $T_x^\ast M$ at $x \in M$ be the dual vector space to the tangent space $T_x M$. Recall that given a vector space $V$, we can form the exterior algebra $\Lambda^\ast V$, and we let $\Lambda^p V$ denote the degree $p$ part of $\Lambda^\ast V$ (so if $\{e_1, \dots, e_n\}$ is a basis for $V$, $\Lambda^p V$ is generated by products of the form $e_{i_1} \wedge \cdots \wedge e_{i_p}$). Now we can form the bundle $\Lambda^\ast T^\ast M$ by taking the exterior algebra $\Lambda^\ast T^\ast_x M$ of each cotangent space, and similarly we get bundles $\Lambda^p T^\ast M$. Finally, we define the space of differential $p$-forms on $M$ by $$\Omega^p(M) = C^\infty(\Lambda^p T^\ast M),$$ i.e. the space of smooth sections of $\Lambda^p T^\ast M$. What this means is the following. We can consider $\Lambda^p T^\ast M$ as $$\Lambda^p T^\ast M = \coprod_{p \in M} \Lambda^p T_x^\ast M,$$ topologized appropriately. Hence we have a natural projection map $\pi: \Lambda^p T^\ast M \longrightarrow M$ which is given by $\pi(x, v) = x$. Then $$\Omega^p(M) = \{\alpha: M \longrightarrow \Lambda^p T^\ast M \mid \pi \circ \alpha = \mathrm{Id}_M\}.$$ Note that $\Omega^0(M)$ is just the space of smooth real-valued functions on $M$. We can define the exterior derivative $df$ of $f \in \Omega^0(M)$ by defining $df$ to be the differential of $f$, i.e. $$df(X) = Xf$$ for any vector field $X$ on $M$. If we impose the Leibniz rule $$d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^{\deg(\alpha)} \alpha \wedge d\beta,$$ then the exterior derivative extends uniquely to a map $$d: \Omega^\ast(M) \longrightarrow \Omega^{\ast + 1}(M).$$ Now one can show that $d^2 = 0$, so that $$0 \to \Omega^0(M) \xrightarrow{~d~} \Omega^1(M) \xrightarrow{~d~} \Omega^2(M) \xrightarrow{~d~} \cdots$$ is a cochain complex. The cohomology $$H^\ast_{dR}(M) = H^\ast(\Omega^\ast(M), d)$$ of this complex is called the deRham cohomology of $M$. DeRham's theorem states that deRham cohomology is isomorphic to singular cohomology: $$H^\ast_{dR}(M) \cong H^\ast_{\text{sing}}(M; \Bbb R).$$ Now let's see why your example is a special case of the deRham complex. When $M = \Bbb R^3$, we have $$\Omega^0(\Bbb R^3) \cong \Omega^3(\Bbb R^3) \cong C^\infty(\Bbb R^3, \Bbb R), \quad \Omega^1(\Bbb R^3) \cong \Omega^2(\Bbb R^3) \cong C^\infty(\Bbb R^3, \Bbb R^3).$$ All other spaces of $p$-forms on $\Bbb R^3$ are trivial. 
Now for $f \in \Omega^0(\Bbb R^3)$, $$df = \sum_{i = 1}^3 \frac{\partial f}{\partial x_i} dx_i ~\leftrightarrow~ \operatorname{grad}(f) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \frac{\partial f}{\partial x_3} \right).$$ For $$\alpha = \sum_{i = 1}^3 \alpha_{i}(x_1, x_2, x_3) ~dx_i \in \Omega^1(\Bbb R^3) ~\leftrightarrow~ v = (\alpha_1, \alpha_2, \alpha_3) \in C^\infty(\Bbb R^3, \Bbb R^3),$$ we have $$d\alpha = \sum_{i = 1}^3 \sum_{j = 1}^3 \frac{\partial \alpha_i}{\partial x_j} ~dx_i \wedge dx_j = \left(\frac{\partial \alpha_3}{\partial x_2} - \frac{\partial \alpha_2}{\partial x_3}\right) ~dx_2 \wedge dx_3 - \left(\frac{\partial \alpha_3}{\partial x_1} - \frac{\partial \alpha_1}{\partial x_3}\right) ~dx_1 \wedge dx_3 + \left(\frac{\partial \alpha_2}{\partial x_1} - \frac{\partial \alpha_1}{\partial x_2}\right) ~dx_1 \wedge dx_2 ~\leftrightarrow~ \operatorname{rot}(v).$$ Finally, for $$\beta = \sum_{i = 1}^3 \sum_{j = 1}^3 \beta_{ij}(x_1, x_2, x_3) ~dx_i \wedge dx_j \in \Omega^2(\Bbb R^3) ~\leftrightarrow~ w = (\beta_{23}, -\beta_{13}, \beta_{12}) \in C^\infty(\Bbb R^3, \Bbb R^3),$$ we have $$d\beta = \sum_{i,j,k = 1}^3 \frac{\partial \beta_{ij}}{\partial x_k} ~dx_i \wedge dx_j \wedge dx_k = \frac{\partial \beta_{23}}{\partial x_1}~dx_2 \wedge dx_3 \wedge dx_1 + \frac{\partial \beta_{13}}{\partial x_2}~dx_1 \wedge dx_3 \wedge dx_2 + \frac{\partial \beta_{12}}{\partial x_3}~dx_1 \wedge dx_2 \wedge dx_3 ~\leftrightarrow~ \operatorname{div}(w) = \frac{\partial \beta_{23}}{\partial x_1} - \frac{\partial \beta_{13}}{\partial x_2} + \frac{\partial \beta_{12}}{\partial x_3}.$$ Hence we see the correspondence between the exterior derivatives and the vector derivatives. Now deRham's theorem tells us that the cohomology of $$0 \longrightarrow C^\infty(\Bbb R^3,\Bbb R) \overset{\text{grad}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{rot}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{div}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R)\longrightarrow 0$$ is trivial except in degree zero. Hence we augment the cochain complex as you did to get an exact sequence: $$0 \longrightarrow \Bbb R \overset{\text{const.}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R) \overset{\text{grad}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{rot}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R^3) \overset{\text{div}}\longrightarrow C^\infty(\Bbb R^3,\Bbb R)\longrightarrow 0.$$ - Depending on your definition of curiosity, this is not a coincidence. It does in fact generalise into higher dimensions, even into manifolds. For more information I advise you to look up some theory on De Rham Cohomology. - Yes, you can replace the derivatives here with exterior derivatives. This is the proper generalization of divergence, gradient, and curl. Each real vector space $\mathbb R^n$ admits a geometric algebra on it called $\mathbb G^n$. This is a clifford algebra, and its member objects are called multivectors. These multivectors can be separated by "grades." Each grade forms its own subspace. They are as follows. In $\mathbb G^n$ there is/are... • 1 linearly independent scalar • $n$ linearly independent vectors • $n(n-1)/2 = \binom{n}{2}$ linearly independent bivectors • $\binom{n}{3}$ linearly independent trivectors • ... • $n$ linearly independent $(n-1)$-vectors, also called pseudovectors • 1 linearly independent $n$-vector, also called pseudoscalar There are $2^n$ linearly independent elements. 
You might see how this progression goes according to Pascal's triangle, and that in 3d, it goes 1-3-3-1. Typically, we interpret the $k$-vectors (for any integer $k$ such that $0 \leq k \leq n$) geometrically. A vector is an oriented line with a weight (magnitude). A bivector is an oriented plane with a magnitude. Trivectors are oriented volumes, and so on. As vector calculus allows us to talk about vector and scalar fields, we can talk about bivector and trivector fields, arbitrary $k$-vector fields, or even general multivector fields with elements of several grades! The vector derivative $\nabla$ can be taken to act on such fields. We say that $\nabla \wedge A$ is a differential operator that acts on a $k$-vector field $A$ and returns a $(k+1)$-vector field. So from the space of scalar fields, we can build up a space of vector fields. From vector fields, we can build up bivector fields, and so on. But wait, there's more! When you have a metric, you can also do this in the opposite direction! There is an "interior" derivative $\nabla \cdot A$ that acts on a $k$-vector field $A$ and returns a $(k-1)$-vector field. This is also called the coderivative and by some other names. The existence of this operator is why I consider it a mistake to implicitly identify gradient, divergence, and curl solely with the exterior derivative. The "gradient" of a pseudoscalar field is not an exterior derivative at all, so the claim that these three operators are just the exterior derivative in various guises is really incomplete. What this means, then, is that as long as there is a metric involved, you can make the chain run the other way around. Make a constant pseudoscalar field (as you made a constant scalar field) and run it backwards. When dealing with general manifolds, let's consider the embedded case first. Any embedded manifold has a pseudoscalar, which in an embedding will vary with position on the manifold. The behavior of the pseudoscalar (how it changes with position) actually characterizes most of the manifold's properties! But naturally, a $k$-vector field on the manifold cannot exceed the dimension of the pseudoscalar. If your pseudoscalar is a bivector--a plane--then obviously trivector fields (which correspond to volumes) cannot live in that tangent space. -
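As a quick sanity check on the dictionary in the first answer, the relation $d^2=0$ translates directly into the two classical vector-calculus identities, both of which reduce to the equality of mixed partial derivatives: $$\operatorname{rot}(\operatorname{grad}f)=\left(\frac{\partial^2 f}{\partial x_2\partial x_3}-\frac{\partial^2 f}{\partial x_3\partial x_2},\ \frac{\partial^2 f}{\partial x_3\partial x_1}-\frac{\partial^2 f}{\partial x_1\partial x_3},\ \frac{\partial^2 f}{\partial x_1\partial x_2}-\frac{\partial^2 f}{\partial x_2\partial x_1}\right)=0$$ and $$\operatorname{div}(\operatorname{rot}v)=\frac{\partial}{\partial x_1}\left(\frac{\partial v_3}{\partial x_2}-\frac{\partial v_2}{\partial x_3}\right)+\frac{\partial}{\partial x_2}\left(\frac{\partial v_1}{\partial x_3}-\frac{\partial v_3}{\partial x_1}\right)+\frac{\partial}{\partial x_3}\left(\frac{\partial v_2}{\partial x_1}-\frac{\partial v_1}{\partial x_2}\right)=0.$$ These are the degree-$0$ and degree-$1$ instances of $d^2=0$, which is why the sequence in the question is a complex at all; the nontrivial input is its exactness on $\Bbb R^3$, i.e. the vanishing of $H^p_{dR}(\Bbb R^3)$ for $p>0$ (the Poincaré lemma).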
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290100932121277, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2012/08/06/lie-algebras-revisited/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
# The Unapologetic Mathematician ## Lie Algebras Revisited Well it's been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I've already defined them over a year ago. So let's pick up with a recap: a Lie algebra is a module — usually a vector space over a field $\mathbb{F}$ — called $L$, and we give it a bilinear operation which we write as $[x,y]$. We often require such operations to be associative, but this time we impose the following two conditions: $\displaystyle\begin{aligned}{}[x,x]&=0\\ [x,[y,z]]+[y,[z,x]]+[z,[x,y]]&=0\end{aligned}$ Now, as long as we're not working in a field where $1+1=0$ — and usually we're not — we can use bilinearity to rewrite the first condition: $\displaystyle\begin{aligned}0&=[x+y,x+y]\\&=[x,x]+[x,y]+[y,x]+[y,y]\\&=0+[x,y]+[y,x]+0\\&=[x,y]+[y,x]\end{aligned}$ so $[y,x]=-[x,y]$. This antisymmetry always holds, but we can only go the other way if the characteristic of $\mathbb{F}$ is not $2$, as stated above. The second condition is called the "Jacobi identity", and antisymmetry allows us to rewrite it as: $\displaystyle[x,[y,z]]=[[x,y],z]+[y,[x,z]]$ That is, bilinearity says that we have a linear mapping $x\mapsto[x,\underline{\hphantom{X}}]$ that sends an element $x\in L$ to a linear endomorphism in $\mathrm{End}(L)$. And the Jacobi identity says that this actually lands in the subspace $\mathrm{Der}(L)$ of "derivations" — those which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare to the product rule: $\displaystyle\frac{d}{dt}\left(fg\right)=\frac{df}{dt}g+f\frac{dg}{dt}$ where $f$ takes the place of $y$, $g$ takes the place of $z$, and $\frac{d}{dt}$ takes the place of $x$. And the operations are changed around. But you should see the similarity. Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map $\phi:L\to L'$ that preserves the bracket: $\displaystyle\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]$ We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called "homomorphic images" in the literature, and we'll talk more about them later. We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting. And I'll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra $(A,\cdot)$ can be given a bracket defined by $\displaystyle [x,y]=x\cdot y-y\cdot x$ The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise. Posted by John Armstrong | Algebra, Lie Algebras ## 7 Comments » 1. [...] now that we've remembered what a Lie algebra is, let's mention the most important ones: linear Lie algebras. These are ones that arise from [...] Pingback by | August 7, 2012 | Reply 2. [...] examples of Lie algebras! Today, an important family of linear Lie [...] Pingback by | August 8, 2012 | Reply 3. Glad to see you back! And this is the exact topic I was hoping you'd cover next! Comment by Joe English | August 10, 2012 | Reply 4. [...]
first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra act by derivations on itself. We [...] Pingback by | August 10, 2012 | Reply 5. [...] we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. [...] Pingback by | August 13, 2012 | Reply 6. Great! I'm going to read all the nonassociative material as a memory-refreshing exercise. I point one little thing: subalgebras by your definition are not subspaces, but submodules! Comment by Jose Brox | September 1, 2012 | Reply • Yes, they're submodules in the more general situation of a Lie algebra over a ring, but we're just going to be looking at Lie algebras over fields — and usually characteristic zero and almost always algebraically closed, to boot. Comment by | September 2, 2012 | Reply
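For anyone who wants the exercise at the end of the post spelled out, here is the direct computation; nothing beyond expanding the commutators is needed. With $[x,y]=x\cdot y-y\cdot x$ in an associative algebra we get $\displaystyle\begin{aligned}{}[x,[y,z]]+[y,[z,x]]+[z,[x,y]]&=(xyz-xzy-yzx+zyx)\\&\quad+(yzx-yxz-zxy+xzy)\\&\quad+(zxy-zyx-xyz+yxz)=0,\end{aligned}$ since each of the twelve monomials appears once with each sign, while $[x,x]=x\cdot x-x\cdot x=0$ is immediate. So the commutator bracket really does satisfy both defining conditions.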
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364537000656128, "perplexity_flag": "middle"}
http://math.stackexchange.com/users/16573/ganzewoort?tab=activity&sort=comments
# ganzewoort reputation 5 bio website location age member for 1 year, 7 months seen Mar 4 at 15:51 profile views 11 bio visits 114 reputation website member for 1 year, 7 months 5 badges location seen Mar 4 at 15:51 # 33 Comments Dec13 comment Averaging the values of $\cos x$ over one periodIt should be included twice because it belongs to the interval $[0,2\pi]$ (not $[0,2\pi)$), doesn't it? Oct24 comment $f(t)f'(t)$ where $f$ is part of a GaussianCORRECTION: edit time expired so I couldn't remove the $\frac{1}{4.5}$'s and the $\frac{1}{10}$ typo. Oct24 comment $f(t)f'(t)$ where $f$ is part of a Gaussianthen, every time the signal is applied (periodically or aperiodically) we get \begin{equation} \frac{1}{T} \int\limits_{0.5}^{5} f(t)f'(t)dt = \frac{1}{4.5} \frac{1}{2} \int\limits_{0.5}^{5} d (f(t))^2 = \frac{1}{10} (f(t))^2|_{0.5}^{5} = \end{equation} \begin{equation} \frac{1}{4.5} \frac{1}{2}(-e^{-\frac{(0.5t-1)^2}{2}} -e^{-\frac{(t-1)^2}{2}})^2|_{0.5}^{5} = \frac{1}{4.5} (-1.28763) < 0 \end{equation} Oct24 comment $f(t)f'(t)$ where $f$ is part of a Gaussian@Gerry, is this better: Let the form of the signal which is applied periodically or aperiodically on the system be a part of a Gaussian, such as \begin{equation} f(t) = \begin{cases} \frac{1}{2}(-e^{-\frac{(0.5t-1)^2}{2}} -e^{-\frac{(t-1)^2}{2}})^2, & \mbox{for } t =\mbox{ $0.5<t<5$} \\ 0, & \mbox{for all other } t\mbox{ } \end{cases} \end{equation} Oct22 comment $f(t)f'(t)$ where $f$ is part of a GaussianHow else can you express the fact that there will be a non-zero burst of that particular form, lasting for $\Delta t = 4.5$, starting from $t = 0$? The first burst will be, as said, from $t=0,5$ to $t = 5$. The next non-zero burst will occur from $t= 10.5$ to $t = 15$, the third from $t = 20.5$ to $t = 25$ and so on. Oct20 comment Uniqueness of the Interpolating FunctionOK, I'm in chat but I don't see you there ... Oct20 comment Uniqueness of the Interpolating FunctionHow do yo make Lagrangian polynomial periodic (I'm thinking in terms of physical signals where trigonometric functions are the usual choice)? Oct20 comment Periodic Function with Discrete ValuesSince the times civilized discussions are established in the modern world -- a couple of centuries or so ago. Down-voting and disappearing is an easy and mean way out when one feels that he/she should win the discussion at any rate. Oct20 comment Uniqueness of the Interpolating FunctionThis is the main point of my asking. If the interpolating function indeed is $f(t) = 5sin(t) - 10$ then it does have first derivative which is $5cos(t)$ and can be evaluated at the said 5 points. Also, from your answer I take it that there is no polynomial which would recover exactly the 5 points. All one can achieve through a polynomial is an approximation, right? Oct20 comment Uniqueness of the Interpolating FunctionSo, then, what is $f'(t)$ of that function? Will it differ from $f'(t)$ when it is defined as $f(t) = 5sin(t_i) - 5$ where $i = 0, 1, 2, 3, 4$? Oct20 comment Periodic Function with Discrete ValuesInstead of down-voting you should propose another concrete function (other than $f(t) = 5sin(t) - 10$) which will recover the points in the above 5-point example. I may open a special separate question devoted to this uniqueness problem. Oct20 comment Periodic Function with Discrete ValuesAll right, can you propose another concrete function (other than $f(t) = 5sin(t) - 10$) which will recover the 5-point example? 
You didn't give any such function so far but only insisted that such function exists. Oct20 comment Periodic Function with Discrete ValuesNot at all. I have the fixed 5 values of the discrete continuous function, that's a given, and it happens that $5sin(t) - 10$ is the only function which recovers all of them. Also, I don't see how a polynomial with constraints will express a physical signal. Somehow, it is usual to use trigonometric functions for that purpose. Thus, if we limit ourselves to trigonometric functions, $f(t) = 5sin(t) - 10$ appears to be unique as an interpolation function in the above 5-point case. Oct20 comment Periodic Function with Discrete ValuesLike I said, this answer doesn't seem satisfactory because there is, in fact, a unique strong additional constraint on $f$, namely, the specific function $f(t) = 5sin(t) - 10$ which also accounts for the periodicity which a polynomial doesn't (the example with the 5-point discrete periodic function is had in mind). Oct20 comment Periodic Function with Discrete ValuesSomething has to be done so that one can carry out the discussions comfortably without limiting them to non-extended discussions, as well as avoiding the chat. What is the usual solution in such a case? Oct20 comment Periodic Function with Discrete ValuesAll I need to do is calculate the average value of $f(t)f'(t)$ for the discrete points (5 in this case) within the period. How is the result using a polynomial (accounting for periodicity too) going to differ, if at all, from the above result with the sine function? Oct20 comment Periodic Function with Discrete ValuesThat's true but suppose you have arranged the matters to have $f(t_0) = -10, f(t_1) = -5, f(t_2) = -10, f(t_3) = -15$ and $f(t_4) = -10$ and all the points in between to be zero (where $t_0$ and $t_4$ are the beginning and the end of the period), then $5sin(t) - 10$ will be the possible interpolation (can't think of any other function). The argument expressed in the question applies to this function as well. Oct20 comment Periodic Function with Discrete ValuesWell, $f(t) = asin(t) - 10$ is differentiable and it, as well as its first derivative, does have a value at $t = 0$. Oct20 comment Periodic Function with Discrete ValuesThe way I understand it is as follows: if there is one non-zero point of the function and its non-zero first derivative, it is enough to carry out the summation (which is in fact integration) of $f(t)f'(t)$ over the entire interval. All the other, infinite in number products within the period, are zero. In the case at hand, it is undeniable that $f(t)f'(t) < 0$. How is that fact accounted for if the integral over the period is taken to be zero? Oct20 comment $f(t)f'(t)$ where $f$ is part of a Gaussian$f(t)$ is, indeed, periodic. It is with a discrete-math tag because it resembles the earlier discussed discrete periodic functions. Notice, the integral over the entire $[0,T]$ period is zero and yet there is a section within $[0,T]$ where the integral isn't zero. Why should one ignore that fact when carryin out integration over the entire $[0,T]$?
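One remark that may help with the recurring point of dispute in these comments: for any continuously differentiable function $f$ of period $T$, the average of $f(t)f'(t)$ over a full period vanishes, simply because $ff'$ is an exact derivative: $$\frac{1}{T}\int_0^T f(t)f'(t)\,dt=\frac{1}{2T}\Bigl[f(t)^2\Bigr]_0^T=\frac{f(T)^2-f(0)^2}{2T}=0.$$ Over a sub-interval $[a,b]$ the same computation gives $\tfrac12\bigl(f(b)^2-f(a)^2\bigr)$, which can certainly be negative, but those contributions are cancelled by the rest of the period. (This assumes $f$ is differentiable on the whole period; for the discrete and piecewise signals discussed above, the interpolating function has to be pinned down before the statement even makes sense.)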
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.899118959903717, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/axioms?sort=faq&pagesize=30
# Tagged Questions For questions on axioms, mathematical statements that are accepted as being true without a mathematical proof. learn more… | top users | synonyms 1answer 519 views ### Proofs given in undergrad degree that need Continuum hypothesis? Or alternative you need to assume CH is false. I know several proofs that use axiom of choice. Heine Borel theorem is the best example I can think off. Zorns lemma is heavily used in the non ... 2answers 270 views ### what is the relationship between ZFC and first-order logic? In Wikipedia, it says that ZFC is a one-sorted theory in first-order logic. However, I was not really able to comprehend the later parts that seem to elaborate on that point. Can anyone explain the ... 4answers 4k views ### Can you explain the “Axiom of choice” in simple terms? As I'm sure many of you do, I read the XKCD webcomic regularly. The most recent one involves a joke about the Axiom of Choice, which I didn't get. I went to Wikipedia to see what the Axiom of ... 6answers 1k views ### What are natural numbers? What are the natural numbers? Is it a valid question at all? My understanding is that a set satisfying Peano axioms is called "the natural numbers" and from that one builds integers, rational ... 9answers 526 views ### Motivating implications of the axiom of choice? What are some motivating consequences of the axiom of choice (or its omission)? I know that weak forms of choice are sometimes required for interesting results like Banach-Tarski; what are some ... 2answers 265 views ### Axiom schema and the definition of natural numbers An axiom schema is used to generate the axioms, which inductively define the natrual numbers using the empty set and the successor function $S$. I don't understand why you have to define this set as ... 5answers 713 views ### How is a system of axioms different from a system of beliefs? Other ways to put it: Is there any faith required in the adoption of a system of axioms? How is a given system of axioms accepted or rejected if not based on blind faith? (PD: I'm not religious) 6answers 496 views ### When does the set enter set theory? I wonder about the foundations of set theory and my question can be stated in some related forms: If we base Zermelo–Fraenkel set theory on first order logic, does that mean first order logic is not ... 4answers 388 views ### Axiomatic approach to polynomials? I only know the "constructive" definition of $\mathbb K [x]$, via the space of finite sequences in $\mathbb K$. It essentially tells a polynomial is its coefficients. Is there a way to define ... 2answers 314 views ### Where is axiom of regularity actually used? Where is axiom of regularity actually used? Why is it important? Are there some proofs, which are substantially simpler thanks to this axiom? This question was to some extent provoked by Dan ... 5answers 2k views ### In what sense are math axioms true? Say I am explaining to a kid, $A +B$ is the same as $B+A$ for natural numbers. The kid asks: why? Well, it's an axiom. It's called commutativity (which is not even true for most groups). How do I ... 5answers 439 views ### What are the postulates that can be used to derive geometry? What are the various sets of postulates that can used to derive Euclidean geometry? It might be nice to have several different approaches together for comparison purposes and for ready reference. It ... 5answers 394 views ### Why hasn't GCH become a standard axiom of ZFC? I've never seen a text that includes GCH in the ZFC axioms. 
I presume this means that GCH has not achieved widespread acceptance. This seems surprising to me, given that: The cardinal numbers ... 3answers 174 views ### The existence of the empty set is an axiom of ZFC or not? I found in the Wolfram MathWorld page of the Axiom of the Empty Set that this is one of the Zermelo-Fraenkel Axioms, however on the page about these ZFC Axioms I read that it is an axiom that can be ... 1answer 110 views ### System with infinite number of axioms Assume we have a set of axioms $A_0$. There exists a statement that can be formulated with these axioms that cannot be proven to be true with this system. Assume we give such a statement axiomatic ... 3answers 148 views ### An elementary question regarding the uniqueness of a set, viewed with different cardinality Does the cardinality of sets, like for example the real numbers, depend on the fundamental axioms one is working with? If so, what does it mean to speak of such a set if it is not really one single ... 1answer 170 views ### Do the proofes in set theory rely on the semantics of the formulas used in the axioms? Motivation: The Axiom of separation $$\forall w_1,\ldots,w_n \, \forall A \, \exists B \, \forall x \, ( x \in B \Leftrightarrow [ x \in A \wedge \phi(x, w_1, \ldots, w_n, A) ] )$$ is used to ... 2answers 182 views ### How does (ZFC-Infinity+“There is no infinite set”) compare with PA? How does (ZFC-Infinity+"There is no infinite set") compare with (first order) PA? Intuitively, neither theory should be more powerful than the other. 3answers 241 views ### Why is the postulate $1$ not equal to $0$ not superfluous? [duplicate] Possible Duplicate: Explanation for why $1\neq 0$ is explicitly mentioned in Chapter 1 of Spivak's Calculus for properties of numbers. I am self-studying the wonderful book, Elementary ... 2answers 207 views ### What is the difference between an axiom and a postulate? I here about axioms is set theory and postulates in geometry, but they seem like the same thing. Do the mean the same thing but then are used in different instances or what? Is one word more ... 1answer 220 views ### Can it be shown that ZFC has statements which cannot be proven to be independent, but are? I am familiar with the concept that a statement can be proven indepent such as in the case of the continuum hypothesis where both ZFC+CH and ZFC+(CH is false) are both proven consistent, but I would ... 2answers 466 views ### A first order sentence such that the finite Spectrum of that sentence is the prime numbers The finite spectrum of a theory $T$ is the set of natural numbers such that there exists a model of that size. That is $Fs(T):= \{n \in \mathbb{N} | \exists \mathcal{M}\models T : |\mathcal{M}| =n\}$ ... 2answers 109 views ### Axioms for sets of numbers What is the most common axiomatic system used by modern mathematicans for the properties of the integers, rationals, reals, and complex numbers? Or does one commonly use a single axiomatic system ... 0answers 32 views ### Using definitions instead of axioms. Lets take (classical) first-order logic for granted, including an equality symbol and its associated axioms. Given all this, a rigorous work of mathematics will typically begin with a signature - ... 1answer 60 views ### What makes Tarski Grothendieck set theory non-empty? I'm fighting with Grothendieck set theory for some time now. This is the framework for the automated proof checking system of Mizar and hence there is a formalized version of the axioms here too, and ... 
1answer 119 views ### What should I be able to do with this chapter on Axiomatic Set Theory in order to check if I've learned it decently? [closed] I've just read a chapter on axiomatic set theory, from Comprehensive Mathematics for Computer Scientists 1. It comes with basic notation on sets and some axioms: Axiom 1 (Axiom of Empty Set) Axiom 2 ... 6answers 463 views ### Is it possible to have a field without an additive identity? If I drop the axiom that Zero is the identity of an addition what consequences does this entail? What do I need to change to my axiomatization? By definition it is not possible, but are there ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943061888217926, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21911/when-can-a-freely-moving-sphere-escape-from-a-cage-defined-by-a-set-of-impassib/22271
## When can a freely moving sphere escape from a ‘cage’ defined by a set of impassible coordinates? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) To ask this question in a (hopefully) more direct way: Please imagine that I take a freely moving ball in 3-space and create a 'cage' around it by defining a set of impassible coordinates, $S_c$ (i.e. points in 3-space that no part of the diffusing ball is allowed to overlap). These points reside within the volume, $V_{cage}$, of some larger sphere, where $V_{cage}$ >> $V_{ball}$. Provided the set of impassible coordinates, $S_c$, is there a computationally efficient and/or nice way to determine if the ball can ever escape the cage? Earlier version of question: In Pachinko one shoots a small metal ball into a forest of pins, then gravity then pulls it downwards so that it will either fall into a pocket (where you win a prize) or the sink at the bottom of the machine. The spacing and distribution of the pins will help to insure that one only wins certain prizes with low probability. Now imagine that we have a more general game where: (1) - The ball is simply diffusing in 3-space (like a molecule undergoing Brownian motion). I.e. there is no fixed downward trajectory due to gravity. (2) - You win a prize if the ball diffuses over a particular coordinate, just like one of the pockets in regular pachinko. (3) - We generalize he pins as a set of impassible coordinates. (4) - We define a 'sink' as an always accessible coordinate. (5) - We define a starting coordinate for the sphere. Given access the 3-space coordinates for (2), (3), (4), & (5), what's the most efficient way to find whether the game is 'winnable', or if the ball will fall into the 'sink' with a probability of unity? How can we find the minimum set from (3) that prevents the ball from reaching the pocket? - Do you have particular (typical) values for the relative sizes of your ball and the pockets/pins/sinks? Have you thought about the 2D version? – Yemon Choi Apr 21 2010 at 19:58 Dear Yemon, I'm interested in the general problem more than a version where the balls/pins are of a particular size (the size of the pocket and sink doesn't matter to me because I only want to know if the ball can ever reach the 'winning' pocket). However, I can say that the size of the ball should be smaller than the largest distance between any two pins or impassible coordinates. – Rob Grey Apr 21 2010 at 20:13 For the two-dimensional case, I'm attempting a solution by generating polygons to encapsulate the 'winning' pocket, whose vertices consist of the pins/impassible coordinates, and where the maximum side-length is less that the cross-sectional diameter of the ball. The idea would be to increase the number of possible sides of the polygon, and length of the neighborhood around the 'winning' pocket, over time during the search. – Rob Grey Apr 21 2010 at 20:19 I found "diffusing" to be a confusing word choice, and changed it to "moving". This seems like a problem which can be solved using Voronoi diagrams. I suggest asking it also on stackoverflow; not that it is inappropriate here, but there may be more computational geometry people on SO. – Reid Barton Apr 22 2010 at 22:05 ## 2 Answers Replace the pins by balls of radius $R_{ball}$ and the ball by a point. This is a logically equivalent formulation. 
The question, then, is: given a finite set of balls, $B_1$, $B_2$, $\ldots$, $B_k$ in $\mathbb{R}^n$, and a point $x$, how to determine whether $x$ is in the unbounded component of $\mathbb{R}^n \setminus \bigcup B_i$. I don't know the answer to this, but here is an easy way to compute the number of connected components of $\mathbb{R}^n \setminus \bigcup B_i$. In other words, I can determine whether there is some place from which a ball cannot escape. By Alexander duality, the number of bounded components of $\mathbb{R}^n \setminus \bigcup B_i$ is the dimension of $H_{n-1}(\bigcup B_i)$. Cover $\bigcup B_i$ by the $B_i$. Every intersection of finitely many $B_i$ is convex, hence contractible. So $\bigcup B_i$ is homotopy equivalent to the nerve of this cover. That is a simplicial complex, so it is easy to compute its homology. One final practical idea: I have used painting software where I could click on a point and it would color every point which was connected to that one. Maybe the algorithms used to make that software could solve this problem as well? - The ideas in this excellent answer are similar to those used by Baryshnikov et al in the talk slides I posted. – jc Apr 22 2010 at 22:39 What a fantastic answer! – Rob Grey Apr 22 2010 at 23:30 This is a beautiful answer. – Tom Church Apr 23 2010 at 0:38 I recently heard a beautiful talk by Yuliy Baryshnikov on the general question of when an object can be pinned by some set of fixed points. They consider arbitrary objects in 2D and prove the following theorem: Let D be a planar domain. Either one can pull a configuration C of two points $\{p_1,p_2\}$ around D, or there exists a full rotation of C entirely within D, that is a loop π′: $S^1$ → E (E being the Euclidean group of transformations) such that the vector $π′\circ p_1 − π′ \circ p_2$ turns around the origin (perhaps, several times). They use a topological approach which uses Mayer-Vietoris sequences in homology; apparently to generalize to 3D one must use Mayer-Vietoris spectral sequences, though this is "future work". The slides are here and do include some discussion of computing the possibility of caging / linking effectively, but again, they focus on the 2D problem. -
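A toy case shows the nerve computation from the first answer in action; the radii and centers here are just an illustrative choice. Take $n=2$, shrink the moving ball to a point as described above, and let $B_1,B_2,B_3$ be closed unit disks centered at the vertices of an equilateral triangle of side $1.9$. Each pair of disks overlaps (the centers are less than $2$ apart) but the triple intersection is empty (the circumcenter lies at distance $1.9/\sqrt3\approx 1.1>1$ from each center), so the nerve is the boundary of a triangle and $$H_1\Bigl(\bigcup B_i\Bigr)\cong H_1(\partial\Delta^2)\cong\mathbb{Z}.$$ Hence there is exactly one bounded complementary component, and a point sitting in the central hole is caged. Shrinking the side below $\sqrt3$ makes the triple intersection nonempty, the nerve becomes a full $2$-simplex, $H_1$ vanishes, and the cage disappears.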
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285498857498169, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/52318?sort=oldest
## Is there always, for a given prime $p$, a prime $\ell<p$ that is not a quadratic residue mod $p$? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The question is in the title, and I do not really have anything to add. Nevertheless I had to write something here in order to be able to ask the question. Thanks. - ## 5 Answers Of course. Take a quadratic nonresidue $1\leq n\leq p-1$, then some prime divisor $\ell$ of $n$ will be a quadratic nonresidue. See this MO question for what is known about number fields. - 1 Thanks. I feel slightly embarrassed.. – Tommaso Centeleghe Jan 17 2011 at 14:23 5 Come on. There is this story about Grothendieck. He lectured like "take a prime $p$". Then someone asked from the audience: can $p$ be an arbitrary prime? He responded, sure, like $57$. – GH Jan 17 2011 at 14:48 :-) . – Tommaso Centeleghe Jan 17 2011 at 15:10 5 That puts me in great company! When I was an undergraduate, I had a habit of factoring numbers that I saw as I walked around. When I passed room 57, I thought to myself "That's interesting, 57 is prime and divisible by 3!" – Jeff Strom Jan 18 2011 at 3:56 @Jeff: LOL. I am not sure Grothendieck went to such depths though. – GH Jan 18 2011 at 8:56 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. It is actually quite easy to prove that if $p>3$, then there are at least $2$ primes less than $p$ which are quadratic non-residues. Indeed, assume there were only one, say $q$. Then every $n$ between $1$ and $p-1$ which is not multiple of $q$ is a quadratic residue. Since you have at most $(p-1)/q$ multiples of $q$, and exactly $(p-1)/2$ quadratic residues, this implies $q=2$ and moreover $p=3$ (since otherwise you would get too many quadratic residues: every odd number between $1$ and $p-1$, together with $4$). - Nice . – Tommaso Centeleghe Jan 17 2011 at 16:41 2 I wonder what lower bound can we prove for the number of quadratic nonresidue primes $1\leq\ell\leq p-1$. For the number of quadratic residue primes $1\leq\ell\leq p-1$ I can prove $\gg\log p/\log\log p$ by an elementary argument involving quadratic reciprocity. – GH Jan 17 2011 at 22:37 I made my previous comment into an official MO question. I hope it survives. – GH Jan 18 2011 at 9:27 Well, it survived! – Andrei Moroianu Jan 21 2011 at 10:29 @Andrei: Thanks for your support! – GH Jan 21 2011 at 13:32 Slightly different in emphasis, the smallest quadratic nonresidue is in fact prime, as the product of residues is another residue. - I think the answer is obvious. Since $$\sum_{1\leq n\leq p-1}\left(\frac{n}{p}\right)=0$$, there must exist a positive integer $n\leq p-1$, such that $(\frac{n}{p})=-1$, or else the summation above must be equal to $p-1$. Of course, maybe $n$ is not a prime, however there always be a prime factor $\ell$ of $n$ such that $(\frac{\ell}{p})=-1$. - 3 This answer is the same as GH's, only using an unnecessarily complicated argument as to why there is a quadratic nonresidue $1\leq n\leq p-1$. – Zev Chonoles Jan 18 2011 at 2:53 Erdos conjectured that for any sufficiently large prime $p$ there is a primitive root `$q<p$` for $p$ which is prime. -
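For a small numerical illustration of the arguments above, take $p=23$. The quadratic residues modulo $23$ are $$\{1,2,3,4,6,8,9,12,13,16,18\},$$ so the smallest nonresidue is $5$, which is prime, as the observation about the smallest nonresidue predicts; and among the primes below $23$ the nonresidues are $5,7,11,17,19$, comfortably more than the two guaranteed by the counting argument for $p>3$.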
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426203370094299, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/45912/function-zeros-in-strip-0-re-1-closed
## Function zeros in strip 0 < Re < 1 [closed] ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hi everyone. Could you plz tell me where the zeros of $f(s)$ in the strip $\{0 < \Re s < 1 \}$ are ? Do they all have $\Re s= 1/2$ ? $$f(s) = 1 - 2^{-s} - 3^{-s} + 4^{-s} - 5^{-s} + ...$$ $$= \sum a(n)/n^s$$ with $a(n) = -1\$ if $n = -1 \mod 3\$ or $n = -1 \mod 4,\$ $a(n) = 1\$ otherwise. $$f(s) = \zeta(s) - 2 \big[ 3^{-s} \zeta(s,-1/3) + 4^{-s} \zeta(s,-1/4) - 12^{-s} \zeta(s,-1/12) \big]$$ Could you plz give me some zeros in the strip $\{0 < \Re z < 1 \}$? Thanks. Eta. - 10 Ouch, I thought at first that this reduced to RH but now I'm not so sure. It would also help if it were translated into English: my dictionary lacks "plz" for instance. – Robin Chapman Nov 13 2010 at 12:15 I don't think there will be any good description of the zeroes of this function. If you made $a(n)$ be $-1$ if $n$ is $-1$ modulo one of $3$ and $4$, but $1$ if $n$ is $-1$ mod both of them, then this would be some simple modification of the $L$-function for a Dirichlet character modulo $12$. But, as is, this is some linear combination of $L$ functions, so I don't see any way to understand its zeroes. – David Speyer Nov 13 2010 at 12:44 There should be some motivation. Why do you want information about this combination of Dirichlet L-functions? – Mark Sapir Nov 13 2010 at 13:03 4 I think the general point that Scott and I am making is that, if you just choose some random $a(n)$ and look at the zeroes of $\sum a(n)/n^s$, there is no reason to expect there to be any good control over the zeroes you get. The examples that work do so because they come from interesting cohomological or number theoretic constructions. Your function isn't even multiplicative, so it doesn't have an Euler product! (Note that $a(5)=-1$, $a(7)=-1$ and $a(35)=-1$.) I'm not one of the people voting to close, but I think that, without more motivation, you're not likely to get a better answer. – David Speyer Nov 13 2010 at 13:53 5 I should also add that phrasing such as "can you plz give me some zeros" is a bit off-putting to some of us. It makes it sound like you view us as your teachers or as your tech support – Yemon Choi Nov 13 2010 at 18:25 show 3 more comments ## 1 Answer The question for the case of a linear combination of Dirichlet L-series is actually easier than the case of a single L-function (Since RH is not known). In fact in each strip $1/2 \leq \sigma_1 <\Re(s)<\sigma_2 \leq 1$ there exists $\gg T$ zeroes for $-T < \Im(s) < T$. This follows by e.g. the Joint Voronin universality theorem for Dirichlet L-functions of Bagchi (A good reference for these results is Jörn Steuding's SLN 1877 "Value Distribution of L-functions"). Update Nov 14. I found the recent paper of Saias and Weingartner "Zeros of Dirichlet series with periodic coefficients", Acta Arithmetica 2009 where they get the same results that I indicated above, but also that there exists zeros to the right of the critical strip. Namely there exists some $\eta>0$ such that there are $\gg T$ zeros in any strip $1 \leq \sigma_0 <\Re(s) < \sigma_1 \leq 1+\eta$. This is actually simpler to prove since the Dirichlet series is absolutely convergent and the joint universality result is not needed, and more classical results of Bohr can be used instead. Regarding zeros on the left of the critical line. The same result should hold in that in any vertical strip there exists $\gg T$ zeros. 
While this is not done in Saias-Weingartner as far as I can see it follows from the functional equation and using joint universality for the L-series in $1/2<\Re(s)<1$. Now we have two different functional equations depending on whether the L-series is odd or even it differs slightly in the Gamma-factors (this is the reason why the argument in my first answer is not applicable. See below). However Stirlings formula should imply that they do not differ sufficiently for this argument not to hold. Further results we can get unconditionally is that there are about $T \log T$ with imaginary part less than $T$. It is not too difficult to prove that if we have a closed vertical strip that does not include the critical line, that the right order of magnitude actually is $T$, from which it would follow that for any open vertical strip including the critical line would have $T\log T$, i.e. the majority of the zeros, so the zeros should cluster around the critical line. Explicit results in this direction are included in the paper of Jörn Steuding "On Dirichlet series with periodic coefficients", Ramanujan Journal 2002 where he proves these results, i.e. clustering around $\Re(s)=1/2$, as well as other estimates (Another related paper is Garunkstis-Steuding "On the zero distribution of the Lerch zeta-function" where they prove corresponding results for the Lerch zeta function). However to prove that they lie exactly on the critical line I believe that they must satisfy the same functional equation (see below) so an analogue of the Hardy function can be found and worked with. Therefore it is not clear (I am not sure about this though) that there should be any zeros exactly on the critical line (at least in order for there to be zeros on any particular line there should be a reason for it, since the zeros are countable, but the reals in an intervals are uncountable). Numerical experiments are welcome (I am not doing them though.). Edit after comment of John below: I had originally thought that Bombieri and Hejhal's, and Hejhal's and Selberg's later results on linear combinations of L-functions would have applied on this problem, but as John pointed out below, this should not be the case, since the L-functions have to have the same Functional equation. Selberg's latest (unpublished) result would have yielded a positive proportion (order of $T \log T$ of zeros on the critical line), and Bombieri-Hejhal's (conditional on RH and weak Montgomery pair correlation conjecture) would have yielded the true asymptotics, if this would have been the case. I checked one of Hejhal's papers on this subject, and John is right in his comment below. The condition to apply this method is that the Gamma-factors in the functional equation and the modulus are the same. When we consider a linear combination of Dirichlet L-functions we have to have a combination of only odd or even Dirichlet characters and the same modulus. For the Hurwitz zeta-function of rational parameters all Dirichlet characters will appear and thus this example is not of this type. Thus I do not have an answer regarding zeros on the critical line. However the argument that shows that we have at least the order of $T$ zeros in any vertical strip $1/2 \leq \sigma_1<\Re(s)<\sigma_2 \leq 1+\eta$ with imaginary part less than $T$ still holds. Thus at least we know that Riemann hypothesis is not true for this function. ```` Johan ```` - 1 Well, so much for Scott and my pessimism. Nice answer! 
– David Speyer Nov 13 2010 at 15:43 3 @Johan: The function $f(s)$ in the question can be written as a linear combination of Dirichlet L-functions, but it cannot be written as a linear combination of Dirichlet L-functions which all have the same functional equation (i.e. all of the same modulus and are all either even or odd). I believe that if you look at the results of Hejhal and of Bombieri & Hejhal, you will see that they are always assuming that the L-functions in the linear combination satisfy the same functional equation. In particular, I do not believe that their results can be used to study the zeros of Hurwitz zeta-functions. – Micah Milinovich Nov 13 2010 at 16:44 1 (continued) I am currently without access to MathSciNet, but I believe Gonek has conjectured that $\zeta(s,a)$ has $\ll T$ zeros on the line Re $s=1/2$ if $a\neq 1/2$ is a rational number with $0<a<1$. – Micah Milinovich Nov 13 2010 at 16:47 John, Thanks for your comment. I think you are right. I have listened to seminars of Dennis Hejhal on the subject, but not really worked on the problem myself. The use of Voronin universality to prove that there are $\gg T$ zeros for strips to the right of the critical line still holds though, so that RH is not true. I will change my answer to reflect this. – Johan Andersson Nov 13 2010 at 16:51 7 eta: in some cultures it is considered impolite to get other people to do your work for you, especially if you do not consider that they may have to spend effort on it. – Yemon Choi Nov 14 2010 at 21:37
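For completeness, here is the inclusion-exclusion behind the Hurwitz zeta expression in the question, written with parameters in $(0,1)$; the $-1/3$, $-1/4$, $-1/12$ there are presumably shorthand for the residue classes $-1$ modulo $3$, $4$ and $12$. Since $a(n)=-1$ exactly when $n\equiv-1\pmod 3$ or $n\equiv-1\pmod 4$, we can write $a(n)=1-2\cdot\mathbf{1}_{A\cup B}(n)$ with $A=\{n\equiv 2\ (\mathrm{mod}\ 3)\}$ and $B=\{n\equiv 3\ (\mathrm{mod}\ 4)\}$; using $\mathbf{1}_{A\cup B}=\mathbf{1}_A+\mathbf{1}_B-\mathbf{1}_{A\cap B}$ together with $3^{-s}\zeta(s,\tfrac23)=\sum_{n\equiv 2\,(3)}n^{-s}$, $4^{-s}\zeta(s,\tfrac34)=\sum_{n\equiv 3\,(4)}n^{-s}$ and $12^{-s}\zeta(s,\tfrac{11}{12})=\sum_{n\equiv 11\,(12)}n^{-s}$, one gets $$f(s)=\zeta(s)-2\left[3^{-s}\zeta\left(s,\tfrac23\right)+4^{-s}\zeta\left(s,\tfrac34\right)-12^{-s}\zeta\left(s,\tfrac{11}{12}\right)\right].$$ In particular $f$ is a Dirichlet series whose coefficients are periodic modulo $12$, which is exactly the setting of the Saias-Weingartner paper cited in the answer.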
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443113803863525, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32397/vector-spaces-without-natural-bases/32430
## Vector spaces without natural bases Does anyone know any nice examples of vector spaces without a basis that is in some sense "natural"? To clarify what I mean, suppose we look at $\mathbb{R}^2$. We define $\mathbb{R}^2$ as pairs of real numbers. In some sense, what we are doing is expressing vectors in terms of a natural basis: (1,0) and (0,1). This is not what I want. An example that I thought of is a tangent space to a manifold. When one picks a tangent space to a manifold, there is no natural basis that one can pick. Are there other nice examples? - 2 Dear Sergeib, Some of the most common occurrences are when one forms $Hom$ spaces. E.g. if $V$ and $W$ are two reps. of a group $G$ over a field $k$, then $Hom_{k[G]}(V,W)$ is naturally a $k$-vector space, but (in general) has no natural basis. More generally, if one applies linear-algebra or multilinear-algebra type constructions (Homs, tensor products, etc.) to objects with a vector space structure, then one will obtain vector spaces that typically have no natural basis. – Emerton Jul 18 2010 at 20:44 2 Another example (related to my first) is the formation of subspaces: e.g. if $l:V \to k$ is a non-zero linear functional on a $k$-vector space $V$, then (even if $V$ has some specified basis) the kernel of $l$ typically has no preferred basis. – Emerton Jul 18 2010 at 20:45 11 I'm having real trouble seeing the point of this question. Do you think having a supply of vector spaces without obvious bases will illuminate some point of linear algebra for you? – Ben Webster♦ Jul 18 2010 at 20:47 3 This really should be community wiki. – Willie Wong Jul 18 2010 at 20:57 3 Another good way to motivate basis-free constructions is that there are many vector spaces with more than one natural basis. – Noah Snyder Jul 18 2010 at 21:38 show 3 more comments ## 14 Answers Most vector spaces I've met don't have a natural basis. However this is a question that comes up when teaching linear algebra. You want to motivate abstract vector spaces instead of working with $\mathbb{R}^n$ (or your favourite field in place of $\mathbb{R}$). One simple example is this. Consider $\mathbb{R}^n$ ($n>2$) as a Euclidean space relative to the "dot" product and let $v = (1,1,\dots,1)$. Then the subspace $V \subset \mathbb{R}^n$ of vectors orthogonal to $v$ does not have a natural basis. If you don't like introducing an inner product, then take $V$ to be the annihilator of $v$ in the dual of $\mathbb{R}^n$. This actually comes up when discussing the root space of $\mathfrak{su}(n)$, say. - 2 I have just taught a linear algebra course where the textbook (David Lay) offers an alternative route to those who don't want to teach abstract vector spaces: a lot of the same material (spans, linear independence, bases, coordinates) presented for subspaces of $\mathbb{R}^n.$ I found it a very convincing motivation for the more abstract approach and, not unexpectedly, the students, in turn, found it just as difficult. – Victor Protsak Jul 19 2010 at 7:32 I had exactly the same issue with exactly the same book this spring! – Daniel Larsson Jul 19 2010 at 9:18 The obvious example is $\mathbb{R}$, as a vector space over $\mathbb{Q}$; the existence of such a basis requires the axiom of choice.
- I think you need a little bit more than this: Defining said basis and proving that your definition actually works are different things. Hypothetically, you could define a very natural basis for some vector space V, and then only use AC at the very end to show V was actually all of R. – Michael Burge Jul 19 2010 at 3:33 The solution space of a homogeneous (ordinary or partial) linear differerential equation has no natural basis. - I guess one could argue that here it is natural to look for a basis of eigenvectors with respect to differentiation. – Qiaochu Yuan Jul 18 2010 at 22:51 1 @Yuan: Even then, if your eigenspaces aren't 1-dimensional you're sunk. – Ryan Budney Jul 19 2010 at 0:29 2 I never said constant coefficients. – Harald Hanche-Olsen Jul 19 2010 at 7:14 Another example is most function spaces defined over $\mathbb{R}$. The space of square integrable functions $L^2(\mathbb{R})$ doesn't have a natural basis. You would like one in the trigonometric functions $e^{2\pi i n x}$ in view of Plancherel's theorem and the Fourier transform, but they are not actually in $L^2$. (Compare the case on a torus, where the "natural" basis exists.) - For teaching purposes, the most simple example (which I use frequently in a first course in linear algebra) is a generic sub-vector space of $\mathbb{R}^n$. Any vector plane in the $3$-space that is not cardinal works. - To expand on Anon's answer, I'd like to discuss one way in which the lack of a "natural" basis has some utility. A Hamel basis is a basis for $\mathbb{R}$ over $\mathbb{Q}$. Hamel bases are quite useful, due to their interactions with Cauchy functions (real-valued functions that satisfy an "additive" functional equation $f(x+y) = f(x) + f(y)$. This functional equation is equivalent to being linear over $\mathbb{Q}$. Examples of the utility of Cauchy functions abound. One approach to proving that the cube and the tetrahedron are not equidecomposable (Hilbert's 3rd problem) is to pick the $\mathbb{Q}-$linearly independent set ${1, \pi}$ and, by the magic of AC, this extends to a Hamel basis. Setting up the right Cauchy function then resolves the problem. For more on this, see "Conjecture and Proof" by Miklós Laczkovich. - I suppose there's a natural way to give a type of global quantative answer to this question. A vector bundle is a family of vector spaces over a base space, $f : E \to B$. $f$ is a continuous function, $B$ is a topological space and $f^{-1}(b)$ is a vector space for all $b\in B$. Moreover it is a continuous family of vector spaces in the sense that vector addition $E \oplus E \to E$ and scalar multiplication $\mathbb R \times E \to E$ are continuous. If vector spaces typically had natural basis, vector bundles would typically be trivial. i.e. $E \simeq V \times B$ and under that homeomorphism, $f$ would be conjugate to projection $\pi : V \times B \to B$, $\pi(v,b) = b$, since choosing such a conjugation is equivalent to choosing (continuously) a basis for each vector space $f^{-1}(b)$. But this generally can't be done. The Moebius band being the first interesting counter-example. The non-triviality of the Moebius band from this perspective would be a reflection of the difficulty choosing a basis for 1-dimensional vector spaces. - 1 Nature could be evil and provide us with natural basis which do not depend continuously on the basis :) – Mariano Suárez-Alvarez Jul 19 2010 at 2:20 on the basis of your map $f$, that is. 
– Mariano Suárez-Alvarez Jul 19 2010 at 2:21 The vector space of polynomials (possibly of some fixed degree). This is a case where many students, I think, are tempted to privilege the basis ${ 1, x, x^2, ... }$, but to do so is to 1) privilege evaluation at $0$ over evaluation at other points, and 2) miss out on the utility of other bases like ${ 1, x, {x \choose 2}, ... }$. - 4 Well the same could be said about any other "natural basis". It always comes down to privileging one basis over all other basis. From the viewpoint of pure linear algebra, there are no natural basis at all (excluding the empty set as the only and hence natural basis of the zero vector space). All basis are equal if one doesn't specify a point of view that comes from outside linear algebra. To decide whether some item is "natural" always requires knowledge on what you want to do with it. If I only want to test equality of explicitly given polynoms the basis (x^k) could be called natural. – Johannes Hahn Jul 18 2010 at 23:44 Very good point. Perhaps I should have said "the vector space of regular functions on the affine line." Then the point I'm trying to make with comment 1) is that the affine line is homogeneous with respect to its automorphism group. – Qiaochu Yuan Jul 18 2010 at 23:57 Once again,better mathematical living through The Axiom of Choice. – Andrew L Jul 20 2010 at 9:42 Hilbert spaces don't generally have nice bases in the sense of linear algebra. Neither does the ring of formal power series $k[[X]]$ over a field $k$. (These have "bases" with "infinite linear combinations" that only make sense because of completeness.) - Let $K$ be a field, let $S$ be a set, and consider the $K$-vector space $\operatorname{Map}(S,K)$ of all functions from $S$ to $K$. When $S$ is finite, $\operatorname{Map}(S,K)$ has a natural basis: for each $x \in S$, let $\delta_x$ be the function which takes $1$ at $x$ and $0$ otherwise. However, when $S$ is infinite, these "Dirac" functions span only the set of finitely nonzero functions. In this case, the idea that there is no "natural basis" can probably be stated and proven in categorical language. (If you wish to do so as an addendum to this answer, please feel free!) Note that one may also look at this construction in terms of the distinction between direct products and direct sums. - The vector space $\mathbb C / \mathbb R$ does not have a preferred basis. Among the two bases ${1, i}$ and ${1, -i}$, there is no reason to prefer one over the other. The choice of one of these amounts to a choice of an orientation for the plane. - As a physicist, I would say the most obvious example is $n$-dimensional Euclidean space, with $n > 1$. Since a few people have mentioned casually that Euclidean spaces do have natural bases, I should explain myself... Informally, a Euclidean space is supposed to be an idealization of something like a giant sheet of paper with an origin marked in pencil, or interstellar space with an origin marked by a certain star. If you're in the habit of carrying around a tape measure, a space like this has a natural metric, and you can turn it into a vector space in the obvious way (using the metric to define scalar multiplication and the parallelogram rule to define addition). From this point of view, Euclidean space clearly has no natural basis, because if you're stranded on a giant sheet of paper, or floating in interstellar space, there's no natural set of "special" directions. Unfortunately, I don't know offhand how to formalize this argument. 
My guess is that you would start with Hilbert's axioms for Euclidean $n$-space, and choose an arbitrary point to be the origin. Hartshorne mentions in Geometry: Euclid and Beyond that in Hilbert's framework, the congruence classes of line segments naturally become the positive elements of an ordered field, which is of course isomorphic to $\mathbb{R}$. Choosing an arbitrary congruence class of line segments to be the "unit segments," you get a metric on your space. You can then turn the set of points into a vector space, using the metric to define scalar multiplication and the parallelogram rule to define addition (just like before, but now rigorously). It seems obvious to me that this vector space will have no natural basis. - Indeed, every Desarguesian projective geometry (i.e. a set with an incidence relation satisfying the axioms for a projective space together with Desargues' theorem) is generated by a vector space over a skew field. (This skew field is a field if Pappus' theorem holds.) Furthermore there is an isomorphism that turns any given basis of that vector space into another basis, so all bases are equally natural, provided that you do not distinguish isomorphic geometries. – Gabriel Ebner Aug 4 2010 at 1:06 @Gabriel Ebner: Cool! What kind of textbook would cover this stuff? How could I figure out which module generates, say, the real projective plane? – Vectornaut Aug 5 2010 at 16:58 @Vectornaut I learned it from Beutelspacher's Projective Geometry (German; English translation available). To actually construct the vector space you need to pick a hyperplane of points at infinity, and an origin. The scalings centered at the origin correspond to elements of the skew field; the translations form the underlying vector space of the affine space (the complement of the hyperplane). (E.g. you can define a translation as an automorphism that has all points at infinity as fixed points, and is invariant on all lines through a point at infinity (the direction of the translation).) – Gabriel Ebner Aug 6 2010 at 16:31 This example generalizes some of the others already mentioned: Take an infinite family of vector spaces $(V_i)_{i \in I}$. Now what about $\prod_{i \in I} V_i$: can you write down a basis? Also, it is easy to construct an infinite multilinear tensor product $\bigotimes_{i \in I} V_i$. However, writing down a basis is equivalent to finding a set of representatives of $\prod_{i \in I} V_i \setminus \{0\} / \sim$, where $\sim$ identifies families of elements which differ only at finitely many indices. And this cannot be done explicitly. - Cohomology with coefficients in $\mathbb{Q}$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 84, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272079467773438, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/42259/list
One way to think of the operation $\mathbb{J}_D^f$ is as follows: $M_n$ has the structure of a Hilbert space by the inner product $\langle X, Y \rangle = {\rm Tr}(X^* Y) = {\rm Tr}(Y X^*)$. Then the transformations $L_D R_D^{-1}$ and $R_D$ are positive operators on this Hilbert space. Since they commute, you may take the functional calculus $f(s) \otimes g(t) \mapsto f(L_D R_D^{-1})\, g(R_D)$ from the space of functions on ${\rm Sp}(L_D R_D^{-1}) \times {\rm Sp}(R_D)$ into the space of bounded operators on $M_n$ (since the spectrum ${\rm Sp}(L_D R_D^{-1})$ is a finite subset of $\mathbb{R}$, there is no need to worry about the regularity of $f$; ditto for $g$). Then $\mathbb{J}_D^f$ is simply the image of $f(s) \otimes t$. Since the functional calculus is an algebra homomorphism, the inverse of $\mathbb{J}_D^f$ is represented by $(1/f(s)) \otimes (1/t)$, which equals $\frac{1}{f}(L_D R_D^{-1})\, R_D^{-1}$. When $E \subset M_n$ is the joint eigenspace for $L_D R_D^{-1}$ and $R_D$ with corresponding eigenvalues $a b^{-1}$ and $b$ (i.e. $L_D$ has eigenvalue $a$ on this subspace), $\mathbb{J}_D^f$ acts by $f(a/b)\,b$ on $E$.

Any book containing the spectral theory of self-adjoint operators on Hilbert spaces will do, like Pedersen's Analysis Now (GTM 118).

About the computation of $\mathbb{J}_D$ for the case of $f(t) = \frac{t+1}{2}$, the integral formula follows from the identity $\int_0^\infty \exp(-t a / 2) \exp(-t b / 2)\, dt = \frac{2}{a+b} = \frac{2}{ab^{-1}+1} \frac{1}{b}$.

Edit: I made a stupid mistake in the first version, and this is the corrected version. Sorry for the change of notation from $\mathbb{L}_D$ to $L_D$, etc.; I somehow couldn't make it work.
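As a concrete check, here is a worked special case (assuming $D$ diagonal with positive entries, $D = \mathrm{diag}(d_1,\dots,d_n)$): on the matrix units $E_{ij}$ one has $L_D E_{ij} = d_i E_{ij}$ and $R_D E_{ij} = d_j E_{ij}$, so
$$ (L_D R_D^{-1})\, E_{ij} = \frac{d_i}{d_j}\, E_{ij}, \qquad \mathbb{J}_D^f\, E_{ij} = f\!\left(\frac{d_i}{d_j}\right) d_j\, E_{ij}, \qquad (\mathbb{J}_D^f)^{-1} E_{ij} = \frac{1}{f(d_i/d_j)}\,\frac{1}{d_j}\, E_{ij}. $$
In particular, for $f(t)=\frac{t+1}{2}$ this gives $\mathbb{J}_D^f E_{ij} = \frac{d_i+d_j}{2}\, E_{ij}$ and $(\mathbb{J}_D^f)^{-1} E_{ij} = \frac{2}{d_i+d_j}\, E_{ij}$, consistent with the integral identity above.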
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 91, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8518915176391602, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/34993/reversing-gravitational-decoherence/35067
# Reversing gravitational decoherence [Update: Thanks, everyone, for the wonderful replies! I learned something extremely interesting and relevant (namely, the basic way decoherence works in QFT), even though it wasn't what I thought I wanted to know when I asked the question. Partly inspired by wolfgang's answer below, I just asked a new question about Gambini et al.'s "Montevideo interpretation," which (if it worked as claimed) would provide a completely different sort of "gravitational decoherence."] This question is about very speculative technology, but it seems well-defined, and it's hard to imagine that physics.SE folks would have nothing of interest to say about it. For what follows, I'll assume that whatever the right quantum theory of gravity is, it's perfectly unitary, so that there's no problem at all creating superpositions over different configurations of the gravitational metric. I'll also assume that we live in de Sitter space. Suppose someone creates a superposition of the form (1) $\frac{\left|L\right\rangle+\left|R\right\rangle}{\sqrt{2}},$ where |L> represents a large mass on the left side of a box, and |R> represents that same mass on the right side of the box. And suppose this mass is large enough that the |L> and |R> states couple "detectably differently" to the gravitational field (but on the other hand, that all possible sources of decoherence other than gravity have been removed). Then by our assumptions, we ought to get gravity-induced decoherence. That is, the |L> state will get entangled with one "sphere of gravitational influence" spreading outwards from the box at the speed of light, and the |R> state will get entangled with a different such sphere, with the result being that someone who measures only the box will see just the mixed state (2) $\frac{\left|L\right\rangle\left\langle L\right|+\left|R\right\rangle\left\langle R\right|}{2}.$ My question is now the following: Is there any conceivable technology, consistent with known physics (and with our assumption of a dS space), that could reverse the decoherence and return the mixed state (2) to the pure state (1)? If so, how might it work? For example: if we'd had sufficient foresight, could we have surrounded the solar system with "gravity mirrors," which would reflect the outgoing spheres of gravitational influence back to the box from which they'd originated? Are exotic physical assumptions (like negative-energy matter) needed to make such mirrors work? The motivation, of course, is that if there's no such technology, then at least in dS space, we'd seem to have a phenomenon that we could justifiably call "true, in-principle irreversible decoherence," without having to postulate any Penrose-like "objective reduction" process, or indeed any new physics whatsoever. (And yes, I'm well aware that the AdS/CFT correspondence strongly suggests that this phenomenon, if it existed, would be specific to dS space and wouldn't work in AdS.) [Note: I was surprised that I couldn't find anyone asking this before, since whatever the answer, it must have occurred to lots of people! Vaguely-related questions: Is decoherence even possible in anti de Sitter space?, Do black holes play a role in quantum decoherence?] - 1 @MattReece: My question should also make sense in flat space. But I did need to rule out AdS, which (as I understand it) has a "reflecting boundary" pushing stuff back to the middle, which in turn is what allows AdS/CFT to exhibit equivalence to a unitary and finite-dimensional theory. 
– Scott Aaronson Aug 27 '12 at 13:35 1 @JimGraber: Then the obvious question becomes, when do the von Neumann projections happen and what causes them? The whole point of this question was to explore how far decoherence can really be used to sidestep the famous difficulties associated with the measurement problem. But for this question, there's really no need to get into the measurement problem itself. – Scott Aaronson Aug 27 '12 at 13:43 2 @MattReece, wrong - measurement is always nonunitary, as well as nondeterministic from the point of view of the observer. decoherence just gives you a classical probability distribution; that describes many experiments, but single experiments still give single eigenvalues. There is no way that decoherence or any unitary process will produce collapse – lurscher Aug 27 '12 at 22:28 1 Sigh. No. This is off-topic here, but it's really a pity that quantum mechanics tends to be taught in a way that obscures this point. – Matt Reece Aug 27 '12 at 23:15 1 – Matt Reece Aug 27 '12 at 23:41 show 11 more comments ## 8 Answers If we do an interference experiment with a (charged) particle coupled to the electromagnetic field or a massive particle coupled to the gravitational field, we can see interference if no information gets stored in the environment about which path the particle followed (or at least, if the states of the environment corresponding to the two paths through the interferometer have a large overlap --- if the overlap is not 1 the visibility of the interference fringes is reduced). The particle is "dressed" by its electromagnetic or gravitational field, but that is not necessarily enough to leave a permanent record behind. For an electron, if it emits no photon during the experiment, the electromagnetic field stays in the vacuum state, and records no "which-way" information. So two possible paths followed by the electron can interfere. But if a single photon gets emitted, and the state of the photon allows us to identify the path taken with high success probability, then there is no interference. What actually happens in an experiment with electrons is kind of interesting. Since photons are massless they are easy to excite if they have long wavelength and hence low energy. Whenever an electron gets accelerated many "soft" (i.e., long wavelength) photons get emitted. But if the acceleration is weak, the photons have such long wavelength that they provide little information concerning which path, and interference is possible. It is the same with gravitons. Except the probability of emitting a "hard" graviton (with short enough wavelength to distinguish the paths) is far, far smaller than for photons, and therefore gravitational decoherence is extremely weak. These soft photons (or gravitons) can be well described using classical electromagnetic (or gravitional) theory. This helps one to appreciate how the intuitive picture --- the motion of the electron through the interferometer should perturb the electric field at long range --- is reconciled with the survival of interference. Yes, it's true that the electric field is affected by the electron's (noninertial) motion, but the very long wavelength radiation detected far away looks essentially the same for either path followed by the electron; by detecting this radiation we can distinguish the paths only with very poor resolution, i.e. hardly at all. 
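The overlap criterion mentioned above (interference visibility set by the overlap of the two environment states) can be illustrated with a small toy calculation; the following is a minimal sketch, assuming a single "which-path" qubit and a two-dimensional environment record, not tied to any particular experiment:

```python
import numpy as np

# Toy model of the overlap criterion: a "which-path" qubit entangled with an
# environment record. The surviving coherence of the qubit is set by the
# overlap <e_L|e_R> of the two environment states. (Illustrative sketch only.)

def reduced_density_matrix(a, b, e_L, e_R):
    """Reduced state of the qubit in a|L>|e_L> + b|R>|e_R>, tracing out the environment."""
    e_L = np.asarray(e_L, dtype=float) / np.linalg.norm(e_L)
    e_R = np.asarray(e_R, dtype=float) / np.linalg.norm(e_R)
    psi = np.kron([a, 0.0], e_L) + np.kron([0.0, b], e_R)
    psi = psi / np.linalg.norm(psi)
    d_env = len(e_L)
    rho_full = np.outer(psi, psi).reshape(2, d_env, 2, d_env)
    return np.trace(rho_full, axis1=1, axis2=3)  # partial trace over the environment

a = b = 1 / np.sqrt(2)

# "Hard" record: orthogonal environment states -> off-diagonal terms vanish.
print(np.round(reduced_density_matrix(a, b, [1.0, 0.0], [0.0, 1.0]), 3))

# "Soft" record: nearly identical environment states -> coherence survives,
# reduced only by the overlap (1 - eps**2) / (1 + eps**2).
eps = 0.1
print(np.round(reduced_density_matrix(a, b, [1.0, eps], [1.0, -eps]), 3))
```

With orthogonal records the off-diagonal entries vanish; with nearly identical records the coherence survives almost intact, reduced only by the overlap factor.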
In practice, loss of visibility in decoherence experiments usually occurs due to more mundane processes that cause "which-way" information to be recorded (e.g. the electron gets scattered by a stray atom, dust grain, or photon). Decoherence due to entanglement of the particle with its field (i.e. the emission of photons or gravitons that are not very soft) is always present at some level, but typically it is a small effect. - pretty concise answer, +1 – lurscher Aug 27 '12 at 20:41 3 I'm sorry I was unclear. I meant to consider an interference experiment in which a particle can travel along either path 1 or path 2 between spacetime points A and B, where neither path is a geodesic. Then the particle emits radiation, but my point is that if the radiation is very soft we won't be able to learn whether the particle followed path 1 or path 2 by measuring the radiation field. Therefore, the two paths can interfere. – John Preskill Aug 27 '12 at 22:36 1 Thanks so much, John!! In case it's helpful to others, here's my personal doofus model for what you're saying: it would be like you had a qubit in state a|0>+b|1> which then became entangled with some other degree of freedom (in this case, a long-wavelength photon). But because of the photon's limited ability to resolve the 0 and 1 states, you just get "soft" decoherence, as would happen if you mapped the state a|0>+b|1> to a|0>(|0>+eps|1>)+b|1>(|0>+eps|2>) for some small eps>0. – Scott Aaronson Aug 28 '12 at 13:54 1 Not only does this clear up my confusion about how decoherence in QFT could work, but in retrospect, I see why this is really how it had to work -- the alternatives would either leak out the "which-path" information immediately or else never leak it out at all (without violating causality). Just one remaining sanity check: I suppose this weak coupling is the reason why Penrose has to postulate strange new physics for his gravitational collapse, rather than just saying that a massive object's which-path information should "leak irreversibly" into the gravitational field? – Scott Aaronson Aug 28 '12 at 14:03 2 Yes, I think that is correct. Penrose's proposal is very speculative. His estimate of the decoherence rate is far higher than standard gravitational theory would indicate. – John Preskill Aug 28 '12 at 16:02 show 5 more comments I'm probably straying into dangerous territory here, but let me venture an answer. Doing so is probably just asking to be shot down by John Preskill, or some other such expert, but let me stick my neck out. Despite Ron's comments, gravity and EM are different in this context, in the sense that you can't flip the sign of the gravitational interaction the way you can with EM. On a deeper level, they should behave in a similar way, though: the only way to get decoherence (without assuming additional baggage from some particular interpretation of QM) is to create a non-local state, such that the reduced density matrix for a local observation is mixed. This is essentially at the heart of things like the Unruh effect, where an accelerating observer observes a mixed state. The difficulty with talking about unitary operations is that this means taking a spacelike slice of the state of the universe, and this is going to introduce all sorts of observer effects. In particular the main problem is going to be horizons, since information will have leaked beyond the event horizon for some observers.
So for some observers there will be no unitary which reverses the decoherence, while for others there will be. This isn't that weird. Even in Minkowski space, when we lose a photon, we can never hope to catch it again (ignoring the slight slowing induced by the earth's atmosphere, and the even slighter effects in interplanetary and interstellar space). So there is no unitary we can ever perform which could reverse this. On the other hand, we can make a transformation of frames to that of an observer who perceives the process as unitary, and the same can be the case in more general space-times (although I am not convinced this is always true). For example the decoherence induced in the frame of a continuously accelerating observer disappears if the observer stops accelerating. - 2 @RonMaimon: You seem to think something different to what I am actually saying. My point was that in coupling to a field you can excite the field. This is pretty much what you say in your comment, but you phrase it in a weirdly adversarial way. My comments were aimed at pointing out that you can't simply assert that the field is static, but rather that you need to take account also of the effect the particle has on the field. – Joe Fitzsimons Aug 28 '12 at 7:31 2 @RonMaimon: No, I understand perfectly well that you can't have decoherence unless there's some difference in the field depending on which path the particle takes. That's just basic QM! My confusion arose from the fact that it seems perfectly obvious that there will be a difference in the field, depending on whether the particle is at location A or location B. (Otherwise, what does it even mean for this field to be "sourced" by the particle?) This is what John Preskill has addressed directly, through his comment about long-wavelength photons from A or B being hard to distinguish. – Scott Aaronson Aug 28 '12 at 13:42 1 @ScottAaronson: For decoherence, it is absolutely not enough for there to be a difference in the long range field. There is no decoherence just from a different field. There is just a superposed long-range field. The decoherence happens when some other particle swerves a different way in response to the field, or when the field emits quanta that point out the location of the emitter. John Preskill's answer is not saying what I just said, and it is just reinforcing your confusion (although I am sure he is not confused on this). – Ron Maimon Aug 28 '12 at 19:15 1 @user56771: yes, but when asymptotic scattering states are entangled, we usually call that collapse, since nobody is going to recohere the waves by reversing the collision. – Ron Maimon Aug 29 '12 at 3:15 1 @RonMaimon: In your above argument with Scott you are making an error. It is not directly related to measuring which-way information at some other point in space. Rather, since you care only about the reduced density matrix for the system in question, you get this by taking the partial trace over the environment. In a sense this encapsulates all possible measurements, but no measurement need ever be made. If the field and the particle are even weakly entangled, then this reduces the purity of the local system, so if you consider only the state of the particle it appears to decohere. – Joe Fitzsimons Aug 29 '12 at 5:18 show 24 more comments Gambini and Pullin have developed what they call the "Montevideo interpretation" of quantum theory in a series of papers. See e.g.
arxiv.org/abs/0903.2438 While their paper(s) may not answer the exact question Scott asked, they do address the underlying question of how gravitation affects decoherence (and thus the interpretation of quantum theory). - Thanks, Wolfgang! Now that I look, I actually saw that paper a while ago, and it might have subconsciously influenced my asking of this very physics.SE question... :-) – Scott Aaronson Aug 28 '12 at 13:35 I think you're getting a bit ahead of yourself. This seems to be a variation of the "Schrodinger's lump" thought experiment discussed by Penrose[1] as a motivation for his own theory of gravitational objective collapse. I think he makes an important point which is relevant to your example also, namely, the state that you write down in your Eq.(1) is not well-defined. Before we can ask questions about reversibility and dynamics in such a thought experiment, we need to explain what we mean by `a superposition of space-times'. In particular, superpositions of matter at different positions in quantum mechanics are only understood with reference to some background metric. If each of the terms in your superposition, $|L\rangle$ and $|R\rangle$, themselves correspond to different metrics, then with respect to whose time co-ordinate do they evolve (or remain static, as the case may be)? With respect to what background structure do we compare the two different metrics, each of which corresponds to the different positions of the mass? I challenge you to re-write the state of Eqn.(1) making the dependence on space-time co-ordinates explicit. I share your surprise that relatively little attention seems to have been given to such thought experiments. It seems to me that coming up with toy models to give consistent answers to questions such as this is a logical starting point in searching for a deeper theory. [1]: Gen. Rel. Grav. 28, 5, 581-600 (1996) EDIT: (in light of Scott's comment below) Okay, let us see how far we can get without worrying about the finer details. We set up a gravitational decoherence experiment a la Preskill, with the decoherence occurring on detection of a "hard" graviton by a detector. Since our unspecified theory of QG is unitary, there ought to be some way in principle for us to reverse the decoherence. A necessary condition is that the system + detector (S+D) must be enclosed within a boundary such that no which-path information can leak outside the boundary. We need to effectively isolate the system and detector from the environment. While it is possible to shield the S+D from electromagnetic leakage using mirrors, it is not obvious that we can stop the gravitons from leaking out. Trivially, we could do this by taking S+D to include the entire universe, but the lack of any external observer is problematic for the operational meaning of the experiment. Instead, let us simply assume that a gravitational mirror-box can be constructed. Would this solve our problem? It seems that it would. The combined system S+D would be effectively isolated, hence its evolution would be, by assumption, unitary and thus reversible. In particular, it would return to its initial state after the Poincaré recurrence time, leaving the detector disentangled from the system once more. The question, therefore, is whether a "gravitational shield" can be constructed in principle. At a glance, it appears not, since the equations of GR do not permit us to exclude any part of the energy-momentum tensor when using it to determine the (global) metric - at least as far as I know.
Note that this would not be an argument against "truly irreversible" gravitational decoherence, since we have excluded that possibility by the assumption of unitarity. - Thanks, Jacques! I agree that I was "getting ahead of myself," in the sense that, in retrospect, there were strictly more basic issues that I was already confused about. I also agree that my state (1) might not be well-defined in QG---yes, I've read Penrose about this, and his remarks were very much part of the motivation for this question! Thus, when I wrote equation (1), I really meant the following: "suppose we performed an experiment involving a beamsplitter, a large mass, etc., that, within the framework of conventional QM, would be expected to lead to the state (1)..." – Scott Aaronson Aug 28 '12 at 13:47 There is no decoherence from the near-field static gravitational field by itself, the static field is just superposed coherently along with the box mass distribution. The decoherence only comes when you have some quantum particle interacting with the gravitational field and deflected by a different amount for the two different fields, so that that different position of the mass leads to a different deflection for the particle. Then the two deflection states are entangled with the two different position states, and you lose coherence between the two. The same thing happens when you have a particle with an electrostatic field. The near field is superposed along with the particle when you superpose two position states, so you get a superposition of fields with two different centers. This superposition is not decohered, even though the field potentially extends arbitrarily far out. It becomes decohered when you shoot a particle through the electrostatic field which deflects by a different amount depending on which field is which, then the position superposition turns into a deflection superposition, and the deflection reduces the wavefunction. - 3 Thanks! But if the gravitational field can't decohere anything "on its own", why do people go on about it containing its own vast number of degrees of freedom in QG, which need to be counted to get anywhere close to saturating the holographic bound? – Scott Aaronson Aug 27 '12 at 4:21 1 @ScottAaronson: Are you talking about the gravitational field, or the entropy of the cosmological horizon? I don't understand the comment. A static field can't decohere anything, it is just sourced by the superposed thing so it ends up in a superposition. The degrees of freedom of a black hole or cosmological horizon are irrelevant. – Ron Maimon Aug 27 '12 at 8:46 2 Look, IANAP, but there's something strange about the view that fields are "just" sourced by particles and can never decohere anything on their own. For example, suppose the field disturbance is already out to Alpha Centauri, and only then do I move the objects back to their original state. Without violating causality, how do the objects "know" whether there were any particles in Alpha Centauri whose interaction with the field should have decohered them? Doesn't it at least take time to "uncompute" the field propagation? (And aren't the fields the basic DoFs in QFT anyway?) 
– Scott Aaronson Aug 27 '12 at 10:36 I freely confess that the alternative, that fields can decohere stuff all by themselves, doesn't make sense either, since it would suggest that interference experiments with (say) electrons ought to be impossible: after two electron states have generated different EM fields, no matter how much time elapses, the "correction" to the field from bringing the electron states back together can never propagate fast enough to get rid of the outermost shell of field disturbance, and thereby reverse the decoherence. Hence my confusion about the entire subject of decoherence and fields. – Scott Aaronson Aug 27 '12 at 10:58 1 @ScottAaronson: Moving the particle produces gravitons that decohere the particle's position. You don't need to uncompute anything. – Ron Maimon Aug 27 '12 at 21:01 Yes, you can get gravity induced decoherence for a massive body provided it takes at least two different trajectories, and then both path come back again to the same location (otherwise, how can we tell interference has vanished?). But the paths have to differ for at least as long as the decoherence time, which can be very very long for bodies with low mass. In practice, decoherence by other sources will dominate. The real problem comes when you have massive matter with many microstates. Gravity can decohere maybe the center-of-mass position and velocity, and maybe some coarse grained energy-momentum distribution, but there are many finer details which aren't decohered by gravity, but are still decohered by other more mundane mechanisms, like collisions with environmental photons and molecules. - Here is an extended answer that concludes Summary   On entropic grounds, gravitational radiative decoherence is similarly irreversible to all other forms of radiative decoherence, and in consequence, Nature's quantum state-spaces are effectively low-dimension and non-flat. Update B  For further discussion and references, see this answer to the CSTheory.StackExchange question "Physical realization of nonlinear operators for quantum computers." Update A  This augmented survey/answer provides an entropically naturalized and geometrically universalized survey of the physical ideas that are discussed by Jan Dereziski, Wojciech De Roeck, and Christian Maes in their article Fluctuations of quantum currents and unravelings of master equations (arXiv:cond-mat/0703594v2).  Especially commended is their article's "Section 4: Quantum Trajectories" and the extensive bibliography they provide. By deliberate intent, this survey/answer relates also to the lively (and ongoing) public debate that is hosted on Gödel's Lost Letter and P=NP, between Aram Harrow and Gil Kalai, regarding the feasiblity (or not) of scalable quantum computing. ### Naturalized survey of thermodynamics We begin with a review, encompassing both classical and quantum thermodynamical principles, following the exposition of Zia, Redish, and McKay's highly recommended Making sense of the Legendre transform (AJP, 2009). 
The fundamental thermodynamical relations are specified as $$\Omega(E)=e^{\mathcal{S}(E)}\,, \quad\qquad Z(\beta)=e^{-\mathcal{A}(\beta)}\,,\\[2ex] \frac{\partial\,\mathcal{S}(E)}{\partial\,E} = \beta\,, \quad\qquad \frac{\partial\,\mathcal{A}(\beta)}{\partial\,\beta}= E\,,\\[3ex] \mathcal{S}(E) + \mathcal{A}(\beta) = \beta E\,.$$ In these relations the two conjugate thermodynamic variables $$E := \text{total energy}\,, \quad\qquad \beta := \text{inverse temperature}\,,$$ appear as arguments of four fundamental thermodynamic functions $$\mathcal{S} := \text{entropy function}\,, \quad\qquad \mathcal{A} := \text{free energy function}\,, \\ {Z} := \text{partition function}\,, \quad\qquad {\Omega} := \text{volume function}\,.$$ Any one of the four thermodynamic potentials $(\mathcal{S},\mathcal{A},Z,\Omega)$ determines the other three via elementary logarithms, exponentials, Laplace Transforms, and Legendre transforms, and moreover, any of the four potentials can be regarded as a function of either of the two conjugate variables. Aside  The preceding relations assume that only one quantity is globally conserved and locally transported, namely the energy $E$.  When more than one quantity is conserved and transported — charge, mass, chemical species, and magnetic moments are typical examples — then the above relations generalize naturally to a vector space of conserved quantities and a dual vector space of thermodynamically conjugate potentials. None of the following arguments are fundamentally altered by this multivariate thermodynamical extension. ### Naturalized survey of Hamiltonian dynamics To make progress toward computing concrete thermodynamic potential functions, we must specify a Hamiltonian dynamical system.  In the notation of John Lee's Introduction to Smooth Manifolds we specify the Hamiltonian triad $(\mathcal{M},H,\omega)$ in which $$\begin{array}{rl} \mathcal{M}\ \ :=&\text{state-space manifold}\,,\\ H\,\colon \mathcal{M}\to\mathbb{R}\ \ :=&\text{Hamiltonian function on $\mathcal{M}$}\,,\\ \omega\,\llcorner\,\colon T\mathcal{M}\to T^*\mathcal{M}\ \ :=& \text{symplectic structure on $\mathcal{M}$}\,. \end{array}\hspace{1em}$$ The dynamical flow generator $X\colon \mathcal{M}\to T\mathcal{M}$ is given by Hamilton's equation $$\omega\,\llcorner\,X = dH\,.$$ From the standard (and geometrically natural) ergodic hypothesis — that thermodynamic ensembles of Hamiltonian trajectories fill state-spaces uniformly, and that time averages of individual trajectories equal ensemble averages at fixed times — we have ${\Omega}$ given naturally as a level set volume $$\text{(1a)}\qquad\qquad\quad\quad \Omega(E) = \int_\mathcal{M} \star\,\delta\big(E-H(\mathcal{M})\big)\,, \qquad\qquad\qquad\qquad\qquad$$ where "$\star$" is the Hodge star operator that is associated to the natural volume form $V$ on $\mathcal{M}$ that is given as the maximal exterior power $V=\wedge^{(\text{dim}\,\mathcal{M})/2}(\omega)$.  This expression for $\Omega(E)$ is the geometrically naturalized presentation of Zia, Redish, and McKay's equation (20). Taking a Laplace transform of (1a) we obtain an equivalent (and classically familiar) expression for the partition function $Z(\beta)$ $$\text{(1b)}\qquad\qquad\qquad Z(\beta) = \int_\mathcal{M} \star\exp\big({-}\beta\,H(\mathcal{M})\big)\,, \qquad\qquad\qquad\qquad$$ The preceding applies to Hamiltonian systems in general and thus quantum dynamical systems in particular.  Yet in quantum textbooks the volume/partition functions (1ab) do not commonly appear, for two reasons.  
The first reason is that John von Neumann derived in 1930 — before the ideas of geometric dynamics were broadly extant — a purely algebraic partition function that, on flat state-spaces, is easier to evaluate than the geometrically natural (1a) or (1b). Von Neumann's partition function is $$\text{(2)}\qquad Z(\beta) = \text{trace}\,\exp{-\beta\,\mathsf{H_{op}}} \quad\text{where}\quad [\mathsf{H_{op}}]_{\alpha\gamma} = \partial_{\,\bar\psi_\alpha}\partial_{\,\psi_\gamma} H(\mathcal{M})\,. \qquad\qquad$$ Here the $\boldsymbol{\psi}$ are the usual complete set of (complex) orthonormal coordinate functions on the (flat, Kählerian) Hilbert state-space $\mathcal{M}$.  Here $H(\mathcal{M})$ is real and the functional form of $H(\mathcal{M})$ is restricted to be bilinear in $\boldsymbol{\bar\psi},\boldsymbol{\psi}$; therefore the matrix $[\mathsf{H_{op}}]$ is Hermitian and uniform on the state-space manifold $\mathcal{M}$.  We appreciate that $Z(\beta)$ as defined locally in (2) is uniform globally iff $\mathcal{M}$ is geometrically flat; thus von Neumann's partition function does not naturally extend to non-flat complex dynamical manifolds. We naively expect (or hope) that the geometrically natural thermodynamic volume/partition functions (1ab) are thermodynamically consistent with von Neumann's elegant algebraic partition function (2), yet — surprisingly and dismayingly — they are not. Surprisingly, because it is not immediately evident why the geometric partition function (1b) should differ from von Neumann's partition function (2). Dismayingly, because the volume/partition functions (1ab) pull back naturally to low-dimension non-flat state-spaces that are attractive venues for quantum systems engineering, and yet it is von Neumann's partition function (2) that accords with experiment. We would like to enjoy the best of both worlds: the geometric naturality of the ergodic expressions (1ab) and the algebraic naturality of von Neumann's entropic expression (2). The objective of restoring and respecting the mutual consistency of (1ab) and (2) leads us to the main point of this answer, which we now present. ### The main points:  sustaining thermodynamical consistency Assertion I  For (linear) quantum dynamics on (flat) Hilbert spaces, the volume function $\Omega(E)$ and partition function $Z(\beta)$ from (1ab) are thermodynamically inconsistent with the partition function $Z(\beta)$ from (2). Here by "inconsistent" is meant not "subtly inconsistent" but "grossly inconsistent".  As a canonical example, the reader is encouraged to compute the heat capacity of an ensemble of weakly interacting qubits by both methods, and to verify that (1ab) predict a heat capacity for an $n$-qubit system that is superlinear in $n$ (a sketch of the von Neumann-side computation is given below). To say it another way, for strictly unitary dynamics (1ab) predict heat capacities that are non-intensive. So the second — and most important — reason that the volume/partition functions (1ab) are not commonly given in quantum mechanical textbooks is that strictly unitary evolution on strictly flat quantum state-spaces yields non-intensive predictions for thermodynamic quantities that experimentally are intensive. Fortunately, the remedy is simple, and indeed has long been known: retain the geometric thermodynamic functions (1ab) in their natural form, and instead alter the assumption of unitary evolution, in such a fashion as to naturally restore thermodynamic extensivity.
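A minimal sketch of the von Neumann-side heat-capacity computation referred to under Assertion I, assuming $n$ noninteracting qubits, each with level splitting $\epsilon$, and units with $k_B = 1$:
$$Z(\beta) = \big(1+e^{-\beta\epsilon}\big)^{n}, \qquad E(\beta) = -\partial_\beta \ln Z(\beta) = \frac{n\,\epsilon}{1+e^{\beta\epsilon}}, \qquad C = \frac{\partial E}{\partial T} = n\,\frac{(\beta\epsilon)^{2}\,e^{\beta\epsilon}}{\big(1+e^{\beta\epsilon}\big)^{2}}\,,$$
which is linear in $n$; the assertion above is that the ergodic volume/partition functions (1ab), evaluated with strictly unitary dynamics on the full $2^n$-dimensional Hilbert space, do not reproduce this linear scaling.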
Assertion II  Lindbladian noise of sufficient magnitude to spatially localize thermodynamic potentials, when unraveled as non-Hamiltonian (stochastic) quantum trajectories, restores the thermodynamical consistency of the volume/partition functions $(\Omega(E),Z(\beta))$ from (1ab) with the partition function $Z(\beta)$ from (2). Verifying Assertion II is readily (but tediously) accomplished by the Onsager-type methods that are disclosed in two much-cited articles: Hendrik Casimir's On Onsager's Principle of Microscopic Reversibility (RMP 1945) and Herbert Callen's The Application of Onsager's Reciprocal Relations to Thermoelectric, Thermomagnetic, and Galvanomagnetic Effects (PR, 1948).  A readable textbook (among many) that covers this material is Charles Kittel's Elementary Statistical Physics (1958). To help in translating Onsager theory into the natural language of geometric dynamics, a canonical textbook is John Lee's Introduction to Smooth Manifolds (2002), which provides the mathematical toolset to appreciate the research objectives articulated in (for example) Matthias Blau's on-line lecture notes Symplectic Geometry and Geometric Quantization (1992). Unsurprisingly, in light of modern findings in quantum information theory, the sole modification that naturality and universality require of Onsager's theory is this: the fluctuations that are the basis of Onsager's relations must be derived naturally from unravelled Lindblad processes, by the natural association of each Lindbladian generator to an observation-and-control process. We note that it is neither mathematically natural, nor computationally unambiguous, nor physically correct, to compute Onsager fluctuations by non-Lindbladian methods. For example, wrong answers are obtained when we specify Onsager fluctuations as operator expectation fluctuations, because this procedure does not account for the localizing effects of Lindbladian dynamics. Concretely, the fluctuating quantities that enter in the Onsager formulation are given as the data-streams that are naturally associated to Lindbladian observation processes … observation processes that are properly accounted in the overall system dynamics, in accord with the teaching of quantum information theory. Thereby Onsager's classical thermodynamical theory of global conservation and local transport processes straightforwardly naturalizes and universalizes — via the mathematical tool-set that quantum information theory provides — as a dynamical theory of the observation of natural processes. Physical summary  Consistency of the geometrically natural thermodynamic functions (1ab) with the algebraically natural thermodynamic function (2) is restored because the non-unitary stochastic flow associated to unraveled Lindbladian noise reduces the effective dimensionality of the quantum state-space manifold, and also convolutes the quantum state-space geometry, in such a fashion that as to naturally reconcile geometric descriptions of thermodynamics (1ab) with von Neumann-style algebraic descriptions of thermodynamics (and information theory) on Hilbert state-spaces (2). Assertion III  The thermodynamic consistency requires that, first, quantum dynamical flows be non-unitary and that, second, the resulting trajectories be restricted to non-flat state-spaces of polynomial dimensionality. 
We thus appreciate the broad principle that quantum physics can make sensible predictions regarding physical quantities that are globally conserved and locally transported only by specifying non-unitary dynamical flows on non-flat quantum state-spaces. Duality of classical physics versus quantum physics  The above teaching regards "classical" and "quantum" as well-posed and mutually consistent limiting cases of a broad class of naturalized and universalized Hamiltonian/Kählerian/Lindbladian dynamical frameworks.  For practical purposes the most interesting dynamical systems are intermediate between fully classical and fully quantum, and the thrust of the preceding analysis is that the thermodynamical properties of these systems are naturally and universally defined, calculable, and observable. Duality of fundamental physics versus applied physics  The fundamental physics challenge of constructing a thermodynamically and informatically consistent description of non-unitary quantum dynamics on non-flat complex state-spaces — a challenge that is widely appreciated as difficult and perhaps even impossible — is appreciated as dual to the practical engineering challenge of efficiently simulating noisy quantum system dynamics … a challenge that is widely appreciated as feasible. Remarks upon gravitational decoherence  Supposing the decoherence associated to gravitational coupling — and more broadly to the ubiquitous superradiant dynamics associated to every bosonic field of the vacuum — to be "irreversible" (in Scott's phrase), the above analysis establishes that this would have the following beneficent implications: • the naturality and universality of thermodynamics is thereby preserved, and • quantum trajectories are effectively restricted to low-dimension non-flat state-spaces, and • the efficient numerical simulation of generic quantum systems is thus permitted. From a fundamental physics point-of-view, the converse hypothesis is attractive: Kählerian hypothesis  Nature's quantum state-spaces are generically low-dimension and non-flat in consequence of irreversible decoherence mechanisms that are generically associated to bosonic vacuum excitations. ### Conclusions As with the ergodic hypothesis, so with the Kählerian hypothesis, in the sense that regardless of whether the Kählerian hypothesis is fundamentally true or not — and regardless of whether gravitational radiation accounts for it or not — for practical quantum systems engineering purposes experience teaches us that the Kählerian hypothesis is true. The teaching that the Kählerian hypothesis is effectively true is good news for a broad class of 21st century enterprises that seek to press against quantum limits to speed, sensitivity, power, computational efficiency, and channel capacity … and it is very good news especially for the young mathematicians, scientists, engineers, and entrepreneurs who hope to participate in creating these enterprises. Acknowledgements  This answer benefited greatly from enjoyable conversations with Rico Picone, Sol Davis, Doug and Chris Mounce, Joe Garbini, Steve Flammia, and especially Aram Harrow; any remaining errors and infelicities are mine alone. The answer is also very largely informed by the ongoing debate of Aram Harrow with Gil Kalai, regarding the feasibility (or not) of scalable quantum computing, that has been hosted on the web page Gödel's Lost Letter and P=NP, regarding which appreciation and thanks are extended.
Sorry John, but I'm stuck on a very basic point. Suppose it were true that one could use thermodynamic arguments to show that radiative decoherence was "irreversible." Then why wouldn't the same arguments work in cases like Zeilinger et al's buckyball experiment, where we know that "decoherence" CAN be reversed? You might answer: my arguments only work for systems for which thermodynamics is relevant. But that brings us to the crux: for which systems IS thermodynamics relevant? Thermodynamics is an effective theory, and invoking it here seems to presuppose the answer you want. – Scott Aaronson Aug 28 '12 at 20:08

(To illustrate, suppose that Zeilinger et al succeed in recohering the two paths of a buckyball. Then we can conclude, after the fact, that the experiment did NOT increase the buckyball's entropy, so thermodynamics wasn't the right language to describe what was happening. But this seems to reduce your argument to a tautology: decoherence is irreversible whenever it can't be reversed!) – Scott Aaronson Aug 28 '12 at 20:13

Scott, I think we may even agree, though we prioritize our main points differently. For me, the main point is that any quantum theory that provides thermodynamically consistent descriptions of localized transport of globally conserved quantities must entail non-unitary flow on a non-flat low-dim Kahlerian state-space. Your point too is valid --- even equivalent! --- namely Zeilinger-type buckyball experiments succeed iff transport of the conserved quantity (mass) is not spatially localizable. And this accords with our everyday experience that QM is locally Hilbert, globally not, eh? – John Sidles Aug 29 '12 at 0:01

LOL ... maybe I'd better say too, that I told Aram Harrow yesterday that I'd let these ideas lie fallow for a few days ... on the grounds that some tricky practical considerations regarding the efficient simulation of quantum transport are associated to them! And so, there is a pretty considerable chance that in the next week or so, some of the above points will be reconsidered (by me) and extended or rewritten. Therefore Scott, please consider your question to be answered in the same spirit it was asked. That is why both your question and your comments (above) are greatly appreciated. :) – John Sidles Aug 29 '12 at 0:15

I'm glad you appreciate my comments! But I'm stuck on the fact that your arguments seem to give no concrete guidance about which sorts of systems can be placed in coherent superposition and which ones can't. For example: can a virus be placed in superposition of two position states 1 meter apart? How about a 1kg brick? A human brain? Saying QM is "locally Hilbert, globally not" doesn't help me too much unless you can say where the boundary between "local" and "global" resides, and why no technology will ever be able to cross that boundary! – Scott Aaronson Aug 29 '12 at 0:28

In order for gravity to decohere a quantum system, that system has to emit at least one graviton. Let's say the graviton is emitted in a certain direction at a certain time, up to the limits of resolution given by the spread in the graviton wavepacket. Now suppose there is another quantum system lying in the same direction which could also have emitted a graviton in the same direction at a time lag later given by the time it takes for light (speed of light = speed of graviton) to travel from the first to the second system.
The point is, detecting a graviton moving in that direction at some time still doesn't enable us to distinguish which of the two quantum systems emitted the graviton. It could have been the first, since matter (i.e., the second system) interacts so weakly with gravitons that it is transparent to them. It could also have been the second. The resolution is poor. In general, the amount of information decohered by outgoing information — which can include gravitons, photons, or more massive matter — only scales as the area of the enclosing boundary, while the number of events inside scales as the volume. This limits the "decoherence resolution" by outgoing signals far away, assuming there is matter distributed all over the interior volume. If there is only one quantum system of size L in the middle surrounded by a vacuum all the way around it, this ambiguity problem wouldn't exist, but our universe isn't like that, at least, not in FRW models. As noted by other posters, in order to demonstrate the suppression of interference, some matter has to take a superposition of at least two different paths, but then merge back to the same location after a time period $T$. Any decohering emitted graviton has to have a frequency of at least $1/T$. This means we can disregard soft gravitons with frequencies much less than $1/T$. All the other answers which mention soft gravitons are missing the point. Also, as noted by others, decoherence by other sources dominates over gravitational decoherence by far because gravity is the weakest force at distance scales relevant to us. -
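To put a rough number on the area-versus-volume point above (this back-of-the-envelope restatement is mine, not the original poster's): for a region of linear size $R$ filled with emitters at roughly constant density, $$N_{\mathrm{signals}} \propto R^{2}, \qquad N_{\mathrm{events}} \propto R^{3}, \qquad \frac{N_{\mathrm{signals}}}{N_{\mathrm{events}}} \propto \frac{1}{R},$$ so the fraction of interior events that outgoing quanta crossing the boundary can resolve shrinks as the region grows.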
http://mathoverflow.net/questions/5357/theorems-for-nothing-and-the-proofs-for-free/15959
Theorems for nothing (and the proofs for free) [closed]

Some theorems give far more than you feel they ought to: a weak hypothesis is enough to prove a strong result. Of course, there's almost always a lot of machinery hidden below the waterline. Such theorems can be excellent starting-points for someone to get to grips with a new(ish) subject: when the surprising result is no longer surprising then you can feel that you've gotten it. Let's have some examples. -

I like the Dire straits reference ;) – Grétar Amazeen Nov 13 2009 at 14:58

Time to put this one to bed. (ie, time to close it, I deem.) – Andrew Stacey Jun 23 2010 at 18:06

What happened between November and June that rendered this question no longer relevant? – I. J. Kennedy Oct 9 2010 at 23:52

Wow Andrew. I am not normally surprised when I see you on the list of closers of a question (and most times it is actually there for a good reason), but I did not expect you to go that far, particularly leaving us the mystery of how your question all of a sudden became no longer relevant. Did you intend to use the theorems-for-nothing list, e.g. for a popular talk? If yes, what did you use and how well was it absorbed? But even then, I am not sure whether the question is no longer relevant just because it is not relevant to you... – darij grinberg May 4 2011 at 7:06

Darij: Vaguely relevant meta threads: meta.mathoverflow.net/discussion/210 and meta.mathoverflow.net/discussion/459 (If you really want to discuss this, start a thread on meta about it) – Andrew Stacey May 4 2011 at 9:01

25 Answers

Every compact metric space is (unless it's empty) a topological quotient of the Cantor set. What, every compact metric space? Yes, every compact metric space. -

That's quite surprising! What are some good references for this? – Justin DeVries Nov 13 2009 at 16:59

Surprising, yes, but once you know about it, it seems easy enough to cook up a proof. Just write the set as a union of two closed subsets, decide to map the left half of the Cantor set onto one and the right half to the other, then do the same to each of these two sets, and so on. In the limit you have the map you want, provided you have arranged for the diameters of the parts to go to zero. – Harald Hanche-Olsen Nov 13 2009 at 19:04

Right! Once you know it, it's fine. But I think it's capable of changing one's intuition on what spaces and maps are. After all, the Cantor set is just a sprinkling of dust; how could it be capable of covering a big fat space like the 3-ball? – Tom Leinster Nov 14 2009 at 0:25

After this theorem has done its job changing your intuition, though, it's pretty easy to believe. A surjective continuous map glues stuff together. And the Cantor set is not "just" a sprinkling of dust; it's a sprinkling of a whole lot of rather clumpy dust. So it shouldn't be surprising that you can glue all this clumpy dust into many different forms. – Mark Meckes Dec 2 2009 at 14:36

I also like the fact that every countable compact Hausdorff space is homeomorphic to a countable successor ordinal equipped with its order topology. This is the Mazurkiewicz-Sierpinski theorem, published originally in French (I think) but also available in English in Z. Semadeni's book 'Banach spaces of continuous functions' in section 8 (the chapter on compact 0-dimensional spaces).
A proof of the Alexandroff-Hausdorff theorem (i.e., every compact metric space is a continuous image of the Cantor set) is also there, as well as a bunch of other tasty topology. – Philip Brooker Feb 21 2010 at 12:04

For me, the theorem that every subgroup of a free group is free is a good example of this: it seems to come for free from covering spaces and the fundamental group, but really all the heavy machinery is just moved underground. -

Wedderburn's theorem: "Every finite division ring is a field." This is really astonishing if you think of quaternions: nothing analogous in the finite case. Then of course the classification of finite fields is also very beautiful: exactly one with $p^n$ elements ($p$ a prime and $n$ an integer) and no others. And as a bonus, Wedderburn's theorem is one of the crispest in all of mathematics: seven words (or six and a half if you replace division ring by skew-field). -

You can save one word by replacing "a field" by "commutative" (but maybe we should count syllables rather than words). – Andreas Blass Jun 20 2011 at 19:44

Great idea, Andreas, thanks! – Georges Elencwajg Jun 21 2011 at 17:23

I had that feeling of getting more than you ought to a couple of weeks ago when reading the first chapter of Rota and Klain's Introduction to Geometric Probability. In particular, I was familiar with the usual derivation of the probability of Buffon's needle crossing a line. So it was amazing to read the solution to a harder problem, Buffon's noodle, which is solved by appealing to a much simpler seeming general symmetry argument. And like you describe, it forms a kind of teaser trailer to draw you into the rest of the subject. -

I agree, this is a completely wonderful argument. It's also a spectacular example of a more general theorem that's easier to prove. – Tom Leinster Nov 13 2009 at 18:00

Here is a related discussion gilkalai.wordpress.com/2009/08/03/… – Gil Kalai Nov 21 2009 at 11:15

Isn't almost every theorem in mathematics an example of a theorem "for free"? One defines natural numbers, and then it follows each of them is a sum of four squares; one defines a notion of a continuous function and of Euclidean space, and Brouwer's fixed point theorem follows. Surely, that is amazing! With that said, here are a handful of examples that lie closer to the surface:

1) Complex-differentiable functions are infinitely-differentiable, and in fact analytic.
2) A function of several complex variables that is holomorphic in each variable is holomorphic in all of them (if it reminds you of the 'theorem' that a function that is continuous in each variable separately is continuous... well, then, it should). That is Hartogs' theorem.
3) Any bound on the error term in the prime number theorem of the form $\psi(x)=x+O_{\varepsilon}(x^{a+\varepsilon})$ implies the bound $\psi(x)=x+O(x^a \log x)$.
4) Morally related to (3) is the tensor power trick, of which the earliest widely-known example is perhaps the proof of the Cotlar-Stein lemma. One of my favorite examples is lemma 2.1 from a paper of Katz and Tao on Kakeya's conjecture. -

Four squares is nothing, every natural number is also the sum of three triangular numbers. – Zsbán Ambrus Apr 16 2010 at 19:32

Faithfully-flat descent: It tells you that you can construct quasicoherent sheaves locally on a faithfully-flat cover.
This is pretty amazing, because quasicoherent sheaves are, a priori, only Zariski local. So specifying a sheaf requires a lot less data than it initially appears. -

My "canonical example" is Banach-Steinhaus in functional analysis: that, in nice locally convex topological spaces (Banach will do), weakly bounded (or pointwise bounded) implies bounded. The machinery is quite technical, usually involving the Baire category theorem, but the result is very simple and very surprising. One especial point I like about this is that when you compare normed vector spaces with Banach spaces, then the process of adding more stuff (i.e. completion) actually limits the things that can go wrong. My intuition is that if you want to limit the bad behaviour then you need to work in smaller spaces rather than larger. -

My intuition (for this kind of issue, anyway) is actually the opposite. If you work with a larger space, then there's more "stuff" that nice things (functions, sequences, whatever) have to play nicely with. So the bigger the space, the nicer they must be. – Mark Meckes Nov 13 2009 at 15:12

I agree with Mark: adding stuff tends to rigidify things, think for example of localization. – Alex Collins Nov 13 2009 at 15:18

I agree. It always does seem like you get something for nothing. – Dinakar Muthiah Nov 13 2009 at 15:59

There is a theorem of Zabreiko which extracts the juice of Baire Category, and by invoking it the Banach-Steinhaus, Open Mapping and Closed Graph theorems follow easily. It says: Every countably subadditive seminorm on a Banach space is continuous. Unfortunately I don't know a good reference. – Abhishek Parab Feb 21 2010 at 2:47

@Abhishek Parab: Zabreiko's theorem is proved in Megginson's book 'An Introduction to Banach Space Theory'. It is near the beginning of Section 1.6, which is entitled 'Three Fundamental Theorems'. – Philip Brooker Feb 21 2010 at 11:45

Kuratowski's theorem is a great example of a theorem of the form "the only obstructions are the obvious ones," which are always fun to learn about. -

I can't resist mentioning the Cayley-Hamilton theorem. Something intuitively correct turns out to be mathematically correct too, but for non-intuitive reasons! I still remember, its proof (I'm here referring to the one using the correspondence between operation and representation) worked from my perspective like magic: clear, simple, non-trivial and beautiful, and it also made me interested in algebra, beyond the lecture in linear algebra for first-year students. It was a nice time... -

Indeed this is a wonderful theorem. Why is it intuitively correct? Of all the first year algebra theorems it was the one where I had no intuition whatsoever. – Gil Kalai Nov 22 2009 at 6:48

The cheezy-easy proof that works over the real or complex numbers is to observe that diagonalizable matrices are dense in the space of matrices, and the theorem is true for diagonalizable matrices (by computation); then notice that the set of matrices that satisfy the theorem is closed. If you want to avoid this kind of argument you can enhance your intuition with the Jordan Canonical Form. :) – Ryan Budney Nov 22 2009 at 10:19

Then you just realize that det(tI-A) evaluated at A is some matrix whose entries are monstrously complicated polynomials in the n^2 entries of the matrix A, and since they're identically 0 on C^{n^2} each of those entries must be the zero polynomial; thus the theorem holds over any commutative ring as well.
– Steven Sivek Nov 22 2009 at 13:46

Gil: maybe what was meant was the following: consider det(tI-A), and plug in A for t. Personally I wouldn't say this makes C-H "intuitively correct"; instead C-H is suggested by this simple heuristic. – Mark Meckes Nov 23 2009 at 14:03

Tychonoff's theorem — product of any collection of compact spaces is still compact — is amazing and incredibly useful. -

It is not surprising thinking of net convergence and that the product topology is not the box topology (which is not compact). – Martin Brandenburg Dec 30 2009 at 2:35

The Kline sphere characterization, proven by Bing: A compact connected metric space (with at least two points) is the 2-sphere if and only if every circle separates and no pair of points does. -

Nitpick: A singleton set seems to be an exception. The wikipedia article seems to have missed that. I'll edit it later if nobody beats me to it. (I have a bus to catch.) – Harald Hanche-Olsen Nov 13 2009 at 19:11

Thanks. Corrected. – Richard Kent Nov 13 2009 at 19:24

Once the machinery of (co)homology is developed, Brouwer's Fixed Point Theorem seems to come for free: it's extremely straightforward to prove and has quite a lot of important consequences. -

I'm not sure I really understand the question though; do you just mean surprisingly easy to prove results (that have many substantial consequences)? – Sam Derbyshire Nov 13 2009 at 20:50

Unfortunately, a lot of these kinds of statements in combinatorics are only conjectural. One example (again, only conjectural) that came up in conversation the other day doesn't give a particularly natural result, but it's hugely surprising: the Erdos-Gyarfas conjecture in graph theory, which has pretty much the weakest possible condition for any statement of its form. Now that I think about it, though, Ramsey theory is all about "theorems for nothing." I'm a big fan of the sunflower lemma when it comes to Ramsey-theoretical statements that deserve to be better known -- the only condition there is that your sets have to be relatively small, and there have to be a lot of them. (And that second part is conjecturally not even necessary...) -

To me, the canonical example is the Poincare Conjecture. Why SHOULD a closed three dimensional manifold with trivial fundamental group actually be the sphere? In higher dimensions, there are LOTS of simply connected things, but in two and three, being simply connected and compact determines the manifold uniquely. -

The proof in this case seems rather pricey. – Ryan Budney Nov 13 2009 at 18:59

Well, there's a lot of machinery hidden underneath it, yeah. But the statement looks like you're getting a huge amount of specificity from just a small hypothesis. – Charles Siegel Nov 13 2009 at 19:26

That there are infinitely many primes has some simple proofs, but I remember being shown that the sum of the reciprocals of the primes diverges, which had some more machinery in it that was kind of neat to my mind. -

I am not sure I fully understand the question. Is it the case that the theorem itself gives you huge mileage while its proof is extremely difficult (the characterization of finite simple groups is an ultimate example; the Atiyah-Singer index theorem and the BBD(G)-decomposition theorem are other examples), or is it the case that understanding the proof (which is feasible) gives you a lot of mileage and a feeling that you have come to grips with the subject?
Anyway, a theorem which, to some extent, has both these features is Adams's theorem asserting that $d$-dimensional vectors form an algebra (even non-associative) in which division (except by 0) is always possible only for $d = 1$, 2, 4, and 8. (In these cases there are examples: the Complex, Quaternion and Cayley algebras.) -

Although not exactly what you're after, the question reminds me of Reynolds' parametricity theorem, or as Philip Wadler puts it: Theorems for Free! The basic idea is that a polymorphic construction (in a polymorphic lambda calculus) must behave uniformly, and so must preserve relations. For example, any term of type $\Pi X. X\to X$ must be the identity function, and every term of type $\Pi X Y. X\times Y\to X$ must be the first projection. -

The Gauss-Bonnet theorem is a deep result relating the geometry of a surface to its topology, and its proof is very simple (the local version comes almost from nothing, and the main difficulties for the global one are topological results about triangulations). Also, it has some amazing corollaries: the integral of the gaussian curvature over a compact orientable surface is a topological invariant ($\iint_{S} K\, d\sigma = 2\pi\chi(S)$, where $\chi(S)$ is the Euler-Poincaré characteristic of $S$); every compact regular surface with positive gaussian curvature is homeomorphic to the sphere $S^2$; and so on. -

The only group of order $p$, with $p$ a prime, is $\mathbb{Z}/p\mathbb{Z}$. -

I'd say the Tutte-Berge formula, which is a wonderful result that tells you (almost) everything you want to know about matchings in graphs. Although there are many proofs of this theorem, there is a beautiful proof for free using matroids. Strictly speaking, there is a proof for free of Gallai's Lemma (from which Tutte-Berge follows easily).

Gallai's Lemma. Let $G$ be a connected graph such that $\nu(G-x)=\nu(G)$, for all $x \in V(G)$. Then $|V(G)|$ is odd and def$(G)=1.$

Remark: $\nu(G)$ is the size of a maximum matching of $G$, and def$(G)$ denotes the number of vertices of $G$ not covered by a maximum matching.

Proof for free. In any matroid $M$ define the relation $x \sim y$ to mean $r(x)=r(y)=1$ and $r(\{x,y\})=1$, or $x=y$. (Here, $r$ is the rank function of $M$.) We say that $x \sim^* y$ if and only if $x \sim y$ in the dual of $M$. It is trivial to check that $\sim$ (and hence also $\sim^*$) defines an equivalence relation on the ground set of $M$. Now let $G$ satisfy the hypothesis of Gallai's Lemma and let $M(G)$ be the matching matroid of $G$. By hypothesis, $M(G)$ does not contain any co-loops. Therefore, if $x$ and $y$ are adjacent vertices we clearly have $x \sim^* y$. But since $G$ is connected, this implies that $V(G)$ consists of a single $\sim^*$ equivalence class. In particular, $V(G)$ has co-rank 1, and so def$(G)=1$, as required.

Edit. For completeness, I decided to include the derivation of Tutte-Berge from Gallai's lemma. Choose $X \subset V(G)$ maximal such that def$(G-X) -|X|=$ def$(G)$. By maximality, every component of $G-X$ satisfies the hypothesis in Gallai's lemma. Applying Gallai's lemma to each component, we see that $X$ gives us equality in the Tutte-Berge formula. -

The Riesz-Thorin interpolation theorem; the complex analysis behind it never fails to surprise me. -

Artin-Schreier Theorem: If $k$ is a field of characteristic $p$, strictly contained in its algebraic closure $K$, and such that $[K:k]$ is finite, THEN (this was surprising for me..) $p$ is actually $0$, $K = k(\sqrt{-1})$, and $k$ is a real closed field!
A not so well known but deserving result from the "failed" thesis of Abhyankar: If $K$ and $L$ are algebraically closed fields contained in another algebraically closed field, then the compositum $KL$ is not necessarily algebraically closed. -

Abhyankar's result is probably not that surprising to many of us. But I was simply amazed, since we take algebra in undergrad and know algebraically closed fields and compositums and we hardly ask that question... I needed to answer that question later while writing my PhD, and to my surprise Abhyankar was doing the same in his thesis. – Jose Capco Nov 21 2009 at 22:01

Oh! From uniqueness of the countable dense linear order without endpoints: take (for instance) a countable ordinal $\lambda$, and consider the anti-lex order on $\mathbb{Q}\times\lambda$. This is a countable dense linear order without endpoints, so it's order-isomorphic to $\mathbb{Q}$; in particular, $\mathbb{Q}$ contains a subset with order-type $\lambda$ --- e.g. the isomorphs of the elements $(\frac{5}{8},j)$. The same result for subsets of $\mathbb{R}$ is a more usual application of transfinite induction/AC/Zorn's lemma; here it's all hidden in the $\aleph_0$-categoricity result about dense linear orders without endpoints. -

I like the theorem, I think it's Gallagher's, that says: Most polynomials with integer coefficients are irreducible and have the full symmetric group as Galois group (over the rational numbers). The precise formulation asserts that the number of bad polynomials, i.e., the number of polynomials $X^r + a_1 X^{r-1} + \cdots + a_r$ with $|a_i|\leq N$ that DO NOT have the full symmetric group as Galois group, is $$O(r^3(2N+1)^{r-\frac{1}{2}}\log N)$$ (out of $(2N+1)^r$ polynomials). -

Another good example is the Johnson-Lindenstrauss Lemma, which says that any $n$ points in a Hilbert space can be embedded in an $O(\log n)$-dimensional Euclidean space with all pairwise distances preserved up to a factor $1\pm\varepsilon$, for any fixed $\varepsilon>0$ (the constant in the $O(\log n)$ depends on $\varepsilon$). It turns out that JL-style results crop up in many different versions; the main result itself has proofs ranging from 1 page to 10 pages, and it just keeps on giving :) -
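The Johnson-Lindenstrauss lemma mentioned in the last answer is easy to experiment with numerically. The sketch below is my own illustration, not part of the thread; the constant $4/\varepsilon^2$ in the target dimension is one common, non-optimal choice.

```python
import numpy as np

def jl_embed(points, eps=0.3, seed=0):
    """Random Gaussian projection illustrating the Johnson-Lindenstrauss lemma.

    Maps n points in R^d to k = O(log n / eps^2) dimensions; with high
    probability every pairwise distance is preserved within a 1 +/- eps factor.
    """
    rng = np.random.default_rng(seed)
    n, d = points.shape
    k = int(np.ceil(4 * np.log(n) / eps ** 2))   # common, non-optimal constant
    proj = rng.normal(size=(d, k)) / np.sqrt(k)  # 1/sqrt(k) keeps expected norms
    return points @ proj

# Quick distortion check on random data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))
Y = jl_embed(X)
d_orig = np.linalg.norm(X[3] - X[17])
d_new = np.linalg.norm(Y[3] - Y[17])
print(d_new / d_orig)   # typically close to 1
```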
http://unapologetic.wordpress.com/2007/11/26/limits-of-topological-spaces/
# The Unapologetic Mathematician ## Limits of Topological Spaces We’ve defined topological spaces and continuous maps between them. Together these give us a category $\mathbf{Top}$. We’d like to understand a few of our favorite categorical constructions as they work in this context. First off, the empty set $\varnothing$ has a unique topology, since it only has the one subset at all. Given any other space $X$ (we’ll omit explicit mention of its topology) there is a unique function $\varnothing\rightarrow X$, and it is continuous since the preimage of any subset of $X$ is empty. Thus $\varnothing$ is the initial object in $\mathbf{Top}$. On the other side, any singleton set $\{*\}$ also has a unique topology, since the only subsets are the whole set and the empty set, which must both be open. Given any other space $X$ there is a unique function $X\rightarrow\{*\}$, and it is continuous because the preimage of the empty set is empty and the preimage of the single point is the whole of $X$, both of which are open in $X$. Thus $\{*\}$ is a terminal object in $\mathbf{Top}$. Now for products. Given a family $X_\alpha$ of topological spaces indexed by $\alpha\in A$, we can form the product set $\prod\limits_{\alpha\in A}X_\alpha$, which comes with projection functions $\pi_\beta:\prod\limits_{\alpha\in A}X_\alpha\rightarrow X_\beta$ and satisfies a universal property. We want to use this same set and these same functions to describe the product in $\mathbf{Top}$, so we must choose our topology on the product set so that these projections will be continuous. Given an open set $U\subseteq X_\beta$, then, its preimage $\pi_\beta^{-1}(U)$ must be open in $\prod\limits_{\alpha\in A}X_\alpha$. Let’s take these preimages to be a subbase and consider the topology they generate. If $X$ is any other space with a family of continuous maps $f_\alpha:X\rightarrow X_\alpha$, then the universal property in $\mathbf{Set}$ gives us a unique function $f:X\rightarrow\prod\limits_{\alpha\in A}X_\alpha$. But will it be a continuous map? To check this, remember that we only need to verify it on a subbase for the topology on the product space, and we have one ready to work with. Each set in the subbase is the preimage $\pi_\beta^{-1}(U)$ of an open set in some $X_\beta$, and then its preimage under $f$ is $f^{-1}(\pi_\beta^{-1}(U))=(\pi_\beta\circ f)^{-1}(U)=f_\beta^{-1}(U)$, which is open by the assumption that each $f_\beta$ is continuous. And so the product set equipped with the product topology described above is the categorical product of the topological spaces $X_\alpha$. What about coproducts? Let’s again start with the coproduct in $\mathbf{Set}$, which is the disjoint union $\biguplus\limits_{\alpha\in A}X_\alpha$, and which comes with canonical injections $\iota_\beta:X_\beta\rightarrow\biguplus\limits_{\alpha\in A}X_\alpha$. This time let’s jump right into the universal property, which says that given another space $X$ and functions $f_\alpha:X_\alpha\rightarrow X$, we have a unique function $f:\biguplus\limits_{\alpha\in A}X_\alpha\rightarrow X$. Now we need any function we get like this to be continuous. The preimage of an open set $U\subseteq X$ will be the union of the preimages of each of the $f_\alpha$, sitting inside the disjoint union. 
By choosing $X$, the $f_\alpha$, and $U$ judiciously, we can get the preimage $f_\alpha^{-1}(U)$ to be any open set we want in $X_\alpha$, so the open sets in the disjoint union should consist precisely of those subsets $V$ whose preimage $\iota_\alpha^{-1}(V)\subseteq X_\alpha$ is open for each $\alpha\in A$. It’s easy to verify that this collection is actually a topology, which then gives us the categorical coproduct in $\mathbf{Top}$. If we start with a topological space $X$ and take any subset $S\subseteq X$ then we can ask for the coarsest topology on $S$ that makes the inclusion map $i:S\rightarrow X$ continuous, sort of like how we defined the product topology above. The open sets in $S$ will be any set of the form $S\cap U$ for an open subset $U\subseteq X$. Then given another space $Y$, a function $f:Y\rightarrow S$ will be continuous if and only if $i\circ f:Y\rightarrow X$ is continuous. Indeed, the preimage $(i\circ f)^{-1}(U)=f^{-1}(S\cap U)$ clearly shows this equivalence. We call this the subspace topology on $S$. In particular, if we have two continuous maps $f:X\rightarrow Y$ and $g:X\rightarrow Y$, then we can consider the subspace $E\subseteq X$ consisting of those points $x\in X$ satisfying $f(x)=g(x)$. Given any other space $Z$ and a continuous map $h:Z\rightarrow X$ such that $f\circ h=g\circ h$, clearly $h$ sends all of $Z$ into the set $E$; the function $h$ factors as $e\circ h'$, where $e:E\rightarrow X$ is the inclusion map. Then $h'$ must be continuous because $h$ is, and so the subspace $E$ is the equalizer of the maps $f$ and $g$. Dually, given a topological space $X$ and an equivalence relation $\sim$ on the underlying set of $X$ we can define the quotient space $X/\sim$ to be the set of equivalence classes of points of $X$. This comes with a canonical function $p:X\rightarrow X/\sim$, which we want to be continuous. Further, we know that if $g:X\rightarrow Y$ is any function for which $x_1\sim x_2$ implies $g(x_1)=g(x_2)$, then $g$ factors as $g=g'\circ p$ for some function $g':X/\sim\rightarrow Y$. We want to define the topology on the quotient set so that $g$ is continuous if and only if $g'$ is. Given an open set $U\in Y$, its preimage $g'^{-1}(U)$ is the set of equivalence classes that get sent into $U$, while its preimage $g^{-1}(U)$ is the set of all points that get sent to $U$. And so we say a subset $V$ of the quotient space $X/\sim$ is open if and only if its preimage — the union of the equivalence classes in $V$ is open in $X$. In particular, if we have two maps $f:Y\rightarrow X$ and $g:Y\rightarrow X$ we get an equivalence relation on $X$ by defining $x_1\sim x_2$ if there is a $y\in Y$ so that $f(y)=x_1$ and $g(y)=x_2$. If we walk through the above description of the quotient space we find that this construction gives us the coequalizer of $f$ and $g$. And now, the existence theorem for limits tells us that all limits and colimits exist in $\mathbf{Top}$. That is, the category of topological spaces is both complete and cocomplete. As a particularly useful example, let’s look at an example of a pushout. If we have two topological spaces $U$ and $V$ and a third space $A$ with maps $A\rightarrow U$ and $A\rightarrow V$ making $A$ into a subspace of both $U$ and $V$, then we can construct the pushout of $U$ and $V$ over $A$. The general rule is to first construct the coproduct of $U$ and $V$, and then pass to an appropriate coequalizer. 
That is, we take the disjoint union $U\uplus V$ and then identify the points in the copy of $A$ sitting inside $U$ with those in the copy of $A$ sitting inside $V$. That is, we get the union of $U$ and $V$, “glued” along $A$. ### Like this: Posted by John Armstrong | Category theory, Topology ## 25 Comments » 1. In particular, if we have two maps f:Y\rightarrow X and g:Y\rightarrow X we get an equivalence relation on X by defining x_1~x_2 if there is a y\in Y so that f(y)=x_1 and g(y)=x_2. No you don’t, but if you identify X under the smallest equivalence relation that includes that relation you just defined, then you get the coequalizer you want. Comment by Jeremy Henty | November 26, 2007 | Reply 2. [...] a lot about the category of topological spaces and continuous maps between them. In particular we’ve seen that it’s complete and cocomplete — it has all limits and colimits. But we’ve [...] Pingback by | November 27, 2007 | Reply 3. Jeremy, I thought I’d discussed completing a relation to an equivalence relation, hadn’t I? Okay, maybe I need to go back and cover that. Comment by | November 27, 2007 | Reply 4. John, sorry if my comment was overly blunt. I read “… we get an equivalence relation on X by defining x_1~x_2 if P” as saying that x_1~x_2 is equivalent by definition to P . You may reasonably declare that I misread you since you wrote “if”, not “if and only if” but I think it would be harder to misread if you made it explicit you were completing a relation to an equivalence relation. After all, you were quite explicit whenever you completed a subbase to a topology so I was expecting you to be similarly explicit when constructing the relation. I concede that it’s a judgement call, maybe I was too nitpicky. Comment by Jeremy Henty | November 28, 2007 | Reply 5. On a separate point, you have a slight LaTeX glitch. You need to write \~ to get a ~ because LaTeX interprets ~ as “unbreakable whitespace”. Comment by Jeremy Henty | November 28, 2007 | Reply 6. [...] sets are open subsets of , but they may not be open as subsets of . But by the definition of the subspace topology, each one must be the intersection of with an open subset of . Let’s just say that each is [...] Pingback by | January 15, 2008 | Reply 7. [...] One of the biggest results in point-set topology is Tychonoff’s Theorem: the fact that the product of any family of compact spaces is again compact. Unsurprisingly, the really tough bit comes in [...] Pingback by | January 17, 2008 | Reply 8. Dear sir: I am student,could you help me please, I need some examples about continuous functions between {regular spaces,normal spaces,competely normal,conncted spaces}i.e let x={a,b,c} and Y={d,e,f,g} such that f-1(f(a))= a,f-1(f(b))={b},f-1(f(c))={c},but . f-1(f(a,b))=/={a,b},I mean some thing like this. And I want the proof the invrese image of compact set is compact. regards Comment by Gazala | July 2, 2008 | Reply 9. I’m really not sure what you’re going for. Where’s the topology in your example? And what does it have to do with any of those classes of spaces? As for the inverse image of a compact set being compact, it’s clearly false. Just take a constant function from the reals to the reals. The single point is compact, but its inverse image is the whole line, which isn’t compact. Comment by | July 2, 2008 | Reply 10. [...] is obtained by taking the usual topology on , taking its product with itself, and then taking the quotient topology) by is the divisor of [...] Pingback by | July 14, 2008 | Reply 11. 
dear sir if :X—->Y is continuous mapping, if A is a subspace of Y then f-1(A) normal in X regards Comment by Gazala | August 4, 2008 | Reply 12. Gazala: again, your setup is clearly false. I’m sure wherever you got the problem from says a lot more, but you don’t seem to understand enough to know what’s part of the statement and what’s not. Either way, I’m not here to do your homework. Comment by | August 4, 2008 | Reply 13. Though ubiquitous, it’s still a remarkable phenomenon: isn’t it odd how so many flailing students apparently assume the Internet is just bursting with people keen to do others’ tedious homework? Comment by Sridhar Ramesh | August 4, 2008 | Reply 14. Well, I’m willing to help, but you at least have to pose a problem that makes some sort of sense. As it stands, this is not a problem that any professor would have posed. Comment by | August 4, 2008 | Reply 15. Of course. What stands out to me is that, often as not, what’s being asked for seems to transcend “Could someone help me understand this?” into “Could someone do this for me?”. But then, I guess that’s always the line one has to watch when giving help, whether asked for or not. Comment by Sridhar Ramesh | August 4, 2008 | Reply 16. [...] know about products of topological spaces. We can take products of metric spaces, too, and one method comes down to us all the way from [...] Pingback by | August 19, 2008 | Reply 17. [...] one circle with a marked point for each natural number and quotient by an equivalence relation declaring that all those marked points are really the same point. And [...] Pingback by | September 12, 2008 | Reply 18. [...] line and “wrapping” it around itself periodically. I haven’t really mentioned the topologies, but the first approach inherits the subspace topology from the topology on the complex numbers, [...] Pingback by | May 27, 2009 | Reply 19. [...] metric spaces is that we get the same topology as if we’d forgotten the metric and taken the product of topological spaces. This will actually be useful to us, in a way, so I’d like to explain it [...] Pingback by | September 15, 2009 | Reply 20. [...] of vectors so that , and for each such define , so . As a slice of the open set in the product topology on , the set is open in . Further, is continuously differentiable on since is continuously [...] Pingback by | November 20, 2009 | Reply 21. [...] such a shape in -dimensional space is the product of closed [...] Pingback by | December 1, 2009 | Reply 22. [...] space in a natural way, and if this constitutes a subobject in the category. Unfortunately, unlike we saw with topological spaces, it’s not always possible to do this with measurable spaces. But [...] Pingback by | April 28, 2010 | Reply 23. [...] Topological Vector Spaces, Normed Vector Spaces, and Banach Spaces Before we move on, we want to define some structures that blend algebraic and topological notions. These are all based on vector spaces. And, particularly, we care about infinite-dimensional vector spaces. Finite-dimensional vector spaces are actually pretty simple, topologically. For pretty much all purposes you have a topology on your base field , and the vector space (which is isomorphic to for some ) will get the product topology. [...] Pingback by | May 12, 2010 | Reply 24. [...] – and -dimensional smooth manifolds, respectively, then we can come up with an atlas that makes the product space into an -dimensional smooth manifold, and that it satisfies the conditions to be a product object [...] 
Pingback by | March 7, 2011 | Reply 25. [...] we define an "embedding" to be an immersion where the image — endowed with the subspace topology — is homeomorphic to itself by . This is closer to the geometrically intuitive notion of a [...] Pingback by | April 18, 2011 | Reply
http://physics.stackexchange.com/questions/16227/what-is-the-kinematics-of-a-particle-with-complex-mass
# What is the kinematics of a particle with complex mass?

• Particles with real mass have time-like kinematics ($ds^2 > 0$).
• Particles with zero mass have light-like kinematics ($ds^2 = 0$).
• Particles with imaginary mass have space-like kinematics ($ds^2 < 0$) (tachyons).

So the question is pretty simple: what would be the kinematics of a particle whose mass has both non-zero real and imaginary parts? -

Maybe such a particle is decaying or being born? – Vladimir Kalitvianski Oct 26 '11 at 18:54

This is something people doing PT quantum mechanics study. Complex classical mechanics is a field which is about 5 years old. – Ron Maimon Oct 30 '11 at 19:57

## 2 Answers

I think the question has no meaningful answer, at least in our universe. If you look at $$E^2 - p^2 = m^2$$ then if $m$ is complex with non-zero real and imaginary components, then $m^2$ has a non-zero imaginary part, and therefore either $E$ or $p$ (or both) must also be complex. I don't think there is any meaningful description of the kinematics of a particle with complex energy or momentum. -

Thanks for the answer. Yeah, I've thought this as well, since in the Lorentz transform expression, fixing the $\beta=\frac{v}{c}$ factor and $E$ to be real implies that $m$ must be either real or imaginary; but since in twistor geometry one might want to study complexified Poincaré geometries (where the above assumptions about $\beta$ and $E$ being real do not necessarily hold anymore), I wondered if in a twistor description a complex mass would have a meaningful kinematics. – lurscher Oct 31 '11 at 3:41

In AWT (dense aether model) all particles have complex mass terms due to quantum fluctuations. In the case of photons and neutrinos the complex mass becomes pronounced. Owing to the high density of atomic nuclei, the mesons have complex mass too. These particles make tachyonic "jumps" in space-time and undergo quantum decoherence and oscillations. -
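As a quick numeric sanity check of the algebra in the first answer (my own illustration, not part of the thread; the numbers are arbitrary), pick a complex mass and a real momentum and observe that the energy necessarily comes out complex:

```python
import cmath

# Units with c = 1; an arbitrary complex mass and a real momentum.
m = 1.0 + 0.5j
p = 2.0

E = cmath.sqrt(p ** 2 + m ** 2)   # from E^2 - p^2 = m^2
print(E)                          # non-zero imaginary part
print(E ** 2 - p ** 2 - m ** 2)   # ~0, confirming the mass-shell relation
```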
http://docs.sympy.org/dev/modules/physics/mechanics/index.html
# Classical Mechanics

Authors: Gilbert Gede, Luke Peterson, Angadh Nanjangud

Abstract: In this documentation many components of the physics/mechanics module will be discussed. mechanics has been written to allow for creation of symbolic equations of motion for complicated multibody systems.

## Mechanics

In physics, mechanics describes conditions of rest (statics) or motion (dynamics). There are a few common steps to all mechanics problems. First, an idealized representation of a system is described. Next, we use physical laws to generate equations that define the system's behavior. Then, we solve these equations, sometimes analytically but usually numerically. Finally, we extract information from these equations and solutions.

The current scope of the module is multi-body dynamics: the motion of systems of multiple particles and/or rigid bodies. For example, this module could be used to understand the motion of a double pendulum, planets, robotic manipulators, bicycles, and any other system of rigid bodies that may fascinate us. Often, the objective in multi-body dynamics is to obtain the trajectory of a system of rigid bodies through time. The challenge for this task is to first formulate the equations of motion of the system. Once they are formulated, they must be solved, that is, integrated forward in time. When digital computers came around, solving became the easy part of the problem. Now, we can tackle more complicated problems, which leaves the challenge of formulating the equations.

The term "equations of motion" is used to describe the application of Newton's second law to multi-body systems. The form of the equations of motion depends on the method used to generate them. This package implements two of these methods: Kane's method and Lagrange's method. This module facilitates the formulation of equations of motion, which can then be solved (integrated) using generic ordinary differential equation (ODE) solvers.

The approach to a particular class of dynamics problems, that of forward dynamics, has the following steps:

1. describing the system's geometry and configuration,
2. specifying the way the system can move, including constraints on its motion,
3. describing the external forces and moments on the system,
4. combining the above information according to Newton's second law ($$\mathbf{F}=m\mathbf{a}$$), and
5. organizing the resulting equations so that they can be integrated to obtain the system's trajectory through time.

Together with the rest of SymPy, this module performs steps 4 and 5, provided that the user can perform 1 through 3 for the module. That is to say, the user must provide a complete representation of the free body diagrams that themselves represent the system, with which this code can provide equations of motion in a form amenable to numerical integration. Step 5 above amounts to arduous algebra for even fairly simple multi-body systems. Thus, it is desirable to use a symbolic math package, such as Sympy, to perform this step. It is for this reason that this module is a part of Sympy. Step 4 amounts to this specific module, sympy.physics.mechanics.

## Mechanics API
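The API listing that belongs under this heading did not survive the page extraction, so as a stand-in, here is a minimal usage sketch of the module: a single particle on a linear spring, with its equation of motion generated by Kane's method. This is my own example rather than anything from the official docs, and the argument order of kanes_equations (bodies first, then loads) follows recent SymPy releases; older versions used the reverse order.

```python
from sympy import symbols
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame, Point,
                                     Particle, KanesMethod)

# Generalized coordinate q(t), generalized speed u(t), and parameters.
q, u = dynamicsymbols('q u')
qd = dynamicsymbols('q', 1)            # dq/dt
m, k = symbols('m k', positive=True)

N = ReferenceFrame('N')                # inertial frame
P = Point('P')
P.set_vel(N, u * N.x)                  # the particle moves along N.x with speed u

block = Particle('block', P, m)
spring_force = (P, -k * q * N.x)       # linear restoring force applied at P

# Kane's method: kinematic equation dq/dt = u, one body, one load.
kane = KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=[qd - u])
fr, frstar = kane.kanes_equations([block], [spring_force])

# fr + frstar == 0 encodes the expected equation of motion m*du/dt + k*q = 0.
print(fr + frstar)
```

The resulting symbolic equations can then be handed to a generic ODE solver, which is exactly the division of labor described in steps 4 and 5 above.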
http://mathoverflow.net/questions/91938/perverse-vs-real-formality/92386
## Perverse vs real formality?

Let $\cal A$ be an abelian category, say linear over a field, with enough injectives, and $\cal P$ be the heart of a t-structure on the bounded derived category $D^b(\cal A)$. Assume that $\cal P$ also has enough injectives. Suppose that the realization functor $$real:D^b(\cal P)\rightarrow D^b(\cal A)$$ is an equivalence. Now given two objects corresponding through this equivalence, are their Ext-algebras $A_\infty$-quasi-isomorphic?

Edit: If I understand it correctly, the realization functor is constructed in BBD as follows:

1. They take the homotopy category of injective complexes of objects in $\cal A$, equipped with descending filtration, whose filtration steps lie in $\cal P$. This category is called $DF_{bete}$.
2. Using the boundary map of the triangles $gr^{i+1}F\rightarrow F^{i-1}/F^{i+1} \rightarrow gr^{i} F$ they construct a functor to the category of complexes $DF_{bete}\rightarrow C^b(\cal P)$ and show that it is an equivalence.
3. Forgetting the filtration on $DF_{bete}$ translates to a functor $C^b(\cal P)\rightarrow D^b(\cal A)$. The derived functor of this is the realization functor.

What's the realization functor? – Fernando Muro Mar 22 2012 at 20:13

Well, in the above situation there is a canonical functor real, extending the inclusion of C in D^b(A). This is not obvious, and wrong for an arbitrary triangulated category instead of D^b(A). You can read about it for example in Beilinson, Bernstein, Deligne "Faisceaux pervers" Asterisque 100. – Jan Weidner Mar 23 2012 at 8:05

@Jan I see. I had guessed your answer. There's no 'canonical' functor extending the inclusion of the heart unless you enrich $\mathcal{D}^{b}(\mathcal{A})$ with extra structure. The problem is canonicity. Even more, the Ext-algebra of an object in a triangulated category is just a plain algebra. If you want to equip it with a possibly non-trivial A-infinity structure you need more structure again. The triangulated structure cannot remember more than a little bit of the triple product $m_3$. Once you put extra structure so that everything is well defined, the answer may be 'yes'. – Fernando Muro Mar 23 2012 at 8:52

I am aware that the definitions of (an isomorphism class of) A-infinity structure on the Ext algebras and the realization functor rely on the fact that we don't have arbitrary triangulated categories here but derived categories. Yet I would say everything can be constructed "without choices" from the data of the abelian category A and the t-structure, so my question does make sense. Am I wrong? – Jan Weidner Mar 23 2012 at 11:28

'The' realization functor is actually a choice. If you wish to choose it invoking tilting theory, then it's born as a derived functor, compare Rickard, Jeremy "Derived equivalences as derived functors" J. London Math. Soc. (2) 43 (1991), no. 1, 37–48, "Morita theory for derived categories" J. London Math. Soc. (2) 39 (1989), no. 3, 436–456, and Dugger, Daniel; Shipley, Brooke "K-theory and derived equivalences" Duke Math. J. 124 (2004), no. 3, 587–617. – Fernando Muro Mar 23 2012 at 14:56

## 1 Answer

OK, I still doubt that if you divide out chain homotopies in (1) in the category of filtered complexes, you can get the category of complexes (and not the homotopy category) as a target in (2). I also doubt that you get an equivalence in this way unless you invert something in source and target.
Sorry that I'm too lazy to look now at BBD. Suppose that for any bounded complex $X$ in $\mathcal{P}$ you can construct a filtered complex of injectives $F$ in $\mathcal{A}$ such that each $F^{n}/F^{n+1}$ is an injective object in $\mathcal{P}$ and $gr^{\ast}(F)=F^{\ast}/F^{\ast+1}$, equipped with the differential $\delta$ obtained as in (2), is a bounded below complex quasi-isomorphic to $X$ in $C(\mathcal{P})$. You can probably check in BBD that all this is possible (at least under reasonable assumptions).

Consider the following zig-zag of DG-algebra morphisms:

$$\operatorname{End}_{C(\mathcal{P})}(gr^{\ast}(F),\delta)\leftarrow \operatorname{End}_{\mbox{filtered}}(F)\rightarrow \operatorname{End}_{C(\mathcal{A})}(F)$$

By definition, the $\operatorname{Ext}$ algebra of $X$ is the cohomology of the DG-algebra on the left. According to (3), the $\operatorname{Ext}$ algebra of $real(X)$ is the cohomology of the DG-algebra on the right. Moreover, the $A$-infinity structures on these $\operatorname{Ext}$ algebras are obtained from these DG-algebras by the standard transfer procedure, choosing bases of cohomology vector spaces and representing cocycles for the elements in these bases. By definition, the transfer maps are $A$-infinity quasi-isomorphisms between the $\operatorname{Ext}$ $A$-infinity algebras and these DG-algebras.

Your claim in (2) should instead say, I think, that the functor you sketch induces an equivalence after inverting quasi-isomorphisms on the right, and something appropriate on the left (filtered quasi-isos?). Therefore $\leftarrow$ above is a quasi-isomorphism. Moreover, since you assume that $real$ is fully faithful, $\rightarrow$ above is also a quasi-isomorphism. Now you can get your desired $A$-infinity quasi-isomorphism between the $\operatorname{Ext}$ $A$-infinity algebras by inverting and composing. -

Thank you for your answer and sorry that it took me so long to reply. I don't however understand how to construct the map $\leftarrow$. Given a filtered endomorphism which is not closed or not of degree $0$, I don't see how to construct something on the left hand side. – Jan Weidner May 7 2012 at 11:53
http://math.stackexchange.com/questions/260404/finding-recurrence-and-an-algorithm-to-represent-it
# Finding recurrence and an algorithm to represent it

You find yourself in a country with integer coin denominations $c_1 < c_2 < ... < c_r$, where $c_1 = 1$. Unfortunately, the greedy algorithm is not guaranteed to find the optimal way to make change. Let $C(i)$ be the minimum number of coins needed to make change for $i$ cents.

(a) Find a recurrence for $C$.

(b) Write an algorithm for computing an array OPT$[0...n]$ where OPT$[i] = C(i)$.

I'm not really sure how to go about doing this. What does the information about the greedy algorithm tell us about the behavior of the problem? How can we use the information given to write a recurrence for $C(i)$? -

The information about the greedy algorithm basically just tells you how not to approach the problem. – Brian M. Scott Dec 17 '12 at 2:44

## 1 Answer

It is a typical problem in Dynamic Programming. You can check Dynamic Programming Solution to the Coin Changing Problem. -
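To make the hint concrete, here is one standard way to set the problem up; this sketch is mine, not part of the linked answer. The recurrence is $C(0)=0$ and $C(i)=1+\min\{C(i-c_j) : c_j\le i\}$, which is well defined because $c_1=1$, and filling the table bottom-up gives OPT$[0...n]$:

```python
def make_change(n, denoms):
    """opt[i] = C(i): fewest coins needed to make i cents.

    Recurrence: C(0) = 0 and C(i) = 1 + min over c in denoms with c <= i
    of C(i - c).  Since 1 is a denomination, every amount is reachable.
    Runs in O(n * r) time for r denominations.
    """
    INF = float('inf')
    opt = [0] + [INF] * n
    for i in range(1, n + 1):
        for c in denoms:
            if c <= i and opt[i - c] + 1 < opt[i]:
                opt[i] = opt[i - c] + 1
    return opt

print(make_change(6, [1, 3, 4]))  # [0, 1, 2, 1, 1, 2, 2]; greedy would use 3 coins for 6
```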
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G08/g08cdc.html
# NAG Library Function Document: nag_2_sample_ks_test (g08cdc)

## 1  Purpose

nag_2_sample_ks_test (g08cdc) performs the two sample Kolmogorov–Smirnov distribution test.

## 2  Specification

#include <nag.h>
#include <nagg08.h>

void nag_2_sample_ks_test (Integer n1, const double x[], Integer n2, const double y[], Nag_TestStatistics dtype, double *d, double *z, double *p, NagError *fail)

## 3  Description

The data consist of two independent samples, one of size $n_1$, denoted by $x_1,x_2,\dots,x_{n_1}$, and the other of size $n_2$, denoted by $y_1,y_2,\dots,y_{n_2}$. Let $F(x)$ and $G(x)$ represent their respective, unknown, distribution functions. Also let $S_1(x)$ and $S_2(x)$ denote the values of the sample cumulative distribution functions at the point $x$ for the two samples respectively.

The Kolmogorov–Smirnov test provides a test of the null hypothesis $H_0$: $F(x)=G(x)$ against one of the following alternative hypotheses:

(i) $H_1$: $F(x)\ne G(x)$.

(ii) $H_2$: $F(x)>G(x)$. This alternative hypothesis is sometimes stated as, 'The $x$'s tend to be smaller than the $y$'s', i.e., it would be demonstrated in practical terms if the values of $S_1(x)$ tended to exceed the corresponding values of $S_2(x)$.

(iii) $H_3$: $F(x)<G(x)$. This alternative hypothesis is sometimes stated as, 'The $x$'s tend to be larger than the $y$'s', i.e., it would be demonstrated in practical terms if the values of $S_2(x)$ tended to exceed the corresponding values of $S_1(x)$.

One of the following test statistics is computed depending on the particular alternative hypothesis specified (see the description of the argument dtype in Section 5).

For the alternative hypothesis $H_1$:
• $D_{n_1,n_2}$ – the largest absolute deviation between the two sample cumulative distribution functions.

For the alternative hypothesis $H_2$:
• $D_{n_1,n_2}^{+}$ – the largest positive deviation between the sample cumulative distribution function of the first sample, $S_1(x)$, and the sample cumulative distribution function of the second sample, $S_2(x)$. Formally $D_{n_1,n_2}^{+}=\max\{S_1(x)-S_2(x),0\}$.

For the alternative hypothesis $H_3$:
• $D_{n_1,n_2}^{-}$ – the largest positive deviation between the sample cumulative distribution function of the second sample, $S_2(x)$, and the sample cumulative distribution function of the first sample, $S_1(x)$. Formally $D_{n_1,n_2}^{-}=\max\{S_2(x)-S_1(x),0\}$.

nag_2_sample_ks_test (g08cdc) also returns the standardized statistic $Z=\sqrt{\frac{n_1+n_2}{n_1 n_2}}\times D$, where $D$ may be $D_{n_1,n_2}$, $D_{n_1,n_2}^{+}$ or $D_{n_1,n_2}^{-}$ depending on the choice of the alternative hypothesis. The distribution of this statistic converges asymptotically to a distribution given by Smirnov as $n_1$ and $n_2$ increase (see Feller (1948), Kendall and Stuart (1973), Kim and Jenrich (1973), Smirnov (1933) or Smirnov (1948)).
The probability, under the null hypothesis, of obtaining a value of the test statistic as extreme as that observed, is computed. If $\max(n_1,n_2)\le 2500$ and $n_1 n_2\le 10000$ then an exact method given by Kim and Jenrich is used. Otherwise $p$ is computed using the approximations suggested by Kim and Jenrich (see Kim and Jenrich (1973)). Note that the method used is only exact for continuous theoretical distributions. This method computes the two-sided probability. The one-sided probabilities are estimated by halving the two-sided probability. This is a good estimate for small $p$, that is $p\le 0.10$, but it becomes very poor for larger $p$.

## 4  References

Conover W J (1980) Practical Nonparametric Statistics Wiley
Feller W (1948) On the Kolmogorov–Smirnov limit theorems for empirical distributions Ann. Math. Statist. 19 179–181
Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin
Kim P J and Jenrich R I (1973) Tables of exact sampling distribution of the two sample Kolmogorov–Smirnov criterion $D_{mn}$ $(m<n)$ Selected Tables in Mathematical Statistics 1 80–129 American Mathematical Society
Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill
Smirnov N (1933) Estimate of deviation between empirical distribution functions in two independent samples Bull. Moscow Univ. 2(2) 3–16
Smirnov N (1948) Table for estimating the goodness of fit of empirical distributions Ann. Math. Statist. 19 279–281

## 5  Arguments

1: n1 – Integer (Input)
On entry: the number of observations in the first sample, $n_1$.
Constraint: $\mathbf{n1}\ge 1$.

2: x[n1] – const double (Input)
On entry: the observations from the first sample, $x_1,x_2,\dots,x_{n_1}$.

3: n2 – Integer (Input)
On entry: the number of observations in the second sample, $n_2$.
Constraint: $\mathbf{n2}\ge 1$.

4: y[n2] – const double (Input)
On entry: the observations from the second sample, $y_1,y_2,\dots,y_{n_2}$.

5: dtype – Nag_TestStatistics (Input)
On entry: the statistic to be computed, i.e., the choice of alternative hypothesis.
dtype = Nag_TestStatisticsDAbs: computes $D_{n_1 n_2}$, to test against $H_1$.
dtype = Nag_TestStatisticsDPos: computes $D_{n_1 n_2}^{+}$, to test against $H_2$.
dtype = Nag_TestStatisticsDNeg: computes $D_{n_1 n_2}^{-}$, to test against $H_3$.
Constraint: dtype = Nag_TestStatisticsDAbs, Nag_TestStatisticsDPos or Nag_TestStatisticsDNeg.

6: d – double * (Output)
On exit: the Kolmogorov–Smirnov test statistic ($D_{n_1 n_2}$, $D_{n_1 n_2}^{+}$ or $D_{n_1 n_2}^{-}$ according to the value of dtype).

7: z – double * (Output)
On exit: a standardized value, $Z$, of the test statistic, $D$, without any correction for continuity.

8: p – double * (Output)
On exit: the tail probability associated with the observed value of $D$, where $D$ may be $D_{n_1,n_2}$, $D_{n_1,n_2}^{+}$ or $D_{n_1,n_2}^{-}$ depending on the value of dtype (see Section 3).

9: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.

NE_BAD_PARAM
On entry, argument dtype had an illegal value.
NE_G08CD_CONV
The iterative procedure used in the approximation of the probability for large n1 and n2 did not converge. For the two-sided test, $p=1.0$ is returned. For the one-sided test, $p=0.5$ is returned.

NE_INT_ARG_LT
On entry, n1 must not be less than 1: n1 = 〈value〉.
On entry, n2 must not be less than 1: n2 = 〈value〉.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

## 7  Accuracy

The large sample distributions used as approximations to the exact distribution should have a relative error of less than 5% for most cases.

## 8  Further Comments

The time taken by nag_2_sample_ks_test (g08cdc) increases with $n_1$ and $n_2$, until $n_1 n_2>10000$ or $\max(n_1,n_2)\ge 2500$. At this point one of the approximations is used and the time decreases significantly. The time then increases again modestly with $n_1$ and $n_2$.

## 9  Example

The following example computes the two-sided Kolmogorov–Smirnov test statistic for two independent samples of size 100 and 50 respectively. The first sample is from a uniform distribution $U(0,2)$. The second sample is from a uniform distribution $U(0.25,2.25)$. The test statistic, $D_{n_1,n_2}$, the standardized test statistic, $Z$, and the tail probability, $p$, are computed and printed.

### 9.1  Program Text
Program Text (g08cdce.c)

### 9.2  Program Data
Program Data (g08cdce.d)

### 9.3  Program Results
Program Results (g08cdce.r)
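The NAG example program itself is only linked above, not reproduced. As a rough cross-check of what the routine computes, here is a hedged sketch of the same two-sided test in Python using scipy.stats.ks_2samp; the sample sizes and uniform distributions mirror Section 9, but this is an analogous computation, not the NAG C program.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=100)    # first sample, U(0, 2)
y = rng.uniform(0.25, 2.25, size=50)   # second sample, U(0.25, 2.25)

# Two-sided Kolmogorov-Smirnov test: D is the largest absolute deviation
# between the two empirical distribution functions.
res = stats.ks_2samp(x, y, alternative="two-sided")
d = res.statistic
z = np.sqrt((len(x) + len(y)) / (len(x) * len(y))) * d  # standardization described above
print(f"D = {d:.4f}, Z = {z:.4f}, p = {res.pvalue:.4f}")
```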
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 93, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7426406741142273, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/190531/what-is-the-use-of-moments-in-statistics/190598
# What is the use of moments in statistics Can anyone give a simple explanation of the use of moments in statistics? Why do we need moments? What can we learn from them? If possible, please use few equations. - ## 4 Answers The central question in statistics is that given a set of data, we would like to recover the random process that produced the data (that is, the probability law of the population). This question is extremely difficult in general and in the absence of strong assumptions on the underlying random process you really can't get very far (those who work in nonparametric statistics may disagree with me on this). A natural way to approach this problem would be to look for simple objects that do identify the population distribution if we do make some reasonable assumptions. The question then becomes what type of objects we should search for. The best arguments I know about why we should look at the Laplace (or Fourier; I'll show you what this is in a second if you don't know) transform of the probability measure are a bit complicated, but naively we can draw a good heuristic from elementary calculus: given all the derivatives of an analytic function evaluated at zero we know everything there is to know about the function through its Taylor series. Suppose for a moment that the function $f(t) = E[e^{tX}]$ exists and is well behaved in a neighborhood of zero. It is a theorem that this function (when it exists and behaves nicely) uniquely identifies the probability law of the random variable $X$. If we do a Taylor expansion of what is inside the expectation, this becomes a power series in the moments of $X$: $f(t) = \sum_{k=0}^\infty \frac{1}{k!} t^k E[X^k]$ and so to completely identify the law of $X$ we just need to know the population moments. In effect we reduce the question above "identify the population law of $X$" to the question "identify the population moments of $X$". It turns out that (from other results in statistics) population moments are extremely well estimated by sample moments when they exist, and you can even get a good feel on how far off from the true moments it is possible to be under some often realistic assumptions. Of course we can never get infinitely many moments with any degree of accuracy from a sample, so now we would really want to do another round of approximations, but that is the general idea. For nice random variables, moments are sufficient to estimate the law. I should mention that what I have said above is all heuristic and doesn't work in most interesting modern examples. In truth, I think the right answer to your question is that we don't need moments because for many relevant applications (particularly in economics) it seems unlikely that all moments even exist. The thing is that when you get rid of moment assumptions you lose an enormous amount of information and power: without at least two, the Central Limit Theorem fails and with it go most of the elementary statistical tests. If you do not want to work with moments, there is a whole theory of nonparametric statistics that makes no assumptions at all on the random process. - +1 for the interesting, concise answer! – Shaktal Sep 3 '12 at 18:47 The first moment is the mean, like the average. - We can know the mean, the standard deviation, the skewness and kurtosis of a distribution from moments. - The normal distribution is determined by the first two moments. Other families of distributions can be determined by their moments.
One method for estimating parameters is to equate moments (called the method of moments). -
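The method of moments just mentioned can be made concrete with a small example. The sketch below is my own illustration, not from the thread: it fits a Gamma(k, θ) distribution by equating the first two sample moments with the population mean kθ and variance kθ².

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=3.0, size=10_000)  # the "population" law we pretend not to know

# Sample moments
m1 = data.mean()     # estimates E[X] = k * theta
m2 = data.var()      # estimates Var[X] = k * theta**2

# Equate sample moments to population moments and solve for the parameters
theta_hat = m2 / m1  # theta = Var / E
k_hat = m1 / theta_hat   # k = E / theta = E**2 / Var

print(f"method-of-moments estimates: k = {k_hat:.2f}, theta = {theta_hat:.2f}")
# With a sample this large the estimates land close to the true values k = 2, theta = 3.
```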
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946147084236145, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/112752/prove-binomn0f-0-binomn1f-1-binomn2f-2-cdots-binomnnf-n-f
# Prove: $\binom{n}{0}F_0+\binom{n}{1}F_1+\binom{n}{2}F_2+\cdots+\binom{n}{n}F_n=F_{2n}$ Prove: $\binom{n}{0}F_0+\binom{n}{1}F_1+\binom{n}{2}F_2+\cdots+\binom{n}{n}F_n=F_{2n}$; I was stuck with this question for a while... Help me please!!! Thanks!!! - Induction is sufficient. – mezhang Feb 10 at 13:41 ## 3 Answers Hint: Binet's formula + Binomial formula. Also, $\varphi^2=\varphi+1$ and $\varphi^{-2}=2-\varphi$. $$\sum_{\ell=0}^n \binom{n}{\ell}\frac{\varphi^\ell-(1-\varphi)^\ell}{\sqrt5}=\frac{(1+\varphi)^{n}-(2-\varphi)^n}{\sqrt5}=F_{2n}$$ - A counting argument: The number of ways of climbing $n$ stairs, taking $1$ or $2$ steps at a time, is $F_n$ (try proving it). Now suppose we had to climb $2n$ stairs. Note that we need to take at least $n$ moves. We now consider the position after taking exactly $n$ moves. For each such position, we consider where we are and how many ways we can cover the rest. This we do by considering the number of steps of $2$ we take. If we take $k$ steps of $2$, then we take $n-k$ steps of $1$ for the first $n$ moves. We end up at step $n+k$, thus leaving $n-k$ steps to cover. These $n-k$ steps can be covered in $F_{n-k}$ ways and the number of ways of getting there is the same as the number of ways of choosing $k$ moves of $2$ from $n$, which is $\binom{n}{k}$. Thus as $k$ ranges from $0$ to $n$, we have that the number of ways of covering $2n$ stairs is $$F_{2n} = \sum_{k=0}^{n} \binom{n}{k} F_{n-k}$$ Since $\binom{n}{k} = \binom{n}{n-k}$ we get $$\binom{n}{0}F_0 + \binom{n}{1}F_1 + \dots + \binom{n}{n}F_n = F_{2n}$$ A simple generalization of this argument gives us, for $2n \le m$, $$\binom{n}{0} F_{m-2n} + \binom{n}{1} F_{m-2n+1} + \dots + \binom{n}{n} F_{m-n} = F_m$$ - I accidentally stumbled across this old question while studying some facts about generating functions and could not resist posting this answer. Take the ordinary generating function of the LHS: $$G(x)=\sum_{n\ge0}\sum_{0\le k\le n}\binom{n}{k}F_kx^n$$ Change the order of summation to obtain $$G(x)=\sum_{k\ge0}\sum_{n\ge k}\binom{n}{k}F_kx^n=\sum_{k\ge0}F_k\sum_{n\ge0}\binom{n+k}{k}x^{n+k}=\sum_{k\ge0}F_kx^k\sum_{n\ge0}\binom{n+k}{n}x^n=\sum_{k\ge0}F_kx^k\frac{1}{(1-x)^{1+k}}=\frac{1}{1-x}\sum_{k\ge0}F_k\left(\frac{x}{1-x}\right)^k$$ Now the expression under the sum is just the ordinary generating function $F(z)$ for $F_n$, $$F(z)=\frac{z}{1-z-z^2}$$ where $z=\frac{x}{1-x}$. Substituting we obtain $$G(x)=\frac{x}{1-3x+x^2}$$ The last expression is the generating function for $F_{2n}$, as can be ascertained by calculating $\frac{1}{2}\left[F(x)+F(-x)\right]$ -
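As a quick sanity check to accompany the three proofs, here is a small numerical verification of the identity (my own addition, using the convention $F_0=0$, $F_1=1$):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}
    return n if n < 2 else fib(n - 1) + fib(n - 2)

for n in range(12):
    lhs = sum(comb(n, k) * fib(k) for k in range(n + 1))
    assert lhs == fib(2 * n), (n, lhs, fib(2 * n))
print("sum_{k=0}^{n} C(n,k) F_k = F_{2n} holds for n = 0..11")
```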
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9267271161079407, "perplexity_flag": "head"}
http://mathoverflow.net/questions/26385/when-factors-may-be-cancelled-in-homeomorphic-products
## When factors may be cancelled in homeomorphic products? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is easy to see that if $A\times B$ is homeomorphic to $A\times C$ for topological spaces $A$, $B$, $C$, then one may not conclude that $B$ and $C$ are homeomorphic (for example, take $C=B^2$, $A=B^{\infty}$). The question is: for which $A$ such conclusion is true? I saw long ago a problem that for $A=[0,1]$ it is not true, but could not solve it, and do not know, where to ask. Hence am asking here. The same question in other categories (say, metric spaces instead topological) also seems to have some sense. - 3 Just some references on cancellation in other categories: ams.org/mathscinet-getitem?mr=1319005, ams.org/mathscinet-getitem?mr=1843913, ams.org/mathscinet-getitem?mr=1383621, ams.org/mathscinet-getitem?mr=2507731 – Jonas Meyer May 30 2010 at 3:50 See also mathoverflow.net/questions/26001/… – Victor Protsak Jun 2 2010 at 9:50 ## 5 Answers For $A=[0,1]$, let $B$ be the 2-torus with one hole and $C$ be the 2-disc with two holes. The products $B\times[0,1]$ and $C\times[0,1]$ can be realized in $\mathbb R^3$: the former as a thickening of the torus, the latter in a trivial way. Each of these products is a handlebody bounded by the pretzel surface (the sphere with two handles). It is easy to deform one to the other "by hand". - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. In the category of topological spaces, you can't in general even cancel the two-point discrete space. The only counterexample I know is a bit complicated though: Start with the disjoint union of two copies of the Stone-Cech compactification $\beta N$ of a countably infinite discrete space N, and glue them together along their remainders (i.e., identify each point p in one copy with the corresponding point in the other copy, except that you don't do this identification when p is in N). The resulting space, which I'll call B to match the notation in the question, has the curious property that, if you add one more isolated point to B, you get a space C that is not homeomorphic to B, but if you add a second isolated point then the result is homeomorphic to B. (The part about adding two isolated points is easy; just apply the successor map on N and its continuous extension on $\beta N$ to everything in sight, to make room for the two extra points --- just as in Hilbert's hotel. The part about adding one point is not so easy. I believe it's done (in dual form) in Halmos's "Notes on Boolean Algebras.") Once you have these curious facts about B, it's easy to check that B and C, though not homeomorphic, become homeomorphic when multiplied by a 2-point discrete space; the one extra point in C becomes two points, which can be absorbed into one of the copies of B. - How about Bing's example: The cartesian product of a certain nonmanifold and a line is $E^4$. The `other' factor is the dog-bone decomposition of three-space. - It is easy to see that if $A\times B$ is homeomorphic to $A\times C$ for topological spaces $A$, $B$, $C$, then one may not conclude that $B$ and $C$ are homeomorphic (for example, take $C=B^2$, $A=B^∞$). The question is: for which $A$ such conclusion is true? Witold Rosicki has a lot of results of this sort (usually under some conditions on $B$ and $C$). 
For instance, On decomposition of polyhedra into a Cartesian product of 1-dimensional and 2-dimensional factors On uniqueness of decomposition of 4-polyhedron into Cartesian product of the 2-dimensional factors On uniqueness of Cartesian products of surfaces with boundary (with J. Malešič, D. Repovš, A. Zastrow) There also exist papers of a different flavor on this subject All lens spaces have diffeomorphic squares (S. Kwasik, R. Schultz) Non-cancellation and a related phenomenon for the lens spaces (A. J. Sieradski) As for nice examples, there exist manifolds $M$ such that $M\times I$ is homeomorphic a ball. For instance, Mazur's 4-manifold, as described by Zeeman: Start with $S^1\times I^3$. In the boundary $S^1\times S^2$, choose a knotted $S^1$ homologous to the first factor. Knotted means that $S^1$ is not isotopic to a 1-sphere $S^1\times y$, $y\in S^2$. Form $M^4$ from $S^1\times I^3$ by attaching a handle to $S^1$ (i.e., attach a disk to $S^1$ and then fatten the disk so that its fattened boundary is identified with some chosen tubular neighbourhood of $S^1$ in $S^1\times S^2$). Form the cube $I^4$ by the same process, only omitting the knotting. The knotting ensures that $M^4\not\cong I^4$. But one extra dimension permits unknotting $M^4\times I\cong I^4\times I$ (by just untwisting the handle). Zeeman also notes a parallel construction of Whitehead's example with surfaces $\times I$ (mentioned above by Sergei Ivanov): `Start with $S^0\times I^2$. In the boundary $S^0 \times S^1$, choose three linked $S^0$'s, each homologous to the first factor', etc. A really cool cancellation theorem is about joins of polyhedra, rather than products (H. Morton): If $A*B\cong A*C$, then either $B\cong C$ or else $A\cong pt*A'$, $B\cong pt*X$ and $C\cong S^0*X$ for some polyhedra $A'$ and $X$. - The question's been studied in the category of groups, too. R. Hirshon proved in [1] that finite groups can always be cancelled in direct products. Hirshon mentions some other sufficient conditions for the cancellation theorem to hold. For instance, he says that in the treatise by L. Fuchs there is a proof of the fact that infinite cyclic abelian groups can also be cancelled (provided that either $B$ or $C$ is a commutative group). References [1] R. Hirshon, On Cancellation in Groups, Amer. Math. Monthly. 76 (9) (1969), pp. 1037-1039. - 3 For canceling a finite group, isn't this just Krull–(Remak–)Schmidt theorem? (en.wikipedia.org/wiki/Krull-Schmidt_theorem) By the way, according to MR, in the same paper Hirshon gave an example where infinite cyclic group $\mathbb{Z}$ cannot be canceled. – Victor Protsak May 30 2010 at 5:13 2 The review of Walker, Elbert A. Cancellation in direct sums of groups. Proc. Amer. Math. Soc. 7 (1956), 898--902, ams.org/mathscinet-getitem?mr=81440, written by Kaplansky, gives the history. – Victor Protsak May 30 2010 at 5:26 1 Krull-Schmidt applies when all the groups involved are finite. Hirshon's result is about just the canceled group being finite. He proves more in his later paper "Cancellation of Groups with Maximal Condition", Proc. AMS 24(2), 401--403. – Steve D Jun 5 2010 at 9:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271113276481628, "perplexity_flag": "middle"}
http://dsp.stackexchange.com/questions/8007/what-distance-metric-can-i-use-for-comparing-image-features-like-elongation-and
# What distance metric can I use for comparing image features like elongation and solidity of each image? What distance metric can I use for comparing image features like elongation and solidity of a contour of each image? I want to exclude least squares, and I cannot use a support vector machine because I do not know to which class the images belong. - Have you tried using the first two eigenvectors of the contour matrix (or its covariance matrix) as a measure for elongation? – Junuxx Feb 27 at 9:59 ## 1 Answer If I understood you correctly, each contour is described by a 2-element feature vector $f = [e, s]$, where $e$ is elongation and $s$ is solidity. In that case, you might want to try the Mahalanobis distance, which is defined as follows: $$d(f_1, f_2) = \sqrt{(f_2 - f_1)^{\mathsf{T}}C^{-1}(f_2 - f_1)}$$ where $f_1$ and $f_2$ are the feature vectors that you are comparing, and $C$ is the covariance matrix of your data set. -
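For what it is worth, here is a small sketch of the suggested Mahalanobis comparison in Python (my own illustration; the [elongation, solidity] values are made up). It estimates the covariance matrix from the whole feature set and then compares two contours.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical [elongation, solidity] features for a set of contours
features = np.array([
    [2.1, 0.90],
    [1.3, 0.95],
    [3.4, 0.70],
    [2.8, 0.80],
    [1.1, 0.98],
])

# Covariance matrix of the data set and its inverse, as the distance requires
C = np.cov(features, rowvar=False)
C_inv = np.linalg.inv(C)

f1, f2 = features[0], features[2]
d = mahalanobis(f1, f2, C_inv)   # sqrt((f2 - f1)^T C^-1 (f2 - f1))
print(f"Mahalanobis distance between contours 0 and 2: {d:.3f}")
```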
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420327544212341, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/228403/combinations-help-please/228412
# Combinations help please A coin is flipped eight times where each flip comes up either heads or tails. How many possible outcomes a) are there in total? b) contain exactly three heads? c) contain at least three heads? d) contain the same number of heads and tails? the order does matter here! - I’ve changed the title: the question isn’t about permutations. – Brian M. Scott Nov 3 '12 at 19:56 Since this is homework, have you tried anything that you could show us? – Jean-Sébastien Nov 3 '12 at 19:56 the order does matter here! – Alpha Nov 3 '12 at 19:56 I have no idea how to solve... – Alpha Nov 3 '12 at 19:58 ## 1 Answer I'm doing $b)$ for you, in two different, yet similar ways. You want $3$ heads, which implicitly means you will have $5$ tails. We can see this as permutations of the word $HHHTTTTT$. Since we have similar objects, this is done in $$\frac{8!}{3!5!}$$ ways. I believe this is where you see permutations. Here is another method that yields the same result. You have $3$ heads to place into $8$ slots, the remaining $5$ must be tails. The number of ways to choose where the heads go is given by $${8\choose 3}=\frac{8!}{3!5!}.$$ Note that we could have chosen where we want to place the tails, in ${8\choose 5}$ ways, which gives the same thing. This is where Brian says it is not really permutations, but combinations. Can you figure out the rest now? -
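To complete the picture, a short check of all four parts (my own addition, following the same counting ideas): $2^8$ outcomes in total, $\binom{8}{3}$ with exactly three heads, the complement count for at least three heads, and $\binom{8}{4}$ for equal heads and tails.

```python
from math import comb

n = 8
total = 2 ** n                                               # (a) 256
exactly_three = comb(n, 3)                                   # (b) 56
at_least_three = total - sum(comb(n, k) for k in range(3))   # (c) 256 - (1 + 8 + 28) = 219
equal_heads_tails = comb(n, 4)                               # (d) 70
print(total, exactly_three, at_least_three, equal_heads_tails)
```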
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516989588737488, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/31042/how-to-calculate-speed-difference-between-objects-close-to-the-speed-of-light/31043
# How to calculate speed difference between objects close to the speed of light? If two different objects (for example two rockets) move in opposite directions at close to the speed of light (for example 0.8c and 0.9c), how do I calculate the difference in speed between the two (which in classical physics would be 0.8c + 0.9c)? - Am I confusing the grammar here, such as asking what is the rate of egress and not the difference in speed? – Argus Jun 30 '12 at 18:47 @Argus Ah, yes, I meant the total speed of the rockets closing in, not the difference of speed. Thanks for pointing that out. – Quispiam Jul 1 '12 at 18:25 @alfredCentauri: yeah alfred seems to be on a roll the last few days, kudos for your dedication to our site :) – Argus Jul 1 '12 at 18:31 ## 2 Answers First, let's be clear on the physical setup here. Suppose that, in the reference frame of the Earth, there is a rocket moving in one direction at 0.8 c and another rocket moving in the opposite direction at 0.9 c. In this reference frame, the distance between the two rockets is increasing at a rate of 1.7 c. That's OK because, in this frame, no thing is observed to be traveling faster than light. However, to determine the speed of one rocket, as observed in the reference frame of the other rocket, we must use the relativistic velocity addition formula since we are combining speeds from two different reference frames: $s = {v+u \over 1+(vu/c^2)}$ where, in this case, $v$ is the velocity of one rocket in Earth's reference frame and $u$ is the velocity of Earth in the other rocket's reference frame. - Thank you Alfred Centauri, your answer helped a lot! – Quispiam Jul 1 '12 at 18:28 Great! Thanks for the feedback! – Alfred Centauri Jul 2 '12 at 1:34 A very useful concept in thinking about this issue is "rapidity". There is a slightly opaque (IMHO) article on this subject in Wikipedia, but in one dimension, it seems intuitively rather close to the old Star Trek "warp" notion used to describe speed. The great thing is that it is additive between frames, and at low speeds v (as compared to the speed of light c) it reduces to (v/c), usually denoted beta. The rapidity u of an object traveling at velocity v with respect to an observer is: u = arctanh(v/c) = arctanh(beta). Thus, v = c*tanh(u). Here tanh is the hyperbolic tangent, and arctanh is its inverse function. As for the analogous trig functions, tanh = sinh/cosh. The rapidity u of a rocket (or other object) in a given frame is just the speed of that rocket, as would be calculated non-relativistically by an inertial guidance system on board the rocket itself, after it accelerated from rest to its given velocity v. It is the speed you would naively calculate if you were on the ship, using just an accelerometer and the ship's clock, and knowing nothing of Relativity. If your rapidity is 1, you are traveling at ~0.761*c; this is very nearly the speed you would reach if you accelerated in a straight line at 1g for 1 year. Rapidity 2 is nearly what you would reach if you accelerated at 1g for 2 years, when v would be ~ 0.964*c; etc. Call it "warp 2"! - Thanks Bill Wheaton, good to know. – Quispiam Jul 1 '12 at 18:30
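A tiny numerical illustration of both answers (my own addition): the closing speed in the Earth frame versus the speed of one rocket measured from the other, and the same result obtained by adding rapidities.

```python
import math

c = 1.0          # work in units where c = 1
v, u = 0.8, 0.9  # speeds of the two rockets in the Earth frame (opposite directions)

closing_speed = v + u                            # rate the gap closes in the Earth frame: 1.7 c
relative_speed = (v + u) / (1 + v * u / c**2)    # relativistic velocity addition: ~0.988 c

# Same result via rapidities, which simply add between frames
rapidity = math.atanh(v) + math.atanh(u)
print(closing_speed, relative_speed, math.tanh(rapidity))
```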
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390938878059387, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/8893/volume-bounded-by-cylinders-x2-y2-r2-and-z2-y2-r2
# Volume bounded by cylinders x^2 + y^2 = r^2 and z^2 + y^2 = r^2 I am having trouble expressing the titular question as iterated integrals over a given region. I have tried narrowing down the problem, and have concluded that the simplest way to approach this is to integrate over the XZ plane in the positve octant and multiply by 8, but I am having trouble identifying the bounding functions. - +1 for saying what you've tried. – J. M. Nov 4 '10 at 11:30 ## 2 Answers The solid lies above the region $D$ in the $xy$-plane bounded by the circle $x^{2} + y^{2} = r^{2}$, so the volume is given by the integral $$\int\int\limits_{D} f(x,y) \ dA = \int\limits_{-r}^{r}\int\limits_{-\sqrt{r^{2}-y^{2}}}^{\sqrt{r^{2}-y^{2}}} f(x,y) \ dx dy$$ Therefore the required volume of the solid is: $$\int\limits_{-r}^{r}\int\limits_{-\sqrt{r^{2}-y^{2}}}^{\sqrt{r^{2}-y^{2}}} 2\sqrt{r^{2}-y^{2}} \ dx dy = \frac{16}{3}r^{3}$$ - 1 Perfect! Short. Succint. Beautiful : ) Would have given +1 had I had the rep! – Kris Nov 4 '10 at 11:54 This is one of those results in calculus which were anticipated by Archimedes. He gave a correct formula for the volume but it is not known exactly how Archimedes solved this problem. There is, however, a simple way to obtain the answer without much calculus. Let me quote from late Gardner's The Unexpected Hanging and Other Mathematical Diversions (Gardner considers the case $r=1$ but this is not essential, of course): Imagine a sphere of unit radius inside the volume common to the two cylinders and having as its center the point where the axes of the cylinders intersect. Suppose that the cylinders and sphere are sliced in half by a plane through the sphere's center and both axes of the cylinders. The cross section of the volume common to the cylinders will be a square. The cross section of the sphere will be a circle that fills the square. Now suppose that the cylinders and sphere are sliced by a plane that is parallel to the previous one but that shaves off only a small portion of each cylinder (have a look at the picture on the left). This will produce parallel tracks on each cylinder, which intersect as before to form a square cross section of the volume common to both cylinders. Also as before, the cross section of the sphere will be a circle inside the square. It is not hard to see (with a little imagination and pencil doodling) that any plane section through the cylinders, parallel to the cylinders' axes, will always have the same result: a square cross section of the volume common to the cylinders, enclosing a circular cross section of the sphere. Think of all these plane sections as being packed together like the leaves of a book. Clearly, the volume of the sphere will be the sum of all circular cross sections, and the volume of the solid common to both cylinders will be the sum of all the square cross sections. We conclude, therefore, that the ratio of the volume of the sphere to the volume of the solid common to the cylinders is the same as the ratio of the area of a circle to the area of a circumscribed square. A brief calculation shows that the latter ratio is $\pi/4$. This allows the following equation, in which $x$ is the volume we seek: $$\frac{4\pi r^3/3}{x}=\frac{\pi}{4}.$$ The $\pi$'s drop out, giving $x$ a value of $16r^3/3$. The radius in this case is 1, so the volume common to both cylinders is $16/3$. As Archimedes pointed out, it is exactly $2/3$ the volume of a cube that encloses the sphere; that is, a cube with an edge equal to the diameter of each cylinder. 
- +1 because Gardner is, and forever will be, awesome. – J. M. Nov 4 '10 at 14:08 That's a very pretty argument! – Hans Lundmark Nov 4 '10 at 15:24
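Since the question was about setting up the iterated integral, here is a hedged symbolic check of the first answer's computation using SymPy (my own addition, not part of the thread):

```python
import sympy as sp

x, y, r = sp.symbols('x y r', positive=True)

# Height of the solid above the point (x, y) is 2*sqrt(r**2 - y**2),
# integrated over the disk x**2 + y**2 <= r**2.
inner = sp.integrate(2 * sp.sqrt(r**2 - y**2),
                     (x, -sp.sqrt(r**2 - y**2), sp.sqrt(r**2 - y**2)))
volume = sp.integrate(inner, (y, -r, r))
print(sp.simplify(volume))   # 16*r**3/3
```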
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433700442314148, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=2207213
## Scattering Length What is the physical meaning of the scattering length of neutrons? What does a negative (-ve) scattering length mean (for instance for 'H'!!!)?? mmm scattering length, perhaps I am having a language problem but to be honest I do not really know what you mean. The angle along which an incident particle is scattered off some target particle gives you an idea of the strength of the interaction going on. Info on mass, charge and nuclear composition can be gained through such experiments. In QFT things like scattering amplitude are used very often, but I do not know whether you are referring to this specific concept... marlon Recognitions: Science Advisor I'm going to take a jab at this, but maybe what you want is the concept of a mean free path. Typically this is proportional to the avg statistical length a particle can traverse in a medium before it's absorbed or emitted. Of course the constant of proportionality depends on what medium you are talking about (say lead). There are other closely related lengths particle experimentalists use, like the avg length before a hadronic shower, etc. ## Scattering Length I am sorry.. I think you people didn't understand my question.. If you have a beam of neutrons falling on a thin film the neutrons will get scattered. The strength of the scattering is proportional to the scattering length (the neutrons feel a Fermi potential, which is proportional to the scattering length of the nucleus, which is a constant for an atom). But the value of the scattering length for neutrons is different for the isotopes of the same element... for instance for H and for D the values are different. This difference is used in the isotopic substitution for contrast matching in neutron scattering experiments. ..more comments about scattering lengths are welcome.. like the sign etc.. cheers Recognitions: Homework Help Science Advisor Quote by manesh What is the physical meaning of the scattering length of neutrons? What does a negative (-ve) scattering length mean (for instance for 'H'!!!)?? If I recall, the sign of the scattering length is related to the existence of bound states. A large positive scattering length signals the presence of a shallow (E near zero) bound state. A negative scattering length means that there is no bound state. The energy of the bound state can be found from the scattering length (it goes like $-{\hbar^2 \over m\, sl^2}$, if I recall correctly). As for the meaning of the scattering length "sl", if one considers low energy scattering, then the cross section (which will be isotropic, i.e., an S-wave) will be entirely determined by the energy of a shallow bound state. The cross section is essentially $4\pi\, sl^2$, iirc. Anyway, I haven't looked at this concept in ages so take all this with a grain of salt! Pat Physical meaning of scattering length (nothing to do with mean free path): For low energies of the incident particle the details of the scattering potential are unimportant, only how the potential looks from far away. This is because at low energies the particle is not going to actually touch the object producing the scattering potential. The scattering length is a measure of how far from the potential the details become important. This is similar to multipole expansion in electrodynamics.
Two positive charges from far away will look like a single particle with twice the charge. In regards to the other responses, with all due respect, if you don't think you have the answer you really shouldn't post, because the original poster is going to have to look somewhere else anyway to be sure. At least try to look up the answer first, so you know whether your answer is the correct one. Thanks. Mentor Quote by sarhas with all due respect, if you don't think you have the answer you really shouldn't post, Your lecture is coming five years too late. Take a look at the date of these messages.
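To tie the sign discussion back to something measurable: at low energy the s-wave cross-section depends only on the magnitude of the scattering length, roughly $\sigma \approx 4\pi b^2$, so a negative $b$ (as for H) shows up in the phase of the scattered wave rather than in the cross-section, which is what the H/D contrast matching mentioned above exploits. A small numeric illustration (my own addition; the scattering lengths are the commonly tabulated bound coherent values, quoted here only as an example):

```python
import math

# Bound coherent scattering lengths (commonly tabulated values, in femtometres)
b = {"H": -3.74, "D": 6.67}

for nucleus, b_fm in b.items():
    sigma_fm2 = 4 * math.pi * b_fm**2   # s-wave coherent cross-section, in fm^2
    sigma_barn = sigma_fm2 / 100.0      # 1 barn = 100 fm^2
    print(f"{nucleus}: b = {b_fm:+.2f} fm, sigma_coh ~ {sigma_barn:.2f} barn")
```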
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925041139125824, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/66035?sort=newest
## The Vitali Covering Theorem and use of closed balls Concerning the Vitali Covering Theorem, what is the significance of closed balls in the hypothesis? In particular wouldn't open balls also work? - There are so many theorems going under this name or variants, and so many proofs, that it's hard to say this for sure; but in most of the theorems of this sort, and most of the proofs of those theorems, that I've seen, one uses the fact that a point that hasn't been covered by a finite collection of balls is at a positive distance from them. Of course, this requires closed-ness. – L Spice May 26 2011 at 13:08 ## 1 Answer Assuming that you are considering the Vitali covering theorem for Radon measures in $\mathbb{R}^n$ the answer is that open balls don't work. I'll leave the task of finding a counterexample (for example already in $\mathbb{R}$) to you, as I remember it being a homework assignment on our real analysis course. (Notice that open balls work trivially for any measure that gives zero measure to the sphere.) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9649474620819092, "perplexity_flag": "head"}
http://mathoverflow.net/questions/4699?sort=newest
## Examples of left reversible semigroups I am looking for concrete examples of cancellative, left reversible semigroups. Left reversible semigroups are also called "Ore semigroups". See this wikipedia page for the definition of a left reversible semigroup. Of course, commutative semigroups are automatically left reversible, and I am looking for non-commutative examples. Please also mention if these semigroups arise in an interesting setting. - Thanks for the answers. I am quite satisfied now. – Orr Shalit Nov 12 2009 at 2:16 ## 5 Answers Some examples and further references (and an interesting setting) are given in this paper by Laca: http://arxiv.org/abs/math/9911135 See section 1.1, pages 2-3. - The semigroup $S= \langle a,b\mid a^2=b^2\rangle$ is two-sided reversible. - Here is an example that I found in the book "Algebraic Theory of Semigroups, vol. I" by Clifford and Preston (the exercise on p. 36): The universal semigroup generated by two elements $a,b$ such that $ab = ba^k$. This semigroup can be concretely described as the set of pairs $(i,j)$, with $i,j$ nonnegative integers, with multiplication $$(i,j)(m,n) = (i+m, jk^m + n) .$$ - I suppose that the non-zero elements of a left Ore domain would work --- presumably this is why they are sometimes called Ore semigroups. To expand: a ring is a left Ore domain if it has no non-trivial zero-divisors and for every non-zero element s of the ring and every other element r of the ring one can find r' in the ring and s' non-zero and in the ring such that rs'=sr'. Goldie's theorem says every left Noetherian ring without zero-divisors is a left Ore domain so the non-zero elements will form a left-reversible cancellative semigroup. In fact Goldie's theorem says a little more than this but I don't have time to check if the non-zero divisors will always give what you want in any left Goldie ring. (It is possible I have my lefts and rights mixed up here; if so, just swap them around!) - I just did the check I said I didn't have time for, and yes, the non-zero divisors in a left Goldie ring (and therefore any left Noetherian ring) will be a left-reversible cancellative semigroup – Simon Wadsley Nov 10 2009 at 13:33 I realised on the way home that my brain failed to move the statement of Goldie's theorem from its back to its front correctly. It only applies to semiprime (left) Goldie rings. i.e. there should be no nilpotent ideals. – Simon Wadsley Nov 10 2009 at 18:54 Thanks for the answer. Naturally, I did not know of many concrete examples of Ore left domains, either. – Orr Shalit Nov 12 2009 at 2:15 Do you know examples of left Noetherian rings? There are many. I can give some if you would like. – Simon Wadsley Nov 12 2009 at 8:34 Handle with care - see comments: Imagine some system with states and transitions between them, e.g.
an automaton as in computation theory or a Markov chain as in stochastics, for simplicity assume that you have an assigned initial state. Then you can label the transitions by letters and record your transition history, starting from the initial state, by forming words out of them. This gives you a semigroup. Now declare two words to be equal if they lead you from the initial state to the same state (careful - this is not the way you usually use automata in group theory). Then left reversibility means that you can, from two given states always go on to reach one same state, i.e. any two starts of your program can lead to the same outcome. A non-deterministic terminating algorithm gives a meaningful example of such a thing. Non-deterministic means that there actually are several ways through your state diagram, terminating means that you always end in the final state. Another (boring) example: The multiplicative semigroup of a ring - the zero always does the job. This is like an automaton which always has a one-step way from any state to the unique final state. - These examples don't appear to be cancellative. – Jonas Meyer Nov 10 2009 at 0:40 Absolutely true - I overread "cancellative"! The answer is bullshit - I will delete it tomorrow, if no one objects... – Peter Arndt Nov 10 2009 at 2:18 Thanks. Please don't erase, that's an interesting example and explanation, and somebody might find it useful. – Orr Shalit Nov 10 2009 at 20:39 Ok, I leave it, but it really has more flaws than just the cancellativeness issue. To make the semigroup multiplication globally defined (you have to be able to multiply with a given element, no matter in which state you are), you need a special kind of automaton... – Peter Arndt Nov 11 2009 at 0:38
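A quick computational sanity check of the Clifford and Preston example above (my own addition). I read the pair $(i,j)$ as the normal form $b^i a^j$, which is an interpretive assumption not spelled out in the answer; under that reading the defining relation $ab = ba^k$ and associativity both hold for the stated multiplication, as the sketch below confirms for a sample of elements.

```python
from itertools import product

K = 3  # the fixed exponent k in the relation a*b = b*a^k

def mult(p, q):
    # (i, j) * (m, n) = (i + m, j * K**m + n), with (i, j) standing for b^i a^j
    (i, j), (m, n) = p, q
    return (i + m, j * K**m + n)

a, b = (0, 1), (1, 0)        # a = b^0 a^1, b = b^1 a^0
a_pow_k = (0, K)             # a^K

# Defining relation: a * b should equal b * a^K
assert mult(a, b) == mult(b, a_pow_k) == (1, K)

# Spot-check associativity on a small grid of elements
elems = [(i, j) for i, j in product(range(3), range(4))]
for p, q, r in product(elems, repeat=3):
    assert mult(mult(p, q), r) == mult(p, mult(q, r))
print("relation a*b = b*a^k and associativity hold on the sample")
```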
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312711358070374, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/199951-perpendicular-distance-point-arc.html
# Thread: 1. ## Perpendicular Distance from Point to Arc I am sure the major portion of this has been covered, but I have an exception that I need to hash out. I want to find the intersection of two great circle arcs (numerically). I have already determined the location of the two intersection points of the two corresponding great circles. However, this leaves me with the problem of determining which point lies on the great circle arc. Let $p_1,\,q_1, p_2, q_2$ define the great circle arcs, $A_1,A_2$, respectively, and let $I_1,\, I_2$ correspond to the intersections of the great circles. These intersections are antipodal to each other, which is what causes my problem. To calculate the shortest distance between a point $X$ and a great circle I use the following: $n = p \times q$, then the perpendicular distance is given by $\theta = \cos^{-1}(n \cdot X) - \frac{\pi}{2}$. From here I check to see if $\theta \le \epsilon$. But, because of $0 \le \cos^{-1} X \le \pi$ I get that both $I_1,\, I_2$ are on the arc. How do I get the proper angle from $\cos^{-1}$ using the information given? 2. ## Re: Perpendicular Distance from Point to Arc Is this a poorly asked question? It seems strange that no one has been able to answer this. 3. ## Re: Perpendicular Distance from Point to Arc It is a little difficult to follow. I assume that one great circle arc is defined by two vectors p1 and q1, and the other by p2 and q2, correct? So you are trying to find the angular separation between X and either one of the great circles. I don't understand what you mean by "I get that both I_1 and I_2 are on the arc" -- they are the intersection points of the two great circle arcs, so yes, they are both "on the arc." I also don't get the formula for angular separation: $\theta = \cos^{-1}(n \cdot X) - \frac {\pi}{2}$: it seems with this formula that if X is on the arc then X is perpendicular to n, the dot product of n and X is 0, and hence the formula yields $\theta = \frac{-\pi}2$ instead of 0 as expected. I would think $\theta = \sin^{-1}(n \cdot X)$ would work better? One other thing - the values for theta ought to lie between 0 and pi/2; if the dot product yields a negative number I think you want to use its absolute value: $\theta = \sin^{-1}(|n \cdot X|)$ 4. ## Re: Perpendicular Distance from Point to Arc I define two Arcs $A_1,\, A_2$ by the points $\left\{p_1,\,q_1\right\},\left\{p_2,\,q_2\right\},$ respectively, just as you said. What I want is $A_1 \cap A_2$. However, the way I have done this is by finding the intersection of the two great circles corresponding to the given points. Since I find the intersection of two great circles, I get two intersections $I_1,\, I_2$. Only one of these two points actually lies on $A_1 \cap A_2$, so I figured I could just find the distance of each point to one of the arcs; the point with an angular distance closest to 0 would be the intersection of the two arcs. However, I get the same angular distance for both points. I hope this description is clearer. EDIT: The problem is that both $I_1$ and $I_2$ lie on the great circle, however only one lies on the arc. The question is how do I determine which lies on the arc? 5. ## Re: Perpendicular Distance from Point to Arc Originally Posted by lvleph The problem is that both $I_1$ and $I_2$ lie on the great circle, however only one lies on the arc. The question is how do I determine which lies on the arc? OK, I see.
Two great circles will intersect at two points (assuming they are not coplanar), whereas two circle segments may have 0, 1 or 2 intersection points. How about this - if the angular distance from p to I_1 plus the angular distance from I_1 to q equals the angular distance from p to q, then I_1 is between p and q and hence on the arc. If I_1 is not on the arc then the sum of these angles would be greater than that of the arc. You would have to use positive angles only, and control for the case where the arc is greater than 180 degrees. 6. ## Re: Perpendicular Distance from Point to Arc That sounds plausible. The other solution I was thinking might work is to look at $\|p_1 - I_1\|_0$ and $\|p_1 - I_2\|_0$; the smaller distance gives the point I want. This will work because I know my arcs are small. 7. ## Re: Perpendicular Distance from Point to Arc If the arcs are always less than 90 degrees in length and if there is always an intersection point on the arcs then yes, it would work. 8. ## Re: Perpendicular Distance from Point to Arc Good point, part of what I am checking is if there is even an intersection, so my method won't work. 9. ## Re: Perpendicular Distance from Point to Arc Numerically, your test has a problem. If the intersection is one of the end points of an arc it can cause acos to return NaN. But, I can detect this.
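Pulling the thread's conclusion together, here is a hedged numerical sketch (my own code and function names, not from the posters) that computes the two candidate intersection points of the great circles through $p_1,q_1$ and $p_2,q_2$ and keeps the candidate, if any, that passes the betweenness test on both arcs; clipping the dot product also sidesteps the acos NaN issue raised in the last post.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def ang(u, v):
    # Angular distance between unit vectors, clipped to dodge acos rounding NaNs
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def arc_intersection(p1, q1, p2, q2, tol=1e-9):
    """Return the intersection point of the two great-circle arcs, or None."""
    n1, n2 = unit(np.cross(p1, q1)), unit(np.cross(p2, q2))
    base = unit(np.cross(n1, n2))
    for cand in (base, -base):
        # Betweenness test: dist(p, I) + dist(I, q) == dist(p, q) on both arcs
        on_arc1 = abs(ang(p1, cand) + ang(cand, q1) - ang(p1, q1)) < tol
        on_arc2 = abs(ang(p2, cand) + ang(cand, q2) - ang(p2, q2)) < tol
        if on_arc1 and on_arc2:
            return cand
    return None

# Two short arcs, one along the equator and one along a meridian, that do cross
p1, q1 = unit(np.array([1.0, -0.2, 0.0])), unit(np.array([1.0, 0.2, 0.0]))
p2, q2 = unit(np.array([1.0, 0.0, -0.2])), unit(np.array([1.0, 0.0, 0.2]))
print(arc_intersection(p1, q1, p2, q2))   # ~ [1, 0, 0]
```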
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951533317565918, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/156709-derive-equation-runner.html
# Thread: 1. ## Derive equation of runner Hi, An athlete usually runs 80km at a steady speed of $v$km per hour. He decides to reduce his speed by 2.5km per hour resulting in his run taking an extra 2 hours and 40 minutes. Derive $\frac{80}{v}+\frac{8}{3}=\frac{160}{2v-5}$ Here's what I did which unfortunately is wrong: $\frac{80}{60(v-2.5)}-\frac{80}{60v}=160$ (Converted it to minutes) $\frac{80}{60(v-2.5)}=160+\frac{80}{60v}$ $\frac{80}{60v-150}=160+\frac{80}{60v}$ $80=160(60v-150)+\frac{80(60v-150)}{60v}$ $80=9600v-24000+\frac{80(60v-150)}{60v}$ $24080-9600v=\frac{80(60v-150)}{60v}$ $60v(24080-9600v)=80(60v-150)$ $1444800v-576000v^2=4800v-12000$ $576000v^2-1440000v-12000=0$ $48v^2-120v-1=0$ How on Earth did I get it so far from the answer? Any help is appreciated. 2. Originally Posted by webguy Hi, An athlete usually runs 80km at a steady speed of $v$km per hour. He decides to reduce his speed by 2.5km per hour resulting in his run taking an extra 2 hours and 40 minutes. Derive $\frac{80}{v}+\frac{8}{3}=\frac{160}{2v-5}$ (time in hrs it normally takes) + (2 hrs + 40 min) = (longer time when he runs slower) $\frac{80}{v} + \frac{8}{3} = \frac{80}{v-2.5}$ clear the decimal in the last fraction ... $\frac{80}{v} + \frac{8}{3} = \frac{160}{2v-5}$ common denominator is $3v(2v-5)$ ... $\frac{80 \cdot 3(2v-5)}{3v(2v-5)} + \frac{8v(2v-5)}{3v(2v-5)} = \frac{160 \cdot 3v}{3v(2v-5)}$ numerators form the equation ... $240(2v-5) + 8v(2v-5) = 480v$ $30(2v-5) + v(2v-5) = 60v$ $60v - 150 + 2v^2 - 5v = 60v$ $2v^2 - 5v - 150 = 0$ $(2v + 15)(v - 10) = 0$ only solution which works in the context of the problem is v = 10 km/hr at 10 km/hr, takes 8 hrs to run 80 km at 7.5 km/hr, takes 10 hrs 40 min 3. Originally Posted by skeeter $\frac{80}{v} + \frac{8}{3} = \frac{80}{v-2.5}$ clear the decimal in the last fraction ... $\frac{80}{v} + \frac{8}{3} = \frac{160}{2v-5}$ How did you do that step? It looks like you only multiplied one side by $\frac{2}{2}$. Also, why is my answer so incorrect? The logic of my first step seems correct in my mind; is it not? 4. Originally Posted by webguy How did you do that step? It looks like you only multiplied one side by $\frac{2}{2}$. correct ... 2/2 = 1 ... multiplication by 1 changes nothing. Also, why is my answer so incorrect? The logic of my first step seems correct in my mind; is it not? your "conversion" to minutes (which is not necessary to begin with) is incorrect. note that every term in the first equation represents time in hrs. conversion to minutes requires multiplying each term by 60 min/hr. ... 5. But I converted every hour to minutes. Even though I was working in minutes I don't see how what I did was wrong. Would you mind pointing out the specific flaws of my answer? 6. Originally Posted by webguy But I converted every hour to minutes. Even though I was working in minutes I don't see how what I did was wrong. Would you mind pointing out the specific flaws of my answer? no, you didn't ... I told you in my previous post that each term is a time in hours. specifically, using the first term of the original equation ... $\displaystyle \frac{80 \, km}{v \, km/hr}$ = a specific amount of time in hours $\displaystyle \frac{80 \, km}{v \, km/hr} \cdot \frac{60 \, min}{hr}$ = that same time above in minutes 7. So it should be $\frac{4800}{60v}$? 8. Originally Posted by webguy So it should be $\frac{4800}{60v}$? no 9. Originally Posted by skeeter no Could you elaborate? 10. Originally Posted by webguy Could you elaborate?
Maybe someone with the time and inclination will elaborate in the fullness of time. In the meantime, you need to carefully reflect on the help you've already been given and show that you have taken this help on board. 11. Originally Posted by skeeter $\displaystyle \frac{80 \, km}{v \, km/hr} \cdot \frac{60 \, min}{hr}$ = that same time above in minutes $60\times(\frac{80}{v})$ which gives $\frac{4800}{v}$. So my original should be: $\frac{4800}{v-2.5}-\frac{4800}{v}=160$ If this is wrong, feel free to say so without an explanation. I'll end the thread there, since I doubt I'll ever understand. Unfortunately I fail to understand why multiplying the hours by 60 minutes and dividing into the distance is incorrect, but multiplying the distance by the minutes is correct.
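As a quick numerical sanity check of skeeter's derivation (my own addition, not part of the thread), the sketch below solves the equation in hours with SymPy and confirms the two travel times; the variable names are made up for illustration.

```python
from sympy import Eq, Rational, solve, symbols

v = symbols('v', positive=True)

# usual time + 2 h 40 min (= 8/3 h) equals the time at the reduced speed
runner_eq = Eq(80 / v + Rational(8, 3), 80 / (v - Rational(5, 2)))

speeds = solve(runner_eq, v)
print(speeds)                              # [10]  -> v = 10 km/h
print(80 / speeds[0])                      # 8     -> 8 hours at the usual speed
print(80 / (speeds[0] - Rational(5, 2)))   # 32/3  -> 10 hours 40 minutes when slower
```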
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9610649943351746, "perplexity_flag": "middle"}
http://nrich.maths.org/337/index?nomenu=1
$$\begin{eqnarray} a(n) &= &1 + 2 + 3 + ... + n \\ b(n) &= &1^2 + 2^2 + 3^2 + ... + n^2\\ c(n) &= &1^3 + 2^3 + 3^3 + ... + n^3. \end{eqnarray}$$ It is well known that $c(n) = a(n)^2$ . What are the relationships between $a(n)$ and $b(n)$ and between $b(n)$ and $c(n)$?
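A brief numeric exploration of these sums (my own addition, not part of the problem page), using brute-force summation for small $n$:

```python
def a(n): return sum(range(1, n + 1))
def b(n): return sum(k ** 2 for k in range(1, n + 1))
def c(n): return sum(k ** 3 for k in range(1, n + 1))

for n in range(1, 7):
    # check the stated identity c(n) = a(n)^2 and tabulate the values for comparison
    print(n, a(n), b(n), c(n), c(n) == a(n) ** 2)
```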
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.997689425945282, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/104723-independent-identically-distributed-r-v-s.html
# Thread: 1. ## independent identically distributed R.V.'s If $X_i$ for $i=1,\ldots,n$ are i.i.d., then can we show that $X_i^2$ for $i=1,\ldots,n$ are also i.i.d.? 2. Yup. Whatever distribution you had for $X$, your new random variables $Y_i=X_i^2$ will all have the same new distribution. And also, if $X_i$ is independent of $X_j$ for $i\ne j$, then $X_i^2$ is independent of $X_j^2$ for $i\ne j$. 3. ## iid Is there a way to show that the $X_i^2$ are independent? 4. Do you want a measure-theoretic proof, or one for just continuous or discrete r.v.'s? 5. ## iid Any one of those is fine. Thank you very much. 6. Think about the independence of the sigma-algebras generated by each of the random variables.
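A sketch of the measure-theoretic argument hinted at in the last reply (my own summary, not from the thread), using the standard fact that sub-sigma-algebras of independent sigma-algebras are independent:

```latex
% g(x) = x^2 is Borel measurable, so each sigma(X_i^2) is a sub-sigma-algebra of sigma(X_i):
\sigma(X_i^2) = \{(X_i^2)^{-1}(B) : B \in \mathcal{B}(\mathbb{R})\}
              = \{X_i^{-1}(g^{-1}(B)) : B \in \mathcal{B}(\mathbb{R})\}
              \subseteq \sigma(X_i)
% The sigma(X_i) are independent because the X_i are, hence so are the smaller
% sigma-algebras sigma(X_i^2); that is, the X_i^2 are independent.
% Identical distribution: P(X_i^2 \in B) = P(X_i \in g^{-1}(B)), which is the same for every i.
```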
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287089705467224, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6082/which-fortuneteller-is-better/6193
## Which fortuneteller is better We have a probability game with $N$ events, each of which has outcome $A$, $B$ or $C$. We do/will NOT know the real probabilities afterwards: only the discrete outcome ($A$, $B$ or $C$) of each event. Player 1 forecasts these events with certain probabilities (not only guessing what the outcome is, but giving a probability for each outcome option), and Player 2 does the same with his own probability estimates. How can we know how accurate Player 1's and Player 2's predictions were (in relation to reality), and how should we measure the accuracy? I have heard that one can use Akaike's information criterion to solve the problem. I was wondering about another way, but I need an expert's opinion on whether this works: I heard that one can start solving the problem by modeling the process with a multinomial distribution and then taking its Dirichlet distribution. But how does this lead to a solution? Okay, I agree that one can write the solution like "Take some Dirichlet distribution. Now use Akaike's information criterion", but I would like to know whether this problem can be solved using the Dirichlet distribution in some reasonably essential way, so that you cannot remove the distribution argument and still have a valid solution. - ## 4 Answers I could be missing something here, but I'd compute P[these events occur | fortune teller 1 is telling the truth] and P[these events occur | fortune teller 2 is telling the truth]. More explicitly, let $N_a$, $N_b$ and $N_c$ be the number of times outcomes A, B and C occurred. Then $P(\text{this happened} \mid \text{fortune teller 1 told the truth}) = \binom{N}{N_a,\,N_b,\,N_c}\, P_{1,a}^{N_a}\, P_{1,b}^{N_b}\, P_{1,c}^{N_c}$, where $P_{1,a}$ is the probability fortune teller one assigns to outcome $A$ (similarly for $B$ and $C$). Do the same computation for fortuneteller 2 and compare the two. This tells us who assigned a higher probability to the outcome that came out. We might have some other criterion for "better", though (who is more often "close", for example), but I doubt it will change the answer for "reasonable" definitions of "close". - I was wondering how this method works if one player gives many correct predictions for easily guessed outcomes while the other gives fewer correct predictions for harder-to-guess ones. – Student Nov 19 2009 at 13:12 I might have been misreading here. Do the fortune tellers give a new distribution (probabilities of A, B and C) before each event? Or do they just say "here is the probability breakdown" and it doesn't change, i.e. do we assume the results are IID? I assumed they were. – Jonathan Kariv Nov 19 2009 at 14:11 I understood that they just say here is the probability breakdown and it doesn't change. – Student Nov 19 2009 at 16:44 I was going to post almost this exact question today. Weird. Anyway, I think we need some kind of model for the cost of getting a wrong probability before we can decide which fortune teller to use. Here's an approach that does that. Suppose that we want to send a long N-character message made up of symbols A, B and C over a noiseless channel but we don't know what the message is.
Suppose a fortune teller tells us the probabilities of each symbol (possibly allowing for different (but still independent) probabilities for the same symbol in different positions in the message). We can use that information to construct a compression system for sending the message (supposing we can agree on the compression system with the people at the other end before we know what the message is). We then get the actual message and send it. The cost is the number of bits required to send the message. The better the distribution fits the outcome, the fewer bits we need. In the limit where the fortune teller gets every individual symbol right, we don't have to actually send any message; it's already contained in the compression system description. (We need large N so that we can design a coding system that is close to the Shannon limit.) This is the Kullback–Leibler divergence, which is used to compare probability distributions. We're comparing the fortune teller's distribution with the probability distribution that is 1 for the actual outcome and 0 for everything else. Put another way, it's the amount of extra information we learn on seeing the outcomes compared to what we knew based solely on the fortune teller's claims. We'd like this difference to be as small as possible. - There is a problem in evaluating fortunetellers or weather forecasters, which is that when N is large they can demonstrate (almost) perfect forecasting without knowing anything about the fortunes or weather, but only by calibrating their previous answers. This is a theorem of Dean Foster and Rakesh Vohra, The Annals of Statistics, 19 (1991), 1084-1090. See also the review paper by the same authors, which discusses various related results: "Regret in the On-line Decision Problem," Games and Economic Behavior, 1999, 7-36. - Hi Jonathan, to answer your question: the probability distribution of each event is unique (but not known). Also, the players will make predictions for each event. Events are 100% independent of the other events. My method has been the following: 1. The general idea has been to generate "bets" between Player 1 and Player 2. For example, if Player 1 has a higher probability on A (let's say 50%) than Player 2 (let's say 40%), Player 1 places a bet against Player 2 on the event's outcome A. 2. In order to weight different variations of the probabilities (so that the "bet" will be bigger when Player 1 predicts 90% rather than 50% against Player 2's 40%) and different probabilities of the outcome, I have used a unique "bet size" for each event. I have followed the "cash box" idea, where players try to maximize the growth of their cash box, and the "bet size" can be calculated with Kelly's formula, which is bet = (p1/p2 - 1)/(1/p2 - 1), where p1 and p2 are Player 1's and Player 2's probabilities for event A (and 1/p2 is the inverse probability of the event, i.e. the "odds"). 3. In the simulation, I have subtracted the "bet" from Player 1's account if the event does NOT happen, and added "bet" x (1/p2 - 1) if the event occurred. 4. This is looped over 1..N (the number of events). However, one problem is how to know the reliability of the simulation. What if we have just N=3, and in a simplistic case let's assume the events are coin flips, where each event has 50%/50% probability? If it happens that a player predicts 70% for "heads" (stupid!!), but 2 coins out of 3 come up heads, then Player 1 wins. But of course this is pure luck. Is there any way we can prove that? Olli -
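The likelihood comparison in the first answer is easy to put into code; below is a small sketch (mine, not from the thread) that scores two players' forecast probabilities against observed outcomes with the log score, which amounts to comparing the probabilities each player assigned to what actually happened. The outcomes and probabilities are made-up toy values.

```python
import math

outcomes = ["A", "C", "B", "A", "A", "C"]                 # observed results (toy data)

# per-event forecast probabilities for each player (toy numbers)
player1 = [{"A": 0.5, "B": 0.3, "C": 0.2}] * len(outcomes)
player2 = [{"A": 0.4, "B": 0.4, "C": 0.2}] * len(outcomes)

def log_score(forecasts, observed):
    """Sum of log-probabilities assigned to the realized outcomes (higher is better)."""
    return sum(math.log(f[o]) for f, o in zip(forecasts, observed))

print("Player 1:", log_score(player1, outcomes))
print("Player 2:", log_score(player2, outcomes))
```

Up to a sign and a constant this is the same comparison as the code-length argument in the second answer: the player with the higher log score is the one whose stated probabilities would have compressed the observed sequence into fewer bits.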
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462391138076782, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/lebesgue-integral?page=5&sort=newest&pagesize=15
# Tagged Questions For question about integration, where the theory is based on measures. So it's almost always used together with the tag [measure-theory], and its aim is to specify questions about integral, not only properties of the measure. 2answers 161 views ### Measurable Functions How do we prove that a function $f$ is measurable if and only if $\arctan(f)$ is measurable? If I use the definition of measurable functions, that is, a function is measurable if and only if its ... 1answer 65 views ### integrals and characteristic functions $f$ is Lebesgue integrable over $A$, and $B$ is a measurable subset of $A$. I want to show $$\int_B f=\int_Af\chi_B$$, where $\chi_B$ is the characteristic function of $B$ (it is 1 on B and 0 ... 1answer 277 views ### Lebesgue Convergence using The General Lebesgue Dominated Convergence Theorem Let ${f_n}$ be a sequence of integrable functions on E for which $f_n \to f$ a.e. on E and f is integrable over E. Show that $\int_E |f-f_n| \to 0$ if and only if \$\lim_{ n\to\infty} \int_E |f_n| = ... 2answers 180 views ### The General Lebesgue Integral For a measurable function, $f$, on $[1, \infty)$ which is bounded on bounded sets, define $a_n = \int_n^{n+1} f$ for each natural number $n$. Is it true that $f$ is integrable over $[1, \infty)$ if ... 4answers 139 views ### Measure and Lebesgue Integral I got this exercise as homework and I found some problems in solving it. So I hope that someone can help me. Let $f:[0,1] \rightarrow R$ Lebesgue measurable and $S=\{x \in [0,1]:f(x) \in Z\}$. Show ... 1answer 321 views ### {$\int_{[1/n,1]}f$} to converge and yet $f$ is not $L$-integrable over $[0,1]$ Let $f$ be a function on $[0,1]$ and continuous on $(0,1]$. I want to find a function $f$ s.t. {$\int_{[1/n,1]}f$} converges and yet $f$ is not $L$-integrable over $[0,1]$. My attempts: I've found ... 1answer 61 views ### Uniform integrablity of measurable functions How can I show that if family of $f$ is uniformly integrable then so is {$|f|$}? $($by uniformly integrablity: $\forall \epsilon>0 \ \exists \delta>0: |\int_Ef|<\epsilon,\mu(E)<\delta)$ ... 1answer 94 views ### Limit of a measurable function and the Lebesgue integral Suppose $\{f_n\}$ is a sequence of lebesgue measurable functions such that $f_n\rightarrow f$, except on a set of measure $0$, as $n\rightarrow\infty$, and $|f_n(x)|\leq g(x)$, where $g$ is ... 2answers 99 views ### Trying to show that a function is zero almost everywhere given a constraint on its Lebesgue measure. We have that $g$ is a measurable and bounded function on $[a,b]$. I have $\int_a^cg=0$ for every $c\in[a,b]$. I want to show $g=0$ on $[a,b]$ except possibly on a subset of measure zero. Proof. By ... 2answers 94 views ### Limit and Lebesgue integral in a compact I have problem with the exercise that follows. Let $(z_m)_m \in R^n$ so that $\Vert z_m \Vert \rightarrow \infty$ when $m\to \infty$. Let $f:R^n \rightarrow [-\infty;+\infty]$ integrable. Show ... 1answer 319 views ### Finding Lebesgue Integral of $\frac{1}{\sqrt{x}}$ over $(0,1]$ How do I rigorously discover what $$\int_{(0,1]} \frac{1}{x^{1/2}} = \underset{0 \le \phi \le \frac{1}{\sqrt{x}}}{\sup} \int_{(0,1]} \phi$$ (for $\phi$ a simple function) is? Note that I have ... 0answers 70 views ### $\lim_{n \to \infty} \int^n_{-n}fdm=\int fdm$ Let $f:\mathbb{R} \to \mathbb{R}$ such that $f$ is integrable over $[-n,n]$ for every $n \in \mathbb{R}$ and assume that $$\lim_{n \to \infty} \int^n_{-n}fdm < \infty.$$ Proposition: $f$ is ... 
2answers 582 views ### Showing that $1/x$ is NOT Lebesgue Integrable on $(0,1]$ I aim to show that $\int_{(0,1]} 1/x = \infty$. My original idea was to find a sequence of simple functions $\{ \phi_n \}$ s.t $\lim\limits_{n \rightarrow \infty}\int \phi_n = \infty$. Here is a ... 1answer 165 views ### Extension of Fatou's lemma let $X$ be a finite measure space and $\{f_n\}$ be a sequence of integrable functions, $f_n \rightarrow f\text{ a.e.}$ on $X$. I want to show if (1) holds, then (2) holds too. \lim_{n \rightarrow ... 1answer 158 views ### Lebesgue Integral on a set of measure zero I need to show that if $f$ is an integrable function on $X$ and $\mu(E)=0 ,\ E\subset X$; then $\int _E f(x) d\mu(x)=0$ . In my attempts I've showed that \$\forall \epsilon > 0 \ \ \exists ... 1answer 96 views ### using sup of an unbounded function Is what I'm doing valid if we don't have any information on boundedness of $f$ or $f_n$? let $X$ be a finite measure space and $\{f_n\}$ be a sequence of nonnegative integrable functions, \$f_n ... 1answer 108 views ### Uniform integrability and Lebesgue convergence A). Given that $|X_n| \leq Y$ and $Y \in L$. Try to show $X_n$ is lebesgue integrable. b). Try to give any example for which $X_n \longrightarrow^{L} X$ yet $\not\exists Y \in L$ with \$|X_n| \leq ... 1answer 246 views ### Integration by parts and Lebesgue-Stieltjes integrals I want to use Integration by parts for general Lebesgue-Stieltjes integrals. The following theorem can be found in the literature: Theorem: If $F$ and $G$ are right-continuous and non-decreasing ... 3answers 165 views ### Non-Lebesgue Integrability of $1/|x|$ over $[1, \infty)$ How does one show that $\int_\mathbb{[1, \infty)}1/|x|$ is not (Lebesgue) integrable? What I could think of is as follows: Letting $f(x)=1/|x|$ (defined for $|x|\geq 1$), define \$f_n(x)=f\chi_{[1, ... 1answer 68 views ### What can we tell about a sequence of measurable functions on a finite measure space such that $\sup_n \int_X |f_n(x)|^2 d\mu < \infty$? I found this on a qualifier exam, and I think it will help me understand $L^p$ spaces better. Let $f_n$ be a sequence of measurable function on a finite measure space. Suppose that \sup_n \int_X ... 1answer 108 views ### to show a function is Lebesgue integral I need to show that $f=\frac{1}{\sqrt x}$ is Lebesgue integrable on [0,1]. My attempt: I need to show $\sum_{m=n}^\infty \frac{m}{n} \mu(E_m^{(n)})$ converges absolutely $\forall n$. ... 1answer 71 views ### Convergence of functions in $L^1$ I am trying to prove a theorem, and I have been able to reduce it to the following question. I feel that this should be easy, but I can't see the solution. If $(g_n)_{n\geq 1}$ is a sequence of ... 1answer 266 views ### improper Riemann integral and Lebesgue integral Let $f$ be a continuous function on $(0,1]$ and is defined as $f: [0,1] \to \mathbb R$. Show that if $f$ is lebesgue integrable on $[0,1]$, the improper Riemann integral \$\lim_{\epsilon \to 0} ... 1answer 84 views ### What does Luzin's theorem imply? Luzin's theorem states that: let $f:[a,b]\rightarrow R$ be an a.e. finite function, $f$ is measurable iff $\forall \epsilon \geq 0: \exists \phi_\epsilon$ continuous on $[a,b]$ and \$\mu\{x: f(x)\neq ... 3answers 121 views ### Lebesgue- integrability of roots and powers of a function If the powers of a function $f$ are Lebesgue integrable what can we say about the original function? for example if we take $f=\frac{1}{x} on [1, \infty]$, it is not integrable but $f^2$ is. Is there ... 
2answers 73 views ### Prove $\int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$ for every Lebesgue measurable set $X$ Let $c>0$. Let $X \subseteq (0,\infty)$ be a Lebesgue measurable set. Define $$cX := \{ cx \mid x \in X \}.$$ Then $$\int_{cX} \frac{dt}{t} = \int_{X} \frac{dt}{t}$$ Now I can prove this for ... 2answers 600 views ### Generalisation of Dominated Convergence Theorem Wikipedia claims, if $\sigma$-finite the Dominated convergence theorem is still true when pointwise convergence is replaced by convergence in measure, does anyone know where to find a proof of this? ... 3answers 114 views ### Is $C_0^\infty$ dense in $L^p$? I have a question concerning the Lebesgue spaces: Is $C_0^\infty$ dense in $L^p$ ? And if yes, why? Thanks! 1answer 183 views ### Easy application of the Dominated Convergence Theorem? I am struggling with an application of the Dominated Convergence Theorem (DCT) which has cropped up a few times in various proofs I have been studying, in particular a proof about approximating ... 2answers 67 views ### Expectation and Lebesgue integration question How might I show that if a random variable (call it Z) is such that EZ (expectation of Z) is finite (i.e. it is Lebesgue integrable), then nP(|Z|>n) tends to 0? 2answers 352 views ### Lebesgue measure sigma algebra Lebesgue measure on sigma algebra, help ........... Which of the following are sigma algebras? reply with justification please. All subsets in rational numbers { {0},{1},{0,1} }in space {0,1} all ... 2answers 182 views ### constructing a sequence of simple functions with Lebesgue measure approaching the riemann integral Let $\lambda$ denote the Lebesgue measure on the Borel sets of [0,1]. Let $f: [0,1] \rightarrow \mathbb{R}$ be continuous. I know that the Riemann integral $I:=\int_{0}^{1} f(x)dx$ exists. I also know ... 2answers 113 views ### Cantor ternary set problem Let C be a cantor ternary set If $x,y \in C,$ then obviously $x-y \in [-1,1]$ Conversely I want to prove that if $w \in [-1,1],$ then there exists $x,y \in C$ such that $x-y=w$ How to prove this ... 1answer 80 views ### Relation among $L^{p}(\mathbb{R}^d)$? Let $L^{p}(\mathbb{R}^d)$ be the linear space consists of $L^p$-integrable functions on $\mathbb{R}^d$ for $1\le p \le \infty$. Are there any relation among these spaces? 2answers 87 views ### Equicontinuous, differentiable continuous problem Assume that each of {$f_n : [0, 1] \rightarrow R$} is continuously differentiable I know that if {$f_n'$} is uniformly bounded, {$f_n$} is equicontinuous. However, the converse is NOT true. I want ... 1answer 61 views ### Superposition operator in Sobolev spaces While working on an elliptic problem in $\mathbb{R}^N$, I met an issue that I cannot work out clearly. Assume that we have a continuous function $g \colon \mathbb{R} \to \mathbb{R}$ such that ... 1answer 112 views ### Compact set in all $L_p$, $1\leq p<\infty$ Suppose $X\subseteq L_\infty$ is a compact subset of $L_p$ for all $1\leq p<\infty$. Does this mean that for every $\epsilon>0$ there exists a measurable set $E\subseteq [0,1]$ with ... 0answers 120 views ### Lebesgue Integration fundamental questions My question involves the definition of the Lebesgue integral. Most colloquial definitions I've read follow (2), in that f*(t) is the "length" of one of the horizontal rectangles and dt is the ... 0answers 186 views ### Extended Riemann integrability of a non-negative function implies Lebesgue integrability? Let $f$ be a bounded function on a finite interval $[a, b]$ of the real line. 
If $f$ is Riemann integrable, we denote its Riemann integral by $\mathcal{R}(f , [a, b])$. It is well known that $f$ is ... 1answer 158 views ### Yet another definition of Lebesgue integral Let $[a, b]$ be a finite interval of the real line. A partition $P$ of $[a, b]$ is a finite sequence of numbers of the form $a = t_0 < t_1 <\cdots < t_{k-1} < t_k = b$ Let $(X, \mu)$ be ... 1answer 158 views ### Another definition of Lebesgue integral Let $(X, \mu)$ be a measure space. Let $X = A_1 \cup\cdots\cup A_k (A_i \cap A_j = \emptyset$ for $i \neq j)$, where each $A_i$ is measurable. We say $\pi = \{A_1,\dots,A_k\}$ is a finite measurable ... 2answers 115 views ### Lebesgue generalizations of Hilbert spaces? Is an L[p] space a generalization of Hilbert spaces using Lebesgue integration? And if this is the case, is it true that Holder's and Minkowski's Inequalities are generalizations of the ... 2answers 63 views ### Lebesgue integration for $u \in C^{\infty}_c$ Let $u \in C^{\infty}_c(\Bbb{R}^d)$, where $C^{\infty}_c(\Bbb{R}^d)$ is the family of infintly differentiable functions with a compact support. Is $u$ in $L^2(\Bbb{R}^d)$? I think that $u$ is in ... 2answers 235 views ### Application of Radon Nikodym Theorem on Absolutely Continuous Measures I have the following problem: Show $\beta \ll \eta$ if and only if for every $\epsilon > 0$ there exists a $\delta>0$ such that $\eta(E)<\delta$ implies $\beta(E)<\epsilon$. For the ... 1answer 118 views ### Does $f_n \to 0$ in $L^1(\mathbb R^2)$ imply that $f_{n_k}(x,\cdot)\to 0$ in $L^1(\mathbb R)$ for almost every $x \in \mathbb R$? I would like to know what you think about this question. It is a "self-posed" question: I formulated it while I was doing an exercise. Suppose you have \$(f_n)_{n\ \in \mathbb N}\subset ... 2answers 91 views ### Convergence in $L^1$ problem. Problem: Let $f \in L^1(\mathbb{R},~\mu)$, where $\mu$ is the Lebesgue measure. For any $h \in \mathbb{R}$, define $f_h : \mathbb{R} \rightarrow \mathbb{R}$ by $f_h(x) = f(x - h)$. Prove that: ... 1answer 192 views ### A question about integral operator I have a question: Prove or disprove that: for every $f\in L^{1}\left(\mathbb{R}\right)$, \sup\left\{ { ... 2answers 290 views ### Derivative of step functions I was reading up on the Lebesgue integral and how it is computed. And since it is a generalization of the Riemann integral in a more theoretic framework, the same fundamental principle holds, only for ... 0answers 153 views ### Lebesgue Integration of Measurable Function Can I ask a homework question here? Let $f$ be measurable and nonnegative in $\mathbb{R}^n$ Define a radial function $f^*(|x|)=\inf\{t:\lambda(\{x:f(x)>t\})\leq|x|\}$. Show that ... 3answers 145 views ### Lebesgue Integration problem Can I ask a homework question here? Assume that $f \in L^q(\mathbb R^d)$ for some $q < \infty$ . show that $\mathrm{lim}_{p \to \infty}||f||_p = ||f||_{\infty}$ $p$ conjugate of $q$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 192, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9193135499954224, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/59732/brownian-motion-characteristic-function?answertab=oldest
# Brownian motion - characteristic function Let me first recall the construction of Brownian motion. Fix a vector $x \in \mathbb{R}^n$ and define $p(t,x,y) := (2\pi t )^{-\frac{n}{2}} \cdot \exp{\left( - \frac{|x-y|^2}{2t} \right)},$ for $y \in \mathbb{R}^n$, $t >0$. Then for $0 \leq t_1 \leq t_2 \leq \ldots \leq t_k$ we define a measure $\nu_{t_1, \ldots, t_k}$ on $\mathbb{R}^{nk}$ by $$\nu_{t_1, \ldots, t_k}(F_1 \times \ldots \times F_k)= \int \limits_{F_1 \times \ldots \times F_k}^{} p(t_1, x, x_1)\prod_{j=1}^{k-1}p(t_{j+1}-t_j,x_{j}, x_{j+1}) dx_1\ldots dx_k,$$ where the following conventions are used: $dy=dy_1\ldots dy_k$ for Lebesgue measure and $p(0, x, y)dy=\delta_x(y)$ (the unit point mass at $x$). By Kolmogorov's extension theorem applied to the probability measures $\nu_{t_1, \ldots, t_k}$ (which easily satisfy all the assumptions of that theorem) there exist a probability space $(\Omega, \mathcal{F}, P^x)$ and a stochastic process $\{B_t\}_{t \geq 0}$ on $\Omega$ such that the finite-dimensional distributions of $B_t$ are given by $$(\star) \ \ P^x(B_{t_1} \in F_1, \ldots, B_{t_k} \in F_k)= \int \limits_{F_1 \times \ldots \times F_k}^{} p(t_1, x, x_1)\prod_{j=1}^{k-1}p(t_{j+1}-t_j,x_{j}, x_{j+1}) dx_1\ldots dx_k.$$ Problem I want to show that for the random variable $Z = (B_{t_1}, \ldots, B_{t_k}) \in \mathbb{R}^{nk}$ there exist a vector $M \in \mathbb{R}^{nk}$ and a non-negative definite matrix $C \in \text{M}_{nk \times nk}(\mathbb{R})$ such that $$E^x\left[\exp\left(i\left<u, Z \right> \right)\right]= \exp\left( -\frac{1}{2} \left<Cu, u\right> + i \left<u, M\right> \right),$$ for all $u=(u_1, \ldots, u_{nk}) \in \mathbb{R}^{nk}$ (the left-hand side stands for the characteristic function of the random variable $Z$). Moreover, $M$ is the mean value of $Z$ and $C$ is the covariance matrix of $Z$. I was trying to calculate it explicitly by writing out the left-hand side using its definition and applying the ($\star$) formula, but the integral I get is not so nice and I cannot conclude what I want. This exercise is nothing else but showing that $B_t$ is a Gaussian process (so it is not hard to guess what $M$ and $C$ should be in this case). I hope you know some tricks for computing such an integral in an easy way. Thanks in advance for any help. - I think it suffices to show that the integral will indeed be of the form $\exp( -\frac{1}{2} \langle C u, u \rangle + i \langle u, M \rangle)$. It would then follow that $C$ and $M$ are the covariance and mean. This is a trivial exercise in integration of a Gaussian. To carry it out, write $\langle u, Z \rangle = \sum_{i=1}^n u_i \sum_{k=1}^i X_k$, where the $X_k$ are independent normal variates corresponding to the increments $B_{t_k}-B_{t_{k-1}}$. It would then follow that the c.f. is a product of exponentials of quadratics. – Sasha Aug 25 '11 at 16:17 If I show that the characteristic function of $Z$ is of the above form, it implies that $B_t$ is a Gaussian process. By using the form of $M$ and $C$ we easily get that $E^x(B_t)=x$ and $E^x((B_t-x)(B_s-x))=n \min\{s,t\}$, which gives us that $B_t$ has independent increments (equivalently, for the normal distribution uncorrelated variables are independent). I think that your hint already uses the facts that $B_t$ is normally distributed and has independent increments, which are further consequences of the fact I want to show. Am I right? – Franz Aug 25 '11 at 17:27 I mean consider integration of $\exp( i \sum_{k=1}^n u_k t_k )$.
Because the differences $t_i-t_{i-1}$ occur naturally in your measure, change variables by letting $t_i = \sum_{m=1}^i s_m$. Your measure then becomes a product of independent Gaussians, which you can integrate. Integration w.r.t. each $s_m$ will produce $e^{\text{quadratic in } u}$, hence their product will again be the exponential of a quadratic in $u$. – Sasha Aug 25 '11 at 17:58 Ok, I understand. Probably you had some notation issue above. We should denote $u_j := (u_1^j, \ldots, u_n^j) \in \mathbb{R}^n$ and now $$E(\exp(i \left< u, Z\right>)) = \int_{\mathbb{R}^{nk}} \exp\Big(i \sum_{j=1}^k \left< u_j, x_j\right>\Big) p(t_1, x, x_1) \prod_{j=1}^{k-1}p(t_{j+1}-t_j, x_j, x_{j+1})\,dx_1 \ldots dx_k.$$ Now we should substitute $x_j = \sum_{m=1}^{j}\hat{x}_m$, and with this substitution we can easily separate the integrals. I think you can write this as the answer. – Franz Aug 25 '11 at 20:46 ## 1 Answer Denote $t_0 = 0$ and $x_0 = 0$ for notational convenience, and write $p(t, s)$ for the centred density $p(t, 0, s)$. Then $$\mathbb{E}\left( \exp(i \langle u, Z \rangle) \right) = \int_{\mathbb{R}^{n k}} \exp( i \langle u, x \rangle ) \prod_{j=1}^k p(t_{j}-t_{j-1}, x_{j} - x_{j-1}) \,\mathrm{d}x_1 \cdots \mathrm{d} x_k$$ Now change variables $x_k = \sum_{i=1}^k s_i$. This has unit Jacobian, and you get $$\mathbb{E}\left( \exp(i \langle u, Z \rangle) \right) = \int_{\mathbb{R}^{n k}} \prod_{j=1}^k \exp\Big( i \, \big\langle s_j, \sum_{m=j}^k u_m \big\rangle \Big) \prod_{j=1}^k p(t_{j}-t_{j-1}, s_j) \,\mathrm{d}s_1 \cdots \mathrm{d} s_k$$ Now the integration with respect to each $s_j$ can be carried out independently and will produce $\exp( Q_j(u))$, where $Q_j$ is a quadratic multivariate polynomial in $u$. Hence the characteristic function of the $k$-dimensional time-slice distribution of this Brownian motion process is going to be $\exp(Q(u))$. Now $Q(u) = -\frac{1}{2}\langle C u, u \rangle + i \langle u, M \rangle$, where $C$ is the covariance matrix and $M$ is the mean vector. The constant term is missing because the c.f. equals one at the zero vector $u$. -
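For completeness, here is the single Gaussian integral behind that last step, written out under the convention above (my addition, not part of the original answer): each factor is the characteristic function of an $n$-dimensional centred Gaussian with covariance $(t_j - t_{j-1}) I$, with $a = \sum_{m=j}^k u_m$.

```latex
\int_{\mathbb{R}^n} e^{\,i \langle a, s \rangle}\,
    \bigl(2\pi (t_j - t_{j-1})\bigr)^{-n/2}
    \exp\!\Bigl( -\tfrac{|s|^2}{2 (t_j - t_{j-1})} \Bigr)\, \mathrm{d}s
  \;=\; \exp\!\Bigl( -\tfrac{t_j - t_{j-1}}{2}\, |a|^2 \Bigr)
% so Q_j(u) = -(1/2)(t_j - t_{j-1}) | sum_{m=j}^k u_m |^2, which is indeed quadratic in u.
```

Starting the chain at $x$ instead of $0$ shifts each $x_j$ by $x$, and this shift is what produces the $i\langle u, M\rangle$ term with $M = (x, \ldots, x)$.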
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463430047035217, "perplexity_flag": "head"}
http://www.speedylook.com/Speed_of_light.html
# Speed of light The speed of light (fixed at 299,792,458 m/s in 1983 by the International Bureau of Weights and Measures, by redefining the metre) is a physical constant whose precise value has been obtained by experiments since the 17th century, beginning with the Danish astronomer Ole Christensen Rømer: in 1676 he proposed a solution to a problem encountered by Cassini, who had observed a delay of fifteen minutes in the predicted occultation of Io, a satellite of Jupiter. Rømer attributed this delay to the lengthening of the Earth-Io distance by about the diameter of the terrestrial orbit, a distance large enough to influence the light's travel time significantly. The speed of light was then estimated at about 200,000 kilometres per second, approximately 35% below its true value, because of the uncertainties of the time about the size of the Earth's orbit. However, Cassini expressed doubts about the validity of his colleague's results. James Bradley later proposed an estimate of about 300,000 km/s (the Fizeau experiment was the first non-astronomical measurement and gave a result of the same order: 315,000 km/s). These first experimental determinations moreover rested on the standard metre, an additional source of error. Later, the problem was reversed when the metre was defined in terms of the celerity c (the speed of light in the vacuum), which only became possible once sufficient precision in the determination of c had been reached. Today, the speed of light is one of the pillars of contemporary theoretical physics. ## Speed of light in the vacuum According to the theories of modern physics, and in particular Maxwell's equations, visible light and all electromagnetic waves have a constant speed in the vacuum, the speed of light. It is thus regarded as a physical constant, denoted c (from the Latin celeritas, "speed"). But it is not only (as far as we know) constant at all places and all ages of the universe (the weak and strong cosmological principles, respectively); it is also the same from one inertial reference frame to another (the principle of special relativity). In other words: whatever the inertial reference frame of an observer or the speed of the object emitting the light, every observer will obtain the same measurement. Within the framework of existing theories, no material object and no signal can travel faster than c. Only virtual figures can "travel" faster than c (at so-called superluminal speed), for example the shadow cast at a great distance by a rotating object, and one cannot, of course, make use of this to transmit a signal or energy; these are not even objects, strictly speaking. Alain Aspect's experiment shows that an observer can be informed instantaneously, by a measurement on a nearby particle, of the state of a distant particle, but here too there is no real transmission of a signal. The speed of light in the vacuum is denoted c: c = 299,792,458 metres per second. This value is "exact" by definition. Indeed, since 1983 the metre has been defined in the International System of Units from the speed of light in the vacuum, as the length of the path travelled by light in the vacuum during 1/299,792,458 of a second. As a result, the metre is today defined via the second, through the fixed value assigned to the speed of light.
## Interaction of light with matter • The speed of light is always lower than c in a medium containing matter, and all the more so as the matter is denser; • In a medium known as birefringent, the speed of light also depends on its plane of polarization; • The difference in the propagation speed of light in different media is at the origin of the phenomenon of refraction. However, "the speed of light", without further qualification, is generally understood to mean the speed of light in the vacuum. Note that while no object can exceed the speed of light in the vacuum, whatever medium it is in, exceeding the speed of light in that same medium is possible: for example, in water neutrinos can go faster than light. ## Why is no higher speed possible? The speed of light is not a speed limit in the conventional sense. We are accustomed to adding speeds; for example, we consider it normal that two cars travelling at 60 kilometres per hour in opposite directions see each other approaching at 60 km/h + 60 km/h = 120 km/h. And this approximate formula is perfectly legitimate for speeds of this kind (60 km/h = 16.67 m/s). But when one of the speeds is close to that of light, such a traditional calculation deviates too much from the observed results; indeed, from the end of the 19th century, various experiments (in particular Michelson's) and observations revealed a speed of light in the vacuum that is identical in all inertial reference frames. Minkowski, Lorentz, Poincaré and Einstein introduced this question into the Galilean theory, and recognized the need to replace an implicit and inaccurate principle by another compatible with the observations: • it was necessary to give up the additivity of speeds (assumed by Galileo without proof) for light; • and to introduce a new concept, the constancy of c (noted by experiment). After working through the calculations, it turned out that the new composition formula involves a corrective factor 1/(1 + vw/c²), which differs from 1 by only about 2.7×10⁻¹⁰ at the speed of sound. The effect becomes more visible when speeds exceed c/10, and spectacular as v/c approaches 1: two spaceships travelling towards each other at a speed of 0.8c (relative to an observer between the two) will not perceive a speed of approach (or relative speed) equal to 1.6c, but actually only about 0.98c (see the table opposite). This result is given by the Lorentz transformation: $u = \frac{v + w}{1 + vw/c^2}$ where $v$ and $w$ are the speeds of the spaceships, and $u$ is the speed of one vessel as perceived from the other. Thus, whatever the speed at which one object moves relative to another, each will measure the received light pulse as having the same speed: the speed of light; on the other hand, the frequency of electromagnetic radiation transmitted between two objects in relative motion will be modified by the Doppler effect. Albert Einstein unified the work of his three colleagues into a coherent theory of relativity, applying these strange consequences to classical mechanics. The experimental confirmations of the theory of relativity were forthcoming, to within the precision of the measurements of the time.
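A small numerical illustration of the composition formula above (my own sketch, not part of the original article): it reproduces the two-spaceship example and shows how negligible the correction is at everyday speeds.

```python
C = 299_792_458.0  # speed of light in m/s

def compose_speeds(v, w):
    """Relativistic velocity composition u = (v + w) / (1 + v*w/c^2); inputs in m/s."""
    return (v + w) / (1.0 + v * w / C ** 2)

# two spaceships approaching each other, each at 0.8 c relative to a middle observer
print(compose_speeds(0.8 * C, 0.8 * C) / C)   # ~0.9756, i.e. about 0.98 c rather than 1.6 c

# two cars at 60 km/h (16.67 m/s): the relativistic correction is utterly negligible
v = 60 / 3.6
print(compose_speeds(v, v) * 3.6)             # effectively 120 km/h
```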
Within the framework of the theory of relativity, particles are classified into three groups: • bradyons, particles with real, positive rest mass, which move at speeds lower than c; • luxons, particles of zero rest mass, which move only at the speed c in the vacuum; • tachyons, hypothetical particles of imaginary rest mass, which would move only at speeds greater than c; the majority of physicists consider that these particles do not exist (for reasons of causality), although the question is still not closed. The rest mass, combined with the multiplicative factor $\gamma = 1/\sqrt{1 - v^2/c^2}$, gives a real energy for each of the groups defined above.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277573823928833, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/tagged/interest-rates+distribution
# Tagged Questions 5answers 704 views ### What distribution to assume for interest rates? I am writing a paper with a case study in financial maths. I need to model an interest rate $(I_n)_{n\geq 0}$ as a sequence of non-negative i.i.d. random variables. Which distribution would you advise ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413902163505554, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38280/finding-an-optimal-monotone-function/38313
## Finding an optimal monotone function? Let $f$ and $g$ be two discrete signals. I want to find a monotone function $h$ such that $h=\operatorname{argmin}_{h}\sum_{n\in[0,N]}{(f(n)-h(g(n)))^2}$ I don't really care about finding the global optimum, I just want a good fit. What would be a good representation of $f$ to achieve that? Thanks! - ## 2 Answers Here is another try: Assume w.l.o.g. that the values of $g(n)$ are in increasing order, i.e. $g(0) \le g(1) \le \cdots\le g(N)$. Moreover, assume the first $n_1$ values in that sequence are equal, then the next $n_2$ values are equal, etc., and that there are $m$ distinct values, i.e. $$g(0)=g(1)=\cdots=g(n_1-1) < g(n_1)=\cdots=g(n_1+n_2-1) < \cdots < g(n_1+\cdots+n_{m-1}) = \cdots = g(n_1+\cdots+n_m-1).$$ We have to decide the following $m$ values of $h$: $$h_k = h\big(g(n_1+\cdots+n_{k-1})\big), \quad k=1,2,\ldots, m.$$ Say we are looking for an increasing $h$ (you can look for a decreasing $h$ in a similar way and take the best of the two solutions). Denote by $H_r(t)$ the value of the solution of the problem limited to the first $r$ groups, where $t$ is an additional upper bound on the values taken by $h$, i.e. $$H_r(t) = \min_{h_1 \le h_2 \le \cdots \le h_r \le t} \quad \sum_{k=1}^r \quad \sum_{i=n_1+\cdots+n_{k-1}}^{n_1+\cdots+n_k-1} (f(i) - h_k)^2$$ When $r=1$, the value of $H_1(t)$ can be easily computed: it is easy to check that the optimal $h_1$ is equal to the average $\bar{f_1}=\frac{1}{n_1}(f(0)+f(1)+ \ldots+ f(n_1-1))$ if it is less than $t$, and to $t$ otherwise, and that $$H_1(t)=\mathrm{constant} + n_1 (\min (t-\bar{f_1},0))^2 .$$ From that we can successively compute the values of $H_2(t)$, $H_3(t)$, etc. and end up with the final solution when $r=m$, using the recurrence $$H_{r}(t) = \min_{s \le t}\left[ H_{r-1}(s) + \mathrm{constant} + n_r(s-\bar{f_r})^2\right]$$ (this is similar to dynamic programming). - Very neat! Thanks a lot. – Grönwall Sep 14 2010 at 11:33 Edit: the answer below ignores the monotonicity constraint, and refers to a previous version of the problem with $g(h(n))$ instead of $h(g(n))$. Your problem is completely separable: for each $k$, choose the value of $h(k)$ such that $g(h(k))$ is as close as possible to $f(k)$, a decision that is independent of the values of $h(n)$ for all other $n \neq k$. Each of these choices is made by simple inspection of the possible values for $g$. - 1 Did you read that Grönwall is looking for a monotone function h? – Someone Sep 10 2010 at 12:14 As if there existed any monotone functions from $[1,N]$ to $[1,N]$ other than $x$ and $N+1-x$. Certainly, something else was meant (say, $g(h)$ should really be $h(g)$), but I'm too lazy to strain my mind-reading abilities. – fedja Sep 10 2010 at 12:34 Sorry about the typo. – Grönwall Sep 10 2010 at 12:48
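For readers who want something runnable: the nondecreasing case of this problem is classical isotonic regression of the values $f(n)$ against the sorted values of $g(n)$, and the standard pool-adjacent-violators algorithm finds the global optimum directly. The sketch below is my own (it is not the dynamic program described in the answer), handles ties in $g$ by simple grouping, and uses made-up helper names.

```python
from collections import defaultdict

def monotone_fit(g, f):
    """Least-squares nondecreasing fit of f against g via pool adjacent violators."""
    # group equal g-values: each distinct value gets the mean of its f's and a weight
    groups = defaultdict(list)
    for gi, fi in zip(g, f):
        groups[gi].append(fi)
    keys = sorted(groups)
    means = [sum(groups[k]) / len(groups[k]) for k in keys]
    weights = [float(len(groups[k])) for k in keys]

    # pool adjacent violators: merge neighbouring blocks while a decrease remains
    vals, wts, counts = [], [], []
    for m, w in zip(means, weights):
        vals.append(m); wts.append(w); counts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w_new = wts[-2] + wts[-1]
            v_new = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w_new
            c_new = counts[-2] + counts[-1]
            vals[-2:], wts[-2:], counts[-2:] = [v_new], [w_new], [c_new]

    # one fitted value per distinct g-value, returned as a lookup table
    fit, i = {}, 0
    for v, c in zip(vals, counts):
        for k in keys[i:i + c]:
            fit[k] = v
        i += c
    return fit  # h can be any monotone interpolation of these (g-value, fitted-value) pairs
```

For a nonincreasing fit one can run the same routine on $(g, -f)$ and negate the result, then keep whichever of the two fits has the smaller squared error.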
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272597432136536, "perplexity_flag": "head"}
http://mathoverflow.net/questions/83313/nonnegative-additive-functions-on-coherent-sheaves
## Nonnegative additive functions on coherent sheaves Let $(X,\mathcal{O}_X)$ be a Noetherian integral scheme and let $g$ be a (numerical) additive nonnegative function from coherent $\mathcal{O}_X$-modules to $[0,\infty)$. This question may be well known to the experts but I couldn't find a reference: is $g$ a constant multiple of the generic rank? If true, do you know of any reference for this? Notes: 1. If $X=\mathrm{Spec}\:R$ is affine with $R$ an integral domain, then a proof can be found in Northcott-Reufel, Theorem 2, p. 303. There are also other proofs. 2. If $X$ is a projective variety over a field, I think I can prove it, but I don't know any reference for this case. I have a feeling this question must have been answered in K-theory. - Basically you ask for homomorphisms $G_0(X) \to \mathbb{R}_+$. – Martin Brandenburg Dec 13 2011 at 8:48 4 To Martin: no, he is asking for homomorphisms $G_0(X)\to\mathbb{R}$ that are non-negative on classes of coherent sheaves. – Angelo Dec 13 2011 at 9:40 ## 1 Answer I suppose that "additive" means "additive over short exact sequences". If so, this does not seem too hard, at least if $X$ is separated. By Noetherian induction, you may assume that for every proper integral subscheme $Y$ of $X$, the restriction of $g$ to $Y$ is given by a multiple of the generic rank at $Y$. But every coherent sheaf with support on $Y$ can be obtained as a successive extension of coherent sheaves of $\mathcal O_Y$-modules; hence the restriction of $g$ to sheaves supported on $Y$ is given by a multiple of the length of the stalk at the generic point of $Y$. On the other hand, for each $n > 0$ denote by $Y_n$ the subscheme defined by the $n^{\mathrm th}$ power of the sheaf of ideals of $Y$; the length of the stalk of $\mathcal O_{Y_n}$ at the generic point of $Y$ is unbounded, but by the positivity of $g$ the value of $g(\mathcal O_{Y_n})$ is bounded by $g(\mathcal O_X)$. Hence this multiple is 0, and $g$ is 0 on all torsion sheaves. In particular, if $F \to G$ is a generic isomorphism of coherent sheaves, $g(F) = g(G)$. But on the other hand, if $F$ is a coherent sheaf of generic rank $r$, there exist homomorphisms $F \to G$ and $\mathcal O_X^r \to G$ of coherent sheaves that are generic isomorphisms; hence $g(F) = r\,g(\mathcal O_X)$. The conclusion follows. - This is great, thank you! – Mahdi Majidi-Zolbanin Dec 13 2011 at 14:01 2 Perhaps in line 9, you meant "is bounded by $g(\mathcal O_X)$" instead of $\mathcal O_Y$. – Hailong Dao Dec 13 2011 at 16:32 To Hailong: yes, thank you. I edited the post. – Angelo Dec 13 2011 at 18:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916340708732605, "perplexity_flag": "head"}
http://mathoverflow.net/questions/71794?sort=votes
## Where are $+$, $-$ and $\infty$ in bordered Heegaard-Floer theory? Here goes my first MO-question. I've just read Lipshitz, Ozsváth and Thurston's recently updated "A tour of bordered Floer theory". To set the stage let me give two quotes from this paper. Heegaard Floer homology has several variants; the technically simplest is $\widehat{HF}$, which is sufficient for most of the 3-dimensional applications discussed above. Bordered Heegaard Floer homology, the focus of this paper, is an extension of $\widehat{HF}$ to 3-manifolds with boundary. [...] the Heegaard Floer package contains enough information to detect exotic smooth structures on 4-manifolds. For closed 4-manifolds, this information is contained in $HF^+$ and $HF^-$; the weaker invariant $\widehat{HF}$ is not useful for distinguishing smooth structures on closed 4-manifolds. Since I am mainly interested in closed 4-manifolds, I have not paid too much attention to the developments in bordered Heegaard-Floer theory. But right from the beginning I have wondered why only $\widehat{HF}$ appears in the bordered context. So my question is: Why are there no $^+$, $^-$ or $^\infty$ flavors of bordered Heegaard-Floer theory? Are the reasons of a technical nature, or is there an explanation of why the theory cannot give more than $\widehat{HF}$? I assume there are issues with the moduli spaces of holomorphic curves that would be relevant to defining bordered versions of the other flavors of Heegaard-Floer theory, but I am neither enough of an expert on holomorphic curves to immediately see the problems, nor could I find anything in the literature that pins them down. Any information is very much appreciated. - ## 1 Answer A biased answer, based on Auroux's work http://arxiv.org/abs/1003.2962. Auroux makes a connection between bordered Floer theory and an alternative approach, due to Lekili and myself, which is (still) under development, but which should include the $\pm$ and $\infty$ versions. We do have a preliminary paper out: http://arxiv.org/abs/1102.3160. A general set-up: Say you have a compact symplectic manifold $(X,\omega_X)$; and a codim 2 symplectic submanifold $D$, whose complement $M$ is exact: ${\omega_X}|_M=d\theta$, say. Key example: $X=Sym^g(F)$, where $F$ is a compact surface of genus $g$, and $\omega_X$ a suitable Kaehler form; $M=Sym^g(F-z)$, where $z\in F$. Forms of Floer cohomology: There are various forms of Floer cohomology one can consider. (i) As in $\widehat{HF}$ Heegaard theory, one can consider $HF^\ast_M(L_0,L_1)$, the Floer cohomology in $M$ of a pair of (exact) compact Lagrangian submanifolds of $M$. When $L_0$ and $L_1$ are spin, this can be defined as a $\mathbb{Z}$-module. (ii) As in $HF^-$ Heegaard theory, one can consider the filtered Floer cohomology $HF^\ast_{X,D}(L_0,L_1)$ of a pair of compact Lagrangians $L_i\subset M$ as before. The coefficients are in $\mathbb{Z}[[U]]$. The differential counts holomorphic bigons in $X$, weighted by $U^n$ where $n$ is the intersection number with $D$. (iii) One can consider non-compact Lagrangians $L_i\subset M$ which go to infinity nicely (following the Liouville flow). These have wrapped Floer cohomology $HW^\ast(L_0,L_1)$, as well as "partially wrapped" variants. Wrapping concerns how one chooses to perturb $L_0$ at infinity. This version takes place in $M$, and (AFAIK) can't naturally be extended to something that takes place in $X$.
Invariants for 3-manifolds with boundary. A basic idea is that a 3-manifold $Y$ bounding $F$ should define a (generalized) Lagrangian submanifold $L_Y$ where $X=Sym^{g(F)}F$, as in the "key example" above. The collection of filtered Floer modules $HF^*_{X,D}(\Lambda, L_Y)$ as $\Lambda$ ranges over Lagrangian submanifolds of $M$ (more precisely, the module, over the compact filtered Fukaya category of $(X,D)$, defined by $L_Y$) should be an invariant of $Y$. If one is interested only in the simpler groups $HF^*_M(\Lambda,L_Y)$, one can (in principle) determine these by looking at the finite collection of (partially wrapped) groups $HW^*(W_i,L_Y)$, where $W_i$ ranges over the thimbles for a certain Lefschetz fibration $M\to \mathbb{C}$. That is, one thinks of $L_Y$ as defining a module over the algebra $A_{LOT}$ formed by the sum of groups $HW^*(W_i,W_j)$. This follows from a deep theorem of Seidel about generating Fukaya categories by thimbles, adapted by Auroux. The algebra $A_{LOT}$ is (part of) what Lipshitz-Ozsvath-Thurston assign to a parametrized surface, and the module is what they call $\widehat{CFA}(Y)$. They arrived at it by a quite different route. They don't bother with constructing $L_Y$ itself, only the module it defines. Because they use the groups of type (iii) to form their algebra, their approach only works in $M$, not $X$. For that reason, they only capture the hat-theory. The great advantage of LOT's approach is its finiteness and computability. Lekili and I do construct $L_Y$. We can guess at finite collections of "test Lagrangians" sufficient to compute the module $HF^*_{X,D}(\cdot, L_Y)$, but have not yet proved that they are sufficient. - Dear Tim. Thank you very much for your informative answer. Just to be sure that I understand correctly, are you suggesting that the LOT approach can indeed only recover the hat-theory? – Stefan Behrens Aug 2 2011 at 11:31 Stephan, it's usually unwise to say in absolute terms that X can't be approached by method Y. That is my suggestion, but Robert, Dylan and Peter, or somebody else, may prove me wrong! – Tim Perutz Aug 2 2011 at 13:09 You're right, maybe a jumped the gun a little. Come to think of it, I'm actually not even sure what "the LOT approach" is since I don't know enough about the motivation behind their constructions. – Stefan Behrens Aug 3 2011 at 13:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937183678150177, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Frequency_spectrum
# Frequency spectrum The frequency spectrum of a time-domain signal is a representation of that signal in the frequency domain. The frequency spectrum can be generated via a Fourier transform of the signal, and the resulting values are usually presented as amplitude and phase, both plotted versus frequency.[1] Any signal that can be represented as an amplitude that varies with time has a corresponding frequency spectrum. This includes familiar concepts such as visible light (color), musical notes, radio/TV channels, and even the regular rotation of the earth. When these physical phenomena are represented in the form of a frequency spectrum, certain physical descriptions of their internal processes become much simpler. Often, the frequency spectrum clearly shows harmonics, visible as distinct spikes or lines, that provide insight into the mechanisms that generate the entire signal. ## Light A source of light can have many colors mixed and in different amounts (intensities). A rainbow, or prism, sends the different frequencies in different directions, making them individually visible at different angles. A graph of the intensity plotted against the frequency (showing the amount of each color) is the frequency spectrum of the light. When all the visible frequencies are present in equal amounts, the perceived color of the light is white, and the spectrum is a flat line. Therefore, flat-line spectrums in general are often referred to as white, whether they represent light or something else ## Sound Similarly, a source of sound can have many different frequencies mixed. A musical tone's timbre is characterized by its harmonic spectrum. Sound in our environment that we refer to as noise includes many different frequencies. When a sound signal contains a mixture of all audible frequencies, distributed equally over the audio spectrum, it is called white noise.[2] ### Physical acoustics of music Main article: musical acoustics Acoustic spectrogram of the note G played on a Piano. In this spectrogram, the vertical axis represents frequency linearly extending from 0 to 10 kHz, and the horizontal axis represents time over an interval of 1.5 seconds. Generated with Fatpigdog's PC based Real Time FFT Spectrum Analyzer. Click below to hear the G Piano Note: Sound spectrum is one of the determinants of the timbre or quality of a sound or note. It is the relative strength of pitches called harmonics and partials (collectively overtones) at various frequencies usually above the fundamental frequency, which is the actual note named (e.g. an A). Spectrum analyzer or Sonagraph The spectrum analyzer is an instrument which can be used to convert the sound wave of the musical note into a visual display of the constituent frequencies. This visual display is referred to as an acoustic spectrogram. Software based audio spectrum analyzers are available at low cost, providing easy access not only to industry professionals, but also to academics, students and the hobbyist. The acoustic spectrogram generated by the spectrum analyzer provides an acoustic signature of the musical note. In addition to revealing the fundamental frequency and its overtones, the spectrogram is also useful for analysis of the temporal attack, decay, sustain, and release of the musical note. ## Radio In radio and telecommunications, the frequency spectrum can be shared among many different broadcasters. Each broadcast radio and TV station transmits a wave on an assigned frequency range, called a channel. 
When many broadcasters are present, the radio spectrum consists of the sum of all the individual channels, each carrying separate information, spread across a wide frequency spectrum. Any particular radio receiver will detect a single function of amplitude (voltage) vs. time. The radio then uses a tuned circuit or tuner to select a single channel or frequency band and demodulate or decode the information from that broadcaster. If we made a graph of the strength of each channel vs. the frequency of the tuner, it would be the frequency spectrum of the antenna signal. ## Spectrum analysis Example of voice waveform and its frequency spectrum A triangle wave pictured in the time domain (top) and frequency domain (bottom). The fundamental frequency component is at 220 Hz (A2). Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities, or phases), versus frequency can be called spectrum analysis. Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as $sin (t)$) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis. The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates. A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum. In practice, nearly all software and electronic devices that generate frequency spectra apply a fast Fourier transform (FFT), which is a specific mathematical approximation to the full integral solution. Formally stated, the FFT is a method for computing the discrete Fourier transform of a sampled signal. Because of reversibility, the Fourier transform is called a representation of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, only non-linear or time-variant operations can create new frequencies in the frequency spectrum. The Fourier transform of a stochastic (random) waveform (noise) is also random. Some kind of averaging is required in order to create a clear picture of the underlying frequency content (frequency distribution). 
Typically, the data is divided into time-segments of a chosen duration, and transforms are performed on each one. Then the magnitude or (usually) squared-magnitude components of the transforms are summed into an average transform. This is a very common operation performed on digitally sampled time-domain data, using the discrete Fourier transform. This type of processing is called Welch's method. When the result is flat, it is commonly referred to as white noise. However, such processing techniques often reveal spectral content even among data which appears noisy in the time domain. ## References 1. Alexander, Charles; Sadiku, Matthew (2004). Fundamentals of Electric Circuits (Second ed.). McGraw-Hill. p. 761. ISBN 0-07-249350-X. "The frequency spectrum of a signal consists of the plots of the amplitudes and phases of the harmonics versus frequency." 2. "white noise definition". yourdictionary.com.
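To make the segment-and-average procedure described above concrete, here is a minimal Python sketch using NumPy and SciPy's implementation of Welch's method; the sampling rate, tone frequency, noise level, and segment length are illustrative choices, not values taken from the article.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)              # two seconds of samples
# A 220 Hz tone buried in white noise, loosely echoing the triangle-wave figure.
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.random.randn(t.size)

# Welch's method: split into segments, window, transform, average the squared magnitudes.
f, pxx = welch(x, fs=fs, nperseg=256)
print(f[np.argmax(pxx)])                   # the strongest peak should sit near 220 Hz
```

Longer segments resolve frequency more finely; shorter segments average over more pieces and give a smoother but coarser estimate of the power spectrum.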
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9193853139877319, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/35453/finding-all-cycles-of-a-certain-length-in-a-graph/35528
## Finding all cycles of a certain length in a graph

Hello, I'm looking for a formula or algorithm to find the number of cycles of a certain length $k$ in a graph. I know that $(A^k)_{ii}$ gives me the number of cycles from vertex $i$ to itself ($A$ is the adjacency matrix), but these are cycles that might contain the same vertex twice. I have tried to devise some sort of a recurrence formula but to no avail. Thanks! - Can't you do some inclusion/exclusion if you know every cycle length? – Per Alexandersson Aug 13 2010 at 11:58

## 3 Answers

Is your graph topologically planar or non-planar, weighted or unweighted, directed or undirected? Do you want an algorithm and/or a formula/bound? For bounds on planar graphs, see Alt et al., On the number of simple cycles in planar graphs. For an algorithm, see the following paper. It incrementally builds k-cycles from (k-1)-cycles and (k-1)-paths without going through the rigorous task of computing the cycle space for the entire graph. It also handles duplicate avoidance. • Hongbo Liu; Jiaxin Wang, "A new way to enumerate cycles in graph," Telecommunications, 2006. AICT-ICIW '06. International Conference on Internet and Web Applications and Services/Advanced International Conference on, vol., no., pp. 57-57, 19-25 Feb. 2006 doi: 10.1109/AICT-ICIW.2006.22 URL: http://www.ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1602189&isnumber=33674 -

If you just want a recursion then let $C_k$ be the number of cycles of length $k$. You have the identity $$(A^k)_{ii}=\sum_{r\geq 2}C_r (A^{k-r})_{ii}.$$ -

If you consider only simple cycles (every vertex visited at most once) then this problem is NP-complete, so no polynomial (in $|G|$ and $k$) algorithm is known. If non-polynomial algorithms are ok, you can use a dynamic programming algorithm with complexity $O(\sum_{i=0}^{k}\binom{n}{i}n^2)$. This algorithm calculates, for every subset $S$ of at most $k$ vertices and every vertex $v \in S$, the number of paths that go through all vertices from $S$ and have $v$ as the last vertex. - Is this algorithm better than enumerating the $k$-tuples of vertices and seeing if they are the vertices of a cycle in the given order? – damiano Aug 13 2010 at 9:18 The number of $k$-tuples with distinct elements is $\frac{n!}{(n-k)!}$ which for big $k$ is much more than the number of subsets with at most $k$ elements. For example if $n=k$ then this algorithm works in time $O(n^2 2^n)$ and the enumerating algorithm in time $O(n!)$. – falagar Aug 13 2010 at 9:36 I see where our discrepancies lie: I had interpreted the question as fixing a certain value of k, whereas you are not doing this. For a fixed value of k, enumerating *k*-tuples seems to have complexity $O(\binom{n}{k})=O(n^k)$. – damiano Aug 13 2010 at 9:40 Yes, if $k$ is fixed and we are interested only in how the time complexity depends on $n$ then both algorithms have the same complexity. They actually differ by a factor of $k!$. – falagar Aug 13 2010 at 9:50
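For a concrete starting point, here is a minimal Python sketch of the subset dynamic programming falagar describes, written from the description in that answer (not from the papers cited above); it assumes an undirected simple graph given as a 0/1 adjacency matrix and $k \ge 3$.

```python
def count_k_cycles(adj, k):
    """Count simple cycles of length k (k >= 3) in an undirected simple graph.

    adj: 0/1 adjacency matrix as a list of lists.
    paths[(S, v)] = number of simple paths that start at `start`, visit exactly
    the vertices in S, and end at v; `start` is forced to be the smallest vertex
    of the cycle so that no cycle is generated from two different roots.
    """
    n = len(adj)
    total = 0
    for start in range(n):
        paths = {(frozenset([start]), start): 1}
        for _ in range(k - 1):
            new_paths = {}
            for (S, v), cnt in paths.items():
                for w in range(start + 1, n):          # only vertices larger than start
                    if w not in S and adj[v][w]:
                        key = (S | {w}, w)
                        new_paths[key] = new_paths.get(key, 0) + cnt
            paths = new_paths
        # close the cycle back to the starting vertex
        total += sum(cnt for (S, v), cnt in paths.items() if adj[v][start])
    return total // 2   # each undirected cycle was traversed in both directions

# Example: the complete graph K4 has 4 triangles and 3 four-cycles.
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(count_k_cycles(K4, 3), count_k_cycles(K4, 4))
```

Anchoring each cycle at its smallest vertex and halving at the end keeps any simple cycle from being counted more than once, in the spirit of the subset-based complexity bound mentioned in the answer.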
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002171754837036, "perplexity_flag": "middle"}
http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Frame_of_reference
Frame of reference A frame of reference in physics is a set of axes which enable an observer to measure the aspect, position and motion of all points in a system relative to the reference frame. Two observers may choose to use different frames of reference to investigate a common system. This definition applies to "classical" physics, i.e. before the general theory of relativity. In relativity, it can happen that a spacetime cannot be covered by (described in terms of) a rigid reference frame, but gravity can cause distortions that vary from place to place. (see, for example [1] which demonstrates experimentally that the rotation of the Earth pulls inertial reference frames near it in a circular motion whose rotational speed must fall off at large distances. Simply put, a set of locally inertial reference frames at varying distances from the Earth's axis are twisting up kind of like molasses stirred by a central rotator.) The measurements that an observer makes about a system generally depend on the observer's frame of reference (examples are given below). However, the principle of relativity states that, even though a set of measurements may depend on an observer's particular frame of reference, the observed physical events must still follow the same physical laws in all inertial frames of reference. For example, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y-axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth's rotation and gravity). Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving - for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction. Now assume Candace is driving her car in the opposite direction. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same - in her frame of reference, a in the negative y-direction.
However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a' = a - A in the negative y-direction - a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a' = a + A in the negative y-direction - a larger value than Alfred's measurement. Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another reference frame. The speed of light is considered to be the only true constant between moving frames of reference. Nomenclature and notation When working a problem involving one or more frames of reference it is common to designate an inertial frame of reference. An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x' , y' , a' . The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r'. From the geometry of the situation, we get $\vec r = \vec R + \vec r'$ Taking the first and second derivatives of this, we obtain $\vec v = \vec V + \vec v'$ $\vec a = \vec A + \vec a'$ where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as $\vec F = m\vec a = m\vec A + m\vec a'$ When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in the tangential direction, the Coriolis effect). In actual fact the force exerted on the object that keeps the object's motion in sync with the rotating frame elicits the manifestation of inertia. If there is insufficient force to keep the object's motion in sync with the rotating frame, then seen from the perspective of the rotating frame there is an apparent acceleration. Whenever the manifestation of inertia appears to act as a force it is labeled as a fictitious force. Inertia is very much real, of course, but unlike force it never accelerates an object. A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried).
This arrangement leads to the equation $\vec a = \vec a' + \dot{\vec\omega} \times \vec r' + 2\vec\omega \times \vec v' + \vec\omega \times (\vec\omega \times \vec r') + \vec A_0$ Multiplying through by the mass m gives $\vec F' = \vec F_\mathrm{physical} + \vec F'_\mathrm{transverse} + \vec F'_\mathrm{coriolis} + \vec F'_\mathrm{centrifugal} - m\vec A_0$ where $\vec F'_\mathrm{transverse} = -m\dot{\vec\omega} \times \vec r'$ $\vec F'_\mathrm{coriolis} = -2m\vec\omega \times \vec v'$ (Coriolis force) $\vec F'_\mathrm{centrifugal} = -m\vec\omega \times (\vec\omega \times \vec r')=m(\omega^2 \vec r'- (\vec\omega \cdot \vec r')\vec\omega)$ (centrifugal force) Particular frames of reference in common use • International Terrestrial Reference Frame • International Celestial Reference Frame
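As a numerical illustration of the rotating-frame formulas above, the following Python sketch evaluates the three fictitious-force terms with NumPy cross products; the mass, angular velocity, position, and velocity are made-up example values, not quantities from the article.

```python
import numpy as np

m = 1.0                                  # mass in kg (illustrative)
omega = np.array([0.0, 0.0, 0.1])        # frame angular velocity, rad/s about z
omega_dot = np.array([0.0, 0.0, 0.0])    # rotation rate held constant
r_prime = np.array([2.0, 0.0, 0.0])      # position in the rotating frame, m
v_prime = np.array([0.0, 1.0, 0.0])      # velocity in the rotating frame, m/s

F_transverse  = -m * np.cross(omega_dot, r_prime)
F_coriolis    = -2 * m * np.cross(omega, v_prime)
F_centrifugal = -m * np.cross(omega, np.cross(omega, r_prime))

print("transverse :", F_transverse)      # zero here, since omega_dot = 0
print("Coriolis   :", F_coriolis)        # points along +x for this choice of v'
print("centrifugal:", F_centrifugal)     # points outward along +x, magnitude m*omega^2*r
```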
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276171326637268, "perplexity_flag": "middle"}
http://divisbyzero.com/2008/10/19/measuring-an-angle-with-a-ruler/?like=1&source=post_flair&_wpnonce=a34a077b61
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | October 19, 2008 ## Measuring an angle with a ruler In the September 2008 issue of the College Mathematics Journal Travis Kowalski presents a neat way to measure an angle using a ruler.  He attributes the discovery to a student of his, Tor Bertin. Given an acute angle $\alpha$ (the technique can be modified for obtuse angles), measure off a distance $s$ on each ray.  Then measure the distance between these two points, $b$. He claims that $\alpha$ is approximately $\displaystyle\frac{60b}{s}$ degrees. He illustrated this technique using $s=3$.  Some examples include: • if $\alpha=15^\circ$, then the approximation is $15.7^\circ$ • if $\alpha=45^\circ$, then the approximation is $45.9^\circ$ • if $\alpha=70^\circ$ then the approximation is $68.8^\circ$ • obviously, if $\alpha=60^\circ$, then the approximation is $60^\circ$. As another example, if we take $s$ to be 6 centimeters, then the measurement of $b$ in millimeters is the approximate number of degrees for $\alpha$. The derivation of this approximation is elementary.  Using trigonometry, it is easy to see that $\displaystyle\sin(\frac{\alpha}{2})=\frac{b}{2s}$.  Assuming sine takes angles in radians, but that $\alpha$ is measured in degrees, this becomes $\displaystyle\sin(\frac{\pi\alpha}{360})=\frac{b}{2s}$. Then the fact that $\sin\theta\approx\theta$ and $\pi\approx 3$ yields the desired result. The rest of the article is devoted to looking at whether 60 is the best constant to be used in this approximation formula.
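A quick way to check the claim numerically is to compute the chord $b = 2s\sin(\alpha/2)$ exactly and compare $60b/s$ with $\alpha$. The short Python sketch below reproduces the numbers quoted above (it uses $s=3$, although the estimate is actually independent of $s$).

```python
import math

s = 3.0  # length marked off on each ray, as in the examples above
for alpha in (15, 45, 60, 70):
    b = 2 * s * math.sin(math.radians(alpha) / 2)  # exact chord length
    estimate = 60 * b / s                          # the ruler approximation
    print(f"alpha = {alpha:2d} deg  ->  estimate = {estimate:.1f} deg")
```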
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8865861892700195, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=512188
## How to calculate Clebsch-Gordan coefficients Can anybody tell me how to calculate Clebsch-Gordan coefficients? I see a table given for the coefficients in some books (Griffiths p200), but it is not clear how to read the table. Any help would be appreciated. Quote by Avijeet: Can anybody tell me how to calculate Clebsch-Gordan coefficients? I see a table given for the coefficients in some books (Griffiths p200), but it is not clear how to read the table. Any help would be appreciated. Well, I never use the Clebsch-Gordan coefficients directly in calculations ... I use Wigner 3-j symbols instead (see http://en.wikipedia.org/wiki/3-jm_symbol). The CG coeffs are useful because they are directly related to the angular momentum coupling equations, but the Wigner symbols are much more intuitive to use in calculations, and have useful symmetry properties as well. I am not going to give a detailed re-hashing of the CG coeffs here ... the treatment in Griffiths is good .. you could also try Zare's "Angular Momentum", which gives a more thorough description IMO. If you have specific questions, please ask them .. in the meantime I can give the following descriptive summaries that may prove helpful. CG coeffs are the scalar products between states in the uncoupled and coupled representations for two angular momentum vectors j1 and j2. In the usual notation, $\langle j_1 m_1 j_2 m_2|j_1 j_2 J M\rangle$, the uncoupled representation is on the left, where the z-projections of the two angular momenta are considered separately. The coupled representation is on the right, where the two angular momenta are first added together to give total angular momentum (J), and its projection on the z-axis (M). Remember there are multiple ways that two angular momenta can be added ... that is the reason we need the CG coefficients in the first place. The formula for calculating a CG coefficient can be found here: http://en.wikipedia.org/wiki/Table_o...ts#Formulation.
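For readers who just want numbers, computer algebra systems implement both conventions. The sketch below uses SymPy (assuming a reasonably recent version, in which `sympy.physics.wigner` provides `clebsch_gordan` and `wigner_3j`) to evaluate one coefficient and check it against the corresponding 3-j symbol via the standard conversion factor $(-1)^{j_1-j_2+M}\sqrt{2J+1}$.

```python
from sympy import S, sqrt
from sympy.physics.wigner import clebsch_gordan, wigner_3j

# <j1 m1 j2 m2 | J M> for two spin-1/2 particles coupled to the stretched state J = M = 1
j1 = j2 = m1 = m2 = S(1) / 2
J = M = 1

cg = clebsch_gordan(j1, j2, J, m1, m2, M)           # Clebsch-Gordan coefficient
w3j = wigner_3j(j1, j2, J, m1, m2, -M)              # corresponding Wigner 3-j symbol

print(cg)                                            # 1 for the stretched state
print((-1)**(j1 - j2 + M) * sqrt(2 * J + 1) * w3j)   # same value recovered from the 3-j symbol
```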
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918878436088562, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/174677/possible-distance-b-w-points
# Possible distance b/w points I am stumped on the following question (at least a part of the question). The distance from town A to town B is five miles. Town C is six miles from B. Which of the following could be a distance from A to C? A) 11 b) 7 c) 1 The answer is all of them. I could only figure out 11. How did they get 7 and 1? -

## 4 Answers

Draw a picture. Say $A$ and $B$ live on the $x$-axis, with $B$ to the right of $A$. You noticed that if $C$ also lives on the $x$-axis, $6$ miles to the right of $B$, then $C$ will be $11$ miles from $A$. If $C$ lives on the $x$-axis, $6$ miles to the left of $B$, then $C$ will be $1$ mile from $A$. As for $7$, there certainly is a triangle $ABC$ with $AB=5$, $BC=6$, and $CA=7$. In general, if we are given three positive real numbers $a$, $b$, and $c$, and the sum of any two of $a$, $b$, and $c$ is greater than the third, then there is a triangle with sides $a$, $b$, and $c$. To think about it another way, draw a circle with centre $B$ and radius $6$. Draw a circle with centre $A$ and radius $7$. These two circles meet (in fact in two places). So there are two points $C$ which are distance $6$ from $B$ and distance $7$ from $A$. -

Getting 1 is easy: Say B is 5 miles directly east of A. Also say that C is 6 miles directly west of B. This makes C 1 mile directly west of A. Getting 7 is a bit trickier and requires some thought: We know that A is 5 miles away from B and that B is 6 miles away from C. If we were to make a right triangle with 5 on the bottom and 6 on the side, we would get a hypotenuse length of sqrt(61), which is greater than 7. Therefore, we know that the angle at B in triangle ABC is less than 90 degrees. We also know that there exists a triangle with sides 5, 6, and 7, and so we have our answer. -

The triangle inequality states that $AB\leq BC+AC$, $BC\leq AB+AC$ and $AC\leq BC+AB$. If $AC=7$, all three inequalities are strict and we get a proper triangle. If $AC=11$ then $AB+BC=AC$, which means B lies on the road between A and C. If $AC=1$, then $AB+AC=BC$, which means A lies on the road between B and C. The point is that two of the answers make all three towns collinear, while the other one makes a proper triangle with sides 5, 6, 7. -

You know two things: the line connecting $A$ and $B$ is five miles long, and the line between $B$ and $C$ is six miles long. You do not know where $C$ is relative to $B$. That means that $C$ must lie on a circle with a radius of 6 miles from $B$. If $A$ lies directly between $B$ and $C$, then what is the distance from $A$ to $C$? -
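All four answers boil down to the (non-strict) triangle inequality $|AB - BC| \le AC \le AB + BC$, with equality in the collinear cases. A two-line Python check, purely for illustration:

```python
AB, BC = 5, 6
for AC in (11, 7, 1):
    ok = abs(AB - BC) <= AC <= AB + BC   # degenerate (collinear) triangles allowed
    print(AC, "possible" if ok else "impossible")
```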
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9666137099266052, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/37569/do-atoms-expand-with-universe
# Do atoms expand with universe? [duplicate] Possible Duplicate: Why space expansion affects matter? Why does space expansion not expand matter? As we know, the universe is expanding, galaxies are away from each other. But what about atoms? Do they also in expanding? What's more, Bohr radius is $$a_0=\frac{\hbar}{m_e c \alpha}$$, if it is increasing, does it means $m_e$ is decreasing due to the density of Higgs field is getting thinner. or $c$ is decreasing or $\hbar$ is increasing? - – Qmechanic♦ Sep 16 '12 at 16:16 ## marked as duplicate by Qmechanic♦, Jerry Schirmer, David Zaslavsky♦Sep 16 '12 at 20:26 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 1 Answer No, the size of the atoms isn't changing as the Universe keeps on expanding. The Bohr radius will always be the same fraction of a nanometer or the same fraction of a wavelength of some light (of a certain spectral line). Because the Universe is expanding and the size is growing, it literally means that there's "more room" and one can squeeze an increasing number of atoms in the "same" volume, i.e. into the tetrahedron with vertices located at centers of 4 galaxies. -
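As a side note on the formula quoted in the question: plugging in rounded CODATA-style values for the constants recovers the familiar $a_0 \approx 0.53$ Å, and since (per the answer) none of these constants change as space expands, neither does the atom's size. A small Python sketch:

```python
hbar  = 1.054571817e-34   # reduced Planck constant, J s
m_e   = 9.1093837015e-31  # electron mass, kg
c     = 2.99792458e8      # speed of light, m/s
alpha = 7.2973525693e-3   # fine-structure constant

a0 = hbar / (m_e * c * alpha)
print(a0)                 # ~5.29e-11 m, i.e. about 0.53 angstrom
```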
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444820284843445, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CJM-2010-057-6
# Moduli Spaces of Reflexive Sheaves of Rank 2 http://dx.doi.org/10.4153/CJM-2010-057-6 Canad. J. Math. 62 (2010), 1131-1154 Published: 2010-07-06 Printed: Oct 2010 • Jan O. Kleppe, Oslo University College, Faculty of Engineering, Pb. 4 St. Olavs plass, 0130 Oslo, Norway ## Abstract Let $\mathcal{F}$ be a coherent rank $2$ sheaf on a scheme $Y \subset \mathbb{P}^{n}$ of dimension at least two and let $X \subset Y$ be the zero set of a section $\sigma \in H^0(\mathcal{F})$. In this paper, we study the relationship between the functor that deforms the pair $(\mathcal{F},\sigma)$ and the two functors that deform $\mathcal{F}$ on $Y$, and $X$ in $Y$, respectively. By imposing some conditions on two forgetful maps between the functors, we prove that the scheme structure of e.g. the moduli scheme ${\rm M_Y}(P)$ of stable sheaves on a threefold $Y$ at $(\mathcal{F})$, and the scheme structure at $(X)$ of the Hilbert scheme of curves on $Y$ become closely related. Using this relationship, we get criteria for the dimension and smoothness of ${\rm M_{Y}}(P)$ at $(\mathcal{F})$, without assuming ${\textrm{Ext}^2}(\mathcal{F} ,\mathcal{F} ) = 0$. For reflexive sheaves on $Y=\mathbb{P}^{3}$ whose deficiency module $M = H_{*}^1(\mathcal{F})$ satisfies ${_{0}\! \textrm{Ext}^2}(M ,M ) = 0$ (e.g. of diameter at most 2), we get necessary and sufficient conditions of unobstructedness that coincide in the diameter one case. The conditions are further equivalent to the vanishing of certain graded Betti numbers of the free graded minimal resolution of $H_{*}^0(\mathcal{F})$. Moreover, we show that every irreducible component of ${\rm M}_{\mathbb{P}^{3}}(P)$ containing a reflexive sheaf of diameter one is reduced (generically smooth) and we compute its dimension. We also determine a good lower bound for the dimension of any component of ${\rm M}_{\mathbb{P}^{3}}(P)$ that contains a reflexive stable sheaf with "small" deficiency module $M$. Keywords: moduli space, reflexive sheaf, Hilbert scheme, space curve, Buchsbaum sheaf, unobstructedness, cup product, graded Betti numbers. MSC Classifications: 14C05 - Parametrization (Chow and Hilbert schemes); 14D22; 14F05 - Sheaves, derived categories of sheaves and related constructions [See also 14H60, 14J60, 18F20, 32Lxx, 46M20]; 14J10 - Families, moduli, classification: algebraic theory; 14H50 - Plane and space curves; 14B10 - Infinitesimal methods [See also 13D10]; 13D02 - Syzygies, resolutions, complexes; 13D07 - Homological functors on modules (Tor, Ext, etc.) © Canadian Mathematical Society, 2013
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7958685755729675, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/53461-indicator-function-question.html
# Thread: 1. ## Indicator Function Question What is the value of $\sum_{i=0}^{4}1_Q(2^{\frac{i}{2}})$ where $1_Q(x)$ is the indicator function for the set of rational numbers. 2. What do you think this sum equals: $I_Q (2^0 ) + I_Q (2^{\frac{1}{2}} ) + I_Q (2^1 ) + I_Q (2^{\frac{3}{2}} ) + I_Q (2^2 )=?$ 3. so $1+\sqrt2+2+\sqrt{2^3}+4$ ? 4. Originally Posted by jpatrie $1+\sqrt2+2+\sqrt{2^3}+4$ ? No indeed. Do you know what an indicator function is? Often it is called a characteristic function. $I_Q (x) = \left\{ {\begin{array}{rl} 1 & {x \in Q} \\ 0 & {x \notin Q} \\ \end{array} } \right.$
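A symbolic check of the five terms is easy if you want to confirm your answer after working through the definition given in post 4 (note that running it gives the value of the sum away). A minimal SymPy sketch:

```python
import sympy as sp

terms = [sp.sqrt(2)**i for i in range(5)]             # 2^(i/2) for i = 0, ..., 4
print(terms)                                          # [1, sqrt(2), 2, 2*sqrt(2), 4]
print(sum(1 if t.is_rational else 0 for t in terms))  # value of the indicator sum
```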
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8728235363960266, "perplexity_flag": "middle"}
http://nrich.maths.org/7272
# Population Dynamics - Part 6 ##### Stage: 5 Challenge Level: We now incorporate the effect of the environment on the Lotka-Volterra equations derived earlier. Consider a population of giraffes of size x and a population of hyenas of size y. Using the logistic equation from before, we can model the effect of the carrying capacity with the equations:  $$\begin{align*} \frac {\mathrm{d}x}{\mathrm{d}t}&=r_1 \frac{K_1-x}{K_1} x= r_1 x\left(1-\frac{x}{K_1}\right) \\ \frac {\mathrm{d}y}{\mathrm{d}t}&=r_2 \frac{K_2-y}{K_2} y = r_2y\left(1-\frac{y}{K_2}\right) \end{align*}$$ An increase in either population will reduce the resources available to both. In order to model this, we introduce a competition coefficient to represent the competitive effect of one species on the other. Let $\alpha$ be the competitive effect of the hyenas on the giraffes, and $\beta$ be the competitive effect of the giraffes on the hyenas. We then consider the terms  $\frac{K_1-x-\alpha y}{K_1}$ and $\frac{K_2-y-\beta x}{K_2}$ . Question:   Can you explain the logic behind these terms? Think what would happen if either the giraffe or hyena population died out. Our population equations then become: $$\begin{align*} \frac {\mathrm{d}x}{\mathrm{d}t}&=r_1 x \Bigg(1- \frac{x+\alpha y}{K_1}\Bigg) \\ \frac {\mathrm{d}y}{\mathrm{d}t}&=r_2 y \Bigg(1- \frac{y+\beta x}{K_2}\Bigg) \end{align*}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9059574007987976, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/60403?sort=oldest
## Can Chern class/character be categorified? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The Chern character sends the class of a locally free sheaf to the cohomology ring of the underlying variety X. And it is a ring homomorphism from K to H^*. I saw people write its source as the bounded derived category too, which make sense if the underlying variety is smooth (sending a bounded complex to the alternating "sum" of the Chern characters of its cohomology sheaf). My question is, if I want to think $D^b(X)$ as a certain categorification of $K_0(X)$, is it possible to categorify the chern character map? What will be a good candidate of the target category? (Or is there a heuristic showing this is not likely to be true?) - ## 2 Answers There are categorified analogs of the Chern character, but I don't think of them in the way you're proposing. More precisely, you can take an object in the derived category and assign to it a class in cohomology, and this map factors through K-theory, so the two constructions you're discussing seem to me to be the same. One way to think of the Chern character is the following. Given any associative, dg or $A_\infty$ algebra, you can define its Hochschild homology. This is the recipient for a universal trace map from the algebra, and more generally for any "finite" module (perfect complex) you get a class (its character) in Hochschild homology. Given more generally a (dg or $A_\infty$) category you can similarly define its Hochschild homology and a character map for "finite" objects (which factors through the K-theory of the category), which agrees with the above when your category is modules over an algebra (which it usually is, noncanonically). To "categorify" you can replace an algebra by an associative algebra object in any symmetric monoidal $\infty$-category. Its Hochschild homology is defined as an object of said category and again there's a Chern character map for "finite" modules. Why is this a categorification? for example you can take your associative algebra to be some derived category of sheaves with a monoidal structure (eg coherent sheaves or $\D$-modules or.. with tensor product or some convolution product), and then its Hochschild homology is itself a category. Thus module categories will have Chern characters which are objects of this homology category. This is (one way to think of) the notion of a "character sheaf" in representation theory (where our associative algebra is sheaves on a group with convolution, and module categories are categories with a nice action of the group, and their Chern character are adjoint-equivariant sheaves on the group, ie categorified class functions). (This story is by the way a special case of the Cobordism Hypothesis with Singularities of Jacob Lurie -- in fact just of its one-dimensional case.. our algebra objects are assigned to a point, their Hochschild homology is assigned to the circle, modules are allowable "singularities" in the theory and their Chern character is attached to a circle with a marked "singular point") - David, could you please explain the category structure on Hochschild homology of an associative algebra in a symmetric monoidal $\infty$-category? – Sasha Apr 3 2011 at 3:59 For the first paragraph above, I understand that I described the same thing, I just want to ask if one can replace the target by a category such that when one goes back one gets the original Chern character. 
The rest is a bit in over my head, I will try to understand it, though. Thank you for a great reply. – 36min Apr 3 2011 at 4:01 Sasha - I only meant you'll get a category if your algebra itself is one, i.e. lives in an $\infty$-category of categories. For example if you take sheaves on a stack, with tensor product, its Hochschild homology (as well as its Hochschild cohomology, or Drinfeld center) is given by sheaves on the (derived form of the) inertia stack or "derived loop space" (cf. arxiv.org/abs/0805.0157, and arxiv.org/abs/0904.1247 for D-module analogs). – David Ben-Zvi Apr 3 2011 at 4:24 @36min - that is an interesting point, to which I don't know an answer. I guess I think of passing to characters as a decategorification, as in the case of representations of a group (which is a special case of the discussion). In the usual formulation though we use that this decategorification factors through a map from K-theory to cohomology so it doesn't feel like one! – David Ben-Zvi Apr 3 2011 at 4:28 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The paper http://arxiv.org/abs/0804.1274 of Toën-Vezzosi is about categorifying the Chern character. Let me try to summarize their strategy. First of all they introduce a triangulated $2$-category $Dg(X)$ of derived categorical sheaves on a (derived) scheme $X$. It is based on a the idea that a categorification of the theory of modules on a commutative ring $k$ is given by $k$-linear categories: they argue that dg-categories can be used in order to categorify homological algebra in a similar but better way (better in the sens that the non-dg setting seems to be too rigid to allow push-forwards). The second step is to use, for a given (derived) scheme $X$, the pull-back along the projection $LX\to X$. For a categorical sheaf $F$ on $X$ on consider its pull-back $p^*F$, which naturally come equipped with a self-equivalence $u$. The rough idea to see this is to consider the pull-back (a-k-a >transgression) along the evaluation map $S^1\times LX\to X$, and then to observe that categorical sheaves on $S^1\times LX$ are categorical sheaves on $LX$ together with a $\mathbb{Z}$-action. Finally, they conjecture the existence of an $S^1$-equivariant trace $Tr^{S^1}(u)\in D^{S^1}(LX)$. Its $K_0$ would be a candidate for the (categorified) Chern character of $F$. ### Why does this categorify the Chern character ? If we do the same construction starting with a sheaf of $X$, then we get in the end an element in $\pi_0(\mathcal O_{LX}^{S^1})=HC_0^{-}(X)$ (while the non-$S^1$-invariant trace takes values in $\pi_0(\mathcal O_{LX})=HH_0(X)$). One can show that this constructs the ususal Chern character. The main difficulty is the (conjectural) existence of the $S^1$-invariant trace. ### Follow-up A complete treatment of this approach (together with a proof of the conjecture) has been done by the above mentioned authors in a long paper in french. - 2 Damien - thanks for the very informative answer! It might be worth pointing out that this construction (for X reasonable - eg a scheme or more generally perfect stack) is a special case of the construction I explain. Namely we consider the dg category QC(X) of quasicoherent sheaves on X, which is an associative (in fact commutative) algebra object in dg-cats. 
Module categories for it are the same as derived categorical sheaves (or more precisely, quasicoherent ones - alternatively we can work in a sheafified setting from the beginning). – David Ben-Zvi May 3 2011 at 16:38 2 The Hochschild homology of QC(X) was calculated in arxiv.org/abs/0805.0157 to be QC(LX) (and the cyclic homology of QC(X) was calculated in arxiv.org/abs/1002.3636 as D-modules on X..for X a scheme - for a perfect stack you get D(LX)). So the Chern character of a derived categorical sheaf is a sheaf on the loop space (and in fact a D-module on X, which is just the cyclic homology of your derived categorical sheaf with its Gauss-Manin connection). In the case of pt/G you recover QC(G/G) ("quasicoherent character sheaves") as the characters of categories with algebraic G-action. – David Ben-Zvi May 3 2011 at 16:43 2 (Of course my comment is meant purely mathematically, not historically - the picture I explain certainly owes a great debt to the ideas of Toen and Vezzosi!) – David Ben-Zvi May 3 2011 at 16:50 2 Damien - Great! and Yes: an $E_n$ algebra can be integrated on all manifolds of dim at most n via topological chiral homology (ie it's n-dualizable in the appropriate "Morita" higher category). A (left) module for such an algebra is a morphism from the trivial field theory (the simplest kind of example of a "singularity" in the theory), and so we get a "Chern character" for the module which is a class in the chiral homology on any manifold of dim at most n. – David Ben-Zvi May 3 2011 at 20:51 2 (If you'd like, this is a statement about functoriality of HH_* for higher algebras: the morphism from the unit corresponding to a module gives a map on chiral homology, and its image defines the Chern character.) – David Ben-Zvi May 3 2011 at 20:51 show 3 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202755093574524, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/205193-conditional-probability.html
# Thread: 1. ## Conditional probability Hello, I'm stuck on a problem and I wasn't getting any answer in the pre-university statistics topic so I'd be very grateful if someone could help: To go to work, employees take their car or the bus. If they take their car they have a 1/2 chance of being late; if they take the bus, only a 1/4 chance of being late. If they are on time one day they will take the same means of transportation the next day; if they are late they switch. If p is the probability that an employee goes to work on day one with his car: a) what is the probability that he'll go to work with his car on day n? I started by writing the probability with the conditional probability that he went to work by car on day n-1... but that's the best I can come up with; I don't know what else I can do. b) what is the probability that he will be late on day n. c) what is the limit when n ---> inf for a) and b). I thought of something but it doesn't work: $C_{n}=\{\text{arrives with car on day n}\}$ $A=\{\text{arrives on time}\}$ then if $P$ is our function of probability: $P(C_n)=P(A|C_{n-1})P(C_{n-1})+P(A^{c}|C_{n-1}^c)P(C_{n-1}^c)=\frac{1}{2} P(C_{n-1})+\frac{1}{4}P(C_{n-1}^c)= \frac{1}{2} P(C_{n-1}) + \frac{1}{4}(1-P(C_{n-1})) = \frac{1}{4} (P(C_{n-1})+1)$ then recursively you get $\frac{P(C_{1})}{4^n} + \sum^{n}_{k=1} \frac{1}{4^k}= \frac{P(C_{1})}{4^n} + \frac{1- \frac{1}{4^{n-1}}}{ \frac{3}{4}}$ but if n---->inf then $P(C_n)$ is greater than 1... find the big mistake.... thanks a lot in advance. 2. ## Re: Conditional probability Hey sunmalus. Are you familiar with Markov Chains? Markov chain - Wikipedia, the free encyclopedia 3. ## Re: Conditional probability No I wasn't, but it's exactly what I needed. Thanks a lot!! 4. ## Re: Conditional probability btw it works the way I did it. But I forgot to put a (1/4) in front of the last expression..... And then I find the same thing as when using Markov chains!
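To see where a) and b) settle, here is a minimal Python sketch of the two-state Markov chain suggested in post 2; the transition probabilities come straight from the problem statement (stay with the car when on time with probability 1/2, switch from the bus when late with probability 1/4, and so on).

```python
import numpy as np

# State order: (car, bus).  Column j holds the probabilities of tomorrow's state
# given today's state j, e.g. P(car tomorrow | car today) = P(on time | car) = 1/2.
T = np.array([[0.5, 0.25],
              [0.5, 0.75]])

p = np.array([1.0, 0.0])          # take p = 1 on day one, i.e. he starts with the car
for n in range(2, 12):
    p = T @ p
    late = 0.5 * p[0] + 0.25 * p[1]
    print(f"day {n:2d}: P(car) = {p[0]:.4f}, P(late) = {late:.4f}")
```

Both probabilities approach $1/3$, which matches the fixed point of the recursion $P(C_n)=\tfrac{1}{4}(P(C_{n-1})+1)$ once the missing factor mentioned in post 4 is accounted for.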
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532255530357361, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/17032/which-p-adic-numbers-are-also-algebraic
## Which p-adic numbers are also algebraic?

What is $\mathbb{Q}_p \cap \overline{\mathbb{Q}}$? For instance, we know that $\mathbb{Q}_p$ contains the $(p-1)$st roots of unity, so we might say that $\mathbb{Q}(\zeta) \subset \mathbb{Q}_p \cap \overline{\mathbb{Q}}$, where $\zeta$ is a primitive $(p-1)$st root. As a more specific example, $x^2 - 6$ has 2 solutions in $\mathbb{Q}_5$, so we could also say that $\mathbb{Q}(\sqrt{6},\sqrt{-1})\subset \mathbb{Q}_5 \cap \overline{\mathbb{Q}}$. Edit: I removed the motivation for this question (which I think stands by itself), as it will be better as a separate question once I think it through a bit better.

Note that, as an abstract field, $\overline{\mathbb{Q}_p}$ is isomorphic to $\mathbb{C}$: both are algebraically closed fields of characteristic zero and with transcendence degree $2^{\aleph_0}$ over $\mathbb{Q}$. I saw a talk recently on $p$-adic L-functions where this isomorphism was used (somewhat apologetically) in a critical way. – Pete L. Clark Mar 4 2010 at 0:59

(I think from the phrasing of your question that you knew this already, so my comment is more for the benefit of the other readers.) – Pete L. Clark Mar 4 2010 at 1:00

Victor: Pete was only saying the two fields were abstractly isomorphic -- he did not claim the isomorphism preserved any topology. I've also used that isomorphism before. I would describe its use as "violent." – Jared Weinstein Mar 4 2010 at 2:35

Complete is a topological condition, not a field theoretic one. All algebraically closed fields of a given transcendence degree over $\mathbb{Q}$ are isomorphic, whatever topology you put on them. – Ben Webster♦ Mar 4 2010 at 2:37

I would go so far as to state that such an isomorphism could never be needed for any investigation related to $p$-adic $L$-functions, $p$-adic Galois reps. attached to modular forms, and so on. Although choosing it will save later circumlocutions, it is always just shorthand for fixing embeddings of $\bar{\mathbb Q}$ into $\bar{\mathbb Q}_p$ and $\mathbb C$. – Emerton Mar 4 2010 at 15:53

## 2 Answers

The field $K_p = \mathbb{Q}_p \cap \overline{\mathbb{Q}}$ is a very natural and well-studied one. I can throw some terminology at you, but I'm not sure exactly what you want to know about it.

1) It is often called the field of "$p$-adic algebraic numbers". This comes up in model theory: it is a $p$-adically closed field, which is the $p$-adic analogue of a real-closed field. In particular, it is elementarily equivalent to $\mathbb{Q}_p$.

2) It is the Henselization of $\mathbb{Q}$ with respect to the $p$-adic valuation, or the fraction field of the Henselization of the ring $\mathbb{Z}_{(p)}$ -- i.e., $\mathbb{Z}$ localized at the prime ideal $p$. The idea is that this field is not complete but is Henselian -- it satisfies the conclusion of Hensel's Lemma. Alternately and somewhat more gracefully, Henselian valued fields are characterized by the fact that the valuation extends uniquely to any algebraic field extension. Roughly speaking, Henselian fields are as good as complete fields for algebraic constructions but are not "big enough" to have the same topological properties. For instance, note that $K_p$ cannot possibly be complete with respect to the $p$-adic valuation, because it is countably infinite and without isolated points: apply the Baire Category Theorem.
Pete, are you sure about the "totally" terminology in (1)? The real numbers which are algebraic over Q are not just the totally real numbers (those whose min. poly. over Q splits completely over R) but include a whole lot more. I'm not quibbling with the usage of the term "p-adically closed". (I accidentally posted this comment as a full reply before. If someone can delete that one, please do.) – KConrad Mar 4 2010 at 1:52

I agree with KConrad on this one. The 2nd option is right, and for the first it's just a suitably directed union of number fields equipped with a p-adic place that has its own e=f=1. That's a down to earth way of getting at description #2. Rather interesting, and less evident, is that something similar is true for completions of higher-dimensional normal local noetherian domains, and more specifically the excellent ones. This comes out from the Artin approximation theorem (in the general form of Popescu, for excellent rings). – BCnrd Mar 4 2010 at 3:41

@Conrads: I agree; the "totally" should not be there. I removed it. – Pete L. Clark Mar 4 2010 at 4:24

There's a slightly subtle point near here of which some people are not aware: that it is dangerous (perhaps even nonsensical) to compare algebraic numbers under various different completions. So, to talk about $Q_p\cap \bar Q$, you should be talking about a completion of $Q$ containing $Q_p$, not, e.g., a completion of $Q$ lying inside $C$. I don't think this is what is happening here, but some people may find this interesting. Now, there are lots of isomorphisms floating around, so usually everything turns out just fine, but sometimes not. Here are two examples.

(1) The following fallacious argument that $e$ is transcendental is from a talk by Gouvêa, "Hensel's p-adic Numbers: early history" (originally due to Hensel himself). The series expansion of $e^p$ converges in $Q_p$, thus $e$ is a solution to the equation $X^p=1+p\epsilon$, where $\epsilon$ is a $p$-adic unit. So $[Q_p(e):Q_p]=p$ (of course you need to argue that the polynomial is irreducible), and so $[Q(e):Q]\ge p$. Since $p$ was arbitrary, $e$ must be transcendental over $Q$. The fallacy is that even though the series for $e$ (and $e^p$) converges in $R$ and $Q_p$, the numbers they converge to are not the same.

(2) The following is from Koblitz's $p$-adic book, page 83 (with an example and some other fallacious arguments). It is not true that if an infinite sum of rational numbers (a) converges $p$-adically to a rational number for some $p$ and (b) converges in the real topology to a rational number, then the rational numbers the two series converge to are the same!

I'm pretty sure the original poster simply meant "what are the numbers in Q_p which are algebraic over Q?" But your examples are worth pointing out to any readers who are not sensitive to the danger of talking about intersections of fields which are not lying in some common field. (For a similar reason, the notion of composite of two fields can be dangerous if the fields are not already given inside a common field.) Your reference to a series in (2) is not necessary: a sequence in Q can converge to different rational numbers in different topologies (real and p-adic or p-adic and q-adic, say). – KConrad Mar 9 2010 at 21:50
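A small illustration (not from the thread) of the Hensel's Lemma point above: the solutions of $x^2-6$ in $\mathbb{Q}_5$ mentioned in the question can be computed by lifting a root mod 5 to higher powers of 5. A minimal Python sketch; the function name and the precision-doubling Newton loop are just one way to organize the lift.

```python
# Sketch (not from the thread): Hensel-lift a square root of 6 in Z_5.
# x = 1 satisfies x^2 ≡ 6 (mod 5) and f'(1) = 2 is a unit mod 5, so
# Hensel's Lemma applies; each Newton step doubles the 5-adic precision.

def sqrt_in_Zp(a, p, k):
    """Return x with x^2 ≡ a (mod p^k), assuming a simple root exists mod p."""
    # brute-force starting root mod p (fine for small p)
    x = next(r for r in range(p) if (r * r - a) % p == 0)
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)
        mod = p ** prec
        # Newton step: x <- x - f(x)/f'(x) computed mod p^prec
        x = (x - (x * x - a) * pow(2 * x, -1, mod)) % mod
    return x

x = sqrt_in_Zp(6, 5, 10)
print(x, (x * x - 6) % 5**10)   # second value is 0: x^2 ≡ 6 (mod 5^10)
```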
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559409618377686, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/124174-simulation.html
# Thread:

1. ## Simulation

Suppose you have some random vector $\bold{X} = (X_1, \dots, X_n)$ with a density function $f(x_1, \dots, x_n)$. Suppose you want to compute the expected value of some function of the random vector (e.g. $E[g(\bold{X})]$). We know that

$E[g(\bold{X})] = \int \int \cdots \int g(x_1, \dots, x_n) f(x_1, \dots, x_n) dx_1 \cdots dx_n$

What is a good way of developing approximation methods for computing $E[g(\bold{X})]$? If $g(\bold{X}) = \bold{X}$ then we are just computing the expected value of $\bold{X}$. So then we can reduce this question (in this case) to: what are some good ways of approximating the expected value of a random vector?

2. Originally Posted by Sampras

If you can sample from the distribution with density $f(x_1, \dots, x_n)$ then the mean of the function values for the sample is an unbiased estimator of the expectation of $g(\bold{X})$.

CB

3. Originally Posted by CaptainBlack

In other words, $\lim\limits_{n \to \infty} \frac{g(\bold{X}^{(1)}) + \cdots + g(\bold{X}^{(n)})}{n} = E[g(\bold{X})]$?

4. Originally Posted by Sampras

Yes

CB
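Not from the thread: a minimal NumPy sketch of the estimator CaptainBlack describes, for a case where $E[g(\bold{X})]$ can be checked exactly. Here $\bold{X}\sim N(0,I_2)$ and $g(\bold{x})=x_1^2+x_2^2$, so the true value is 2; the distribution, the function $g$, and the sample size are illustrative choices, not part of the thread.

```python
# Sketch (not from the thread): Monte Carlo estimate of E[g(X)] by averaging
# g over i.i.d. draws from the distribution of X.
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # g(x1, x2) = x1^2 + x2^2, so E[g(X)] = 2 when X ~ N(0, I_2)
    return np.sum(x**2, axis=1)

n = 100_000
X = rng.standard_normal(size=(n, 2))     # i.i.d. samples from the density of X
vals = g(X)
estimate = vals.mean()                   # unbiased estimator of E[g(X)]
std_err = vals.std(ddof=1) / np.sqrt(n)  # rough Monte Carlo error bar
print(f"estimate = {estimate:.4f} +/- {std_err:.4f} (exact value: 2)")
```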
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904712975025177, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38999/the-necessity-of-the-b-field/39000
# The necessity of the B field

It is fairly easy, using basic special relativity, to arrive at the conclusion that the magnetic force exerted by current-carrying wires on nearby charges is due only to length contraction in certain inertial frames, which, from the point of view of the charges, gives rise to a perceived change in charge density. However, we structure the (classical field) laws of electromagnetism in Maxwell's laws using a B field. My question is, is it possible to formulate Maxwell's equations only in terms of Lorentz transformed E fields? If not, why not?

Many thanks for pointing those out, apologies for the repetition – Jallen Oct 4 '12 at 11:32

## 4 Answers

I think the key here is your requirement of a formulation "in terms of Lorentz transformed E-fields". The E-field is a 3-vector field. A Lorentz covariant field must be a 4-vector or 4-tensor field (a Lorentz invariant field must be a Lorentz scalar field). In fact, the E-field is, within SR, a component of a rank 2 electromagnetic 4-tensor field whilst the scalar and vector potentials are components of an electromagnetic 4-vector potential. Since the E-field is only a part of a Lorentz covariant tensor, the notion of a "Lorentz transformed E-field" needs further clarification.

No. $\boldsymbol{B}$ is an independent entity. Reference Jackson, Classical Electrodynamics, Section 12.2. Jackson argues that the anti-symmetric tensor $F$ formulation of EM that we all know and love:

$$m \frac{d^2x^{\alpha}}{d\tau^2} = q F^{\alpha\beta} \frac{dx_{\beta}}{d\tau}$$

is not the only possible covariant generalization of the rest frame force law:

$$ma=qE$$

In fact, he constructs a counter-example based on a Lorentz scalar potential $\phi$. Non-relativistically, this potential gives a Coulomb and a magnetic-like force, just like we see, but there's no independent $\boldsymbol{B}$-field: in any frame, the force is given in terms of the 4-gradient of the scalar potential, $\partial_\mu \phi$, not the 6 components of $F$. If this Lorentz scalar were the way the world works, the answer to your question would be yes. But it's not.

Of course you also need another assumption to pick out spin 1 rather than spin 0: the transformation law for E and B, the charge is independent of the velocity, the theory is renormalizable and natural. The scalar alternative is the Higgs coupling and it occurs in nature too. – Ron Maimon Oct 4 '12 at 16:08

@Ron: Interesting. I hadn't considered the spin connection (my understanding of qm is abysmal) and will have to think on it. I always appreciate your insights (even when, too often, they go over my head). Thanks! – Art Brown Oct 4 '12 at 16:44

@Ron: So my takeaway from your comment is that there are even more ways than the one I mentioned in which the Maxwell generalization of coulomb+SR is not unique. I studied Purcell as a kid and fell in love with this idea; it was quite a shock to realize t'aint so. Frankly, I think this treatment is corrupting the youth. ;-) – Art Brown Oct 4 '12 at 19:43

It's so, using Purcell's explicit additional assumption--- "The charge is independent of the velocity". This is a discrete choice, and it is equivalent to saying "Electromagnetism is spin 1". For fundamental theories, there are only 3 options--- scalar, spin-1, gravitational, and each one has a different transformation law for the charge, but Purcell gives you the physical point of view regarding spin 1 in a reasonable way (although I never liked the idea of spending so much physical and intuitive effort on something that only takes 3 lines once you learn covariant formalism). – Ron Maimon Oct 4 '12 at 23:40
@Ron: It's certainly required that charge be velocity-independent for standard em, but I don't think that assumption rules out a scalar potential alternative. Compare four-force $f^\alpha = q F^{\alpha \beta} U_\beta$ with the scalar version $f^\alpha = g[\partial^\alpha \phi - U^\alpha U_\beta \partial^\beta \phi ]$; in both cases the charge is invariant. Instead, the discriminant is that, for an external time-invariant electric field, the resultant force be velocity-independent. That characteristic rules out the scalar potential case. – Art Brown Nov 29 '12 at 4:50

Yes--- electromagnetism is developed from this point of view, as suitable for an undergraduate course, in Purcell. The equation of motion is

$$ma = qE$$

where E and a are both in the rest frame. The covariant form of this equation is the usual equation of motion:

$$m {d^2x_{\nu}\over d\tau^2} = q F_{\mu\nu} {dx^\mu\over d\tau}$$

As you can see by specializing to the rest frame, Newton's law in the rest frame, using only E, is indeed covariantly equivalent to the usual law involving both E and B in a general frame.

If you look at the covariant formalism of classical electrodynamics, you can see that you don't have to mention either the $\bar{E}$ or $\bar{B}$ field. You can do all your calculations with the four-potential $A_\mu$ and the electromagnetic tensor $F_{\mu\nu}$.
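Not from the answers: a small SymPy sketch of the covariant statement above, showing that a pure E-field in one frame picks up a B-field after a boost of the field tensor $F^{\mu\nu}$. It assumes units with $c=1$, metric signature $(+,-,-,-)$, and one common sign convention for $F^{\mu\nu}$; with other conventions the signs of individual components change but not the conclusion.

```python
# Sketch (not from the answers): boost the field tensor F^{mu nu} and read off
# the transformed E and B.  Assumed conventions: c = 1, signature (+,-,-,-),
# F^{i0} = E_i, F^{ij} = -eps_{ijk} B_k; boost with speed v along x.
import sympy as sp

Ex, Ey, Ez, Bx, By, Bz, v = sp.symbols('E_x E_y E_z B_x B_y B_z v', real=True)
gamma = 1 / sp.sqrt(1 - v**2)

F = sp.Matrix([[0,  -Ex, -Ey, -Ez],
               [Ex,   0, -Bz,  By],
               [Ey,  Bz,   0, -Bx],
               [Ez, -By,  Bx,   0]])          # F^{mu nu}

L = sp.Matrix([[gamma, -gamma*v, 0, 0],
               [-gamma*v, gamma, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])                 # Lorentz boost Lambda^mu_nu

Fp = sp.simplify(L * F * L.T)                 # F'^{mu nu} = Lambda F Lambda^T

E_prime = sp.Matrix([Fp[1, 0], Fp[2, 0], Fp[3, 0]])   # E'_i = F'^{i0}
B_prime = sp.Matrix([Fp[3, 2], Fp[1, 3], Fp[2, 1]])   # B' from the space-space block

# A frame with a pure E-field (B = 0) acquires a B-field after the boost:
print(sp.simplify(B_prime.subs({Bx: 0, By: 0, Bz: 0})))
# expected, up to simplification: [0, v*E_z/sqrt(1 - v**2), -v*E_y/sqrt(1 - v**2)]
```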
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919478714466095, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/04/23/cauchys-condition/?like=1&source=post_flair&_wpnonce=c8d80d4293
# The Unapologetic Mathematician

## Cauchy's Condition

We defined the real numbers to be a complete uniform space, meaning that sequences are convergent if and only if they are Cauchy. Let's write these two out in full:

• A sequence $a_n$ is convergent if there is some $L$ so that for every $\epsilon$ there is an $N$ such that $n>N$ implies $|a_n-L|<\epsilon$.
• A sequence $a_n$ is Cauchy if for every $\epsilon$ there is an $N$ such that $m>N$ and $n>N$ implies $|a_m-a_n|<\epsilon$.

See how similar the two definitions are. Convergent means that the points of the sequence are getting closer and closer to some fixed $L$. Cauchy means that the points of the sequence are getting closer to each other.

Now there's no reason we can't try the same thing when we're taking the limit of a function at $\infty$. In fact, the definition of convergence of such a limit is already pretty close to the above definition. How can we translate the Cauchy condition? Simple. We just require that for every $\epsilon>0$ there exist some $R$ so that for any two points $x,y>R$ we have $|f(x)-f(y)|<\epsilon$.

So let's consider a function $f$ defined in the ray $\left[a,\infty\right)$. If the limit $\lim\limits_{x\rightarrow\infty}f(x)$ exists, with value $L$, then for every $\epsilon>0$ there is an $R$ so that $x>R$ implies $|f(x)-L|<\frac{\epsilon}{2}$. Then taking $y>R$ as well, we see that

$|f(x)-f(y)|=|(f(x)-L)-(f(y)-L)|\leq|f(x)-L|+|f(y)-L|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$

and so the Cauchy condition holds.

Now let's assume that the Cauchy condition holds. Define the sequence $a_n=f(a+n)$. This is now a Cauchy sequence, and so it converges to a limit $A$, which I assert is also the limit of $f$. Given an $\epsilon>0$, choose a $B$ so that

• $|f(x)-f(y)|<\frac{\epsilon}{2}$ for any two points $x$ and $y$ above $B$
• $|a_n-A|<\frac{\epsilon}{2}$ whenever $a+n\geq B$

Just take a $B$ for each condition, and go with the larger one. In fact, we may as well round $B$ up so that $B=a+N$ for some natural number $N$. Then for any $b>B$ we have

$|f(b)-A|=|(f(b)-f(B))+(f(B)-A)|\leq|f(b)-f(B)|+|a_N-A|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$

and so the limit at infinity exists.

In the particular case of an improper integral, we have $I(b)=\int_a^bf\,d\alpha$. Then $I(c)-I(b)=\int_b^cf\,d\alpha$. Our condition then reads: for every $\epsilon>0$ there is a $B$ so that $c>b>B$ implies $\left|\int_b^c f\,d\alpha\right|<\epsilon$.

Posted by John Armstrong | Analysis, Calculus
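Not part of the post: a small numerical illustration of the final criterion for the concrete choice $f(x)=e^{-x}$ and $\alpha(x)=x$, where $\left|\int_b^c e^{-x}\,dx\right|=e^{-b}-e^{-c}\leq e^{-B}$ whenever $c>b>B$, so any $B>\ln(1/\epsilon)$ works. A minimal Python sketch of that check.

```python
# Sketch (not from the post): check the Cauchy condition for the improper
# integral of f(x) = exp(-x), using the closed form of the tail integrals.
import math

def tail(b, c):
    """|integral of exp(-x) from b to c| for c > b."""
    return math.exp(-b) - math.exp(-c)

eps = 1e-6
B = math.log(1 / eps)          # any bound beyond this value of B works
for b, c in [(B + 1, B + 2), (B + 0.5, B + 100), (B + 3, B + 3.1)]:
    assert tail(b, c) < eps    # every tail beyond B is smaller than eps
    print(f"b = {b:.2f}, c = {c:.2f}, tail = {tail(b, c):.3e} < {eps}")
```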
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 51, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201708436012268, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/171525-airplane-vectors.html
# Thread:

1. ## Airplane vectors

12. A pilot wishes to fly from Toronto to Montreal, a distance of 508 km on a bearing of 075^o. The cruising speed of the plane is 550 km/h. An 80 km/h wind is blowing on a bearing of 125^o.

a) What heading should the pilot take to reach his destination?
b) What will be the speed of the plane relative to the ground?
c) How long will the trip take?

I do not know how to get the angle the pilot should take. I don't understand how to do it. I know there is a little angle of 15^o; however, I can't find the angle in the triangle to add to the little angle and then subtract from 90^o. I know this is easy, but I can't seem to find the method to find the bearing.

NOTE: All angles are from true North [up] and then rotate clockwise.

2. Originally Posted by Barthayn

law of cosines ...

$A^2 = G^2 + W^2 - 2GW \cos(50^\circ)$

$550^2 = G^2 + 80^2 - 160G\cos(50^\circ)$

solve the resulting quadratic for $G$, the ground speed. Once you find $G$, calculate the time to travel the given distance. Use the law of sines to find the angle to steer relative to the direction of the ground vector $075^\circ$.

3. Where did you get the 50 degrees from?

4. Originally Posted by Barthayn
Where did you get the 50 degrees from?

basic geometry ...

5. I did what you said, and came out with the wrong answer. What did I do wrong? I got an answer of 615.857 km/h. The answer is 598 km/h.

EDIT: Never mind, I redid it and got the correct answer. Must have been a calculation error. Thank you very much!
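Not part of the thread: a quick numerical check of skeeter's setup, solving the quadratic for the ground speed $G$ and then using the law of sines for the wind-correction angle. The numbers (508 km, 550 km/h, 80 km/h, and the 50° between the track and the wind) come from the problem; the choice to steer toward the smaller bearing (into the wind, since the wind blows clockwise of the track) and the rounding are my own working.

```python
# Sketch (not from the thread): solve 550^2 = G^2 + 80^2 - 2*G*80*cos(50 deg)
# for the ground speed G, then get the wind-correction angle by the law of sines.
import math

A, W, d = 550.0, 80.0, 508.0          # airspeed, wind speed (km/h), distance (km)
theta = math.radians(50)              # angle between track (075 deg) and wind (125 deg)

# Quadratic: G^2 - (2 W cos(theta)) G + (W^2 - A^2) = 0 -> take the positive root
b = -2 * W * math.cos(theta)
c = W**2 - A**2
G = (-b + math.sqrt(b**2 - 4 * c)) / 2

delta = math.degrees(math.asin(W * math.sin(theta) / A))  # wind-correction angle
heading = 75 - delta                  # steer into the wind, toward the smaller bearing
time_h = d / G

print(f"ground speed ~ {G:.0f} km/h")     # ~ 598 km/h, matching the thread
print(f"heading ~ {heading:.1f} deg")     # ~ 068.6 deg
print(f"time ~ {time_h * 60:.0f} minutes")  # ~ 51 minutes
```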
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314861297607422, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/40615/obstruction-for-a-real-algebraic-surface-to-be-a-complex-algebraic-curve/40635
## Obstruction for a real algebraic surface to be a complex algebraic curve

Not every real algebraic surface can be endowed with the structure of a complex algebraic curve. The only obstruction I know is orientability. Are there any others?

## 3 Answers

If you're not requiring any compatibility criterion between the real and complex structure, then the only obstruction is in fact orientability. Every smooth projective real algebraic surface is a smooth compact real 2-manifold (without boundary). If it's orientable, it must then be a surface of genus $g$ for some $g$. But every surface of genus $g$ admits a complex structure, and every Riemann surface is algebraic. I don't study real algebraic geometry much, but I'm not aware of a good compatibility condition to impose on your complex structure. If you've got something in mind, let me know.

One could ask the surface to be the Weil restriction of the curve -- in this case another obstruction would be that the surface must split geometrically as a product of two same-genus curves. – Dustin Clausen Sep 30 2010 at 14:41

@Dustin Clausen: Not just same genus, they actually have to be conjugate to each other, right? For instance, a surface that splits as a product of two non-isomorphic real curves of the same genus won't do. – t3suji Sep 30 2010 at 16:05

t3suji - I agree, thanks! And I suppose that now we have not just an obstruction, but actually an equivalent condition. – Dustin Clausen Sep 30 2010 at 16:13

@Jack: Complex does not automatically mean algebraic. It is clearly true for compact surfaces but I doubt this for non-compact surfaces. Can you provide a reference? – Bugs Bunny Sep 30 2010 at 19:08

I meant to include the word "projective," which would have addressed this. Answer edited. – Jack Huizenga Sep 30 2010 at 23:08

As observed in some of the previous comments, every closed (=compact, without boundary) orientable real 2-manifold admits a complex structure. So in the smooth case orientability is essentially the only obstruction. If one also considers the case of singular real algebraic surfaces, the situation is more involved and I don't know whether satisfactory results are known. Anyway, one obvious obstruction is the presence of non-isolated singularities, since every complex curve has only a finite number of singular points. For instance, take $X:=S^1 \times C$, where $C \subset \mathbb{RP}^2$ is the nodal real cubic of equation $y^2z=x^3+x^2z$. The singular locus of $X$ is isomorphic to $S^1$, so $X$ surely cannot be endowed with the structure of a complex algebraic curve.

If it is orientable, you have a complex structure and the field of meromorphic functions. Putting my ears into the firing line, I suggest that something should go wrong with the transcendence degree of the field of meromorphic functions. If it is 1, you can consider DVRs that will give you a compact algebraic curve, and I see no reason for the original curve not to be a subset. If it is more than 1 the surface cannot be algebraic...
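Not from the thread: a quick symbolic check of the example in the second answer, verifying that the nodal cubic $y^2z=x^3+x^2z$ and all its partial derivatives vanish at $[0:0:1]$, so the curve (and hence $S^1\times C$) really is singular there. A minimal SymPy sketch.

```python
# Sketch (not from the thread): the nodal cubic y^2 z = x^3 + x^2 z is
# singular at [0:0:1]: F and all of its partial derivatives vanish there.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = y**2 * z - x**3 - x**2 * z

point = {x: 0, y: 0, z: 1}
values = [F.subs(point)] + [sp.diff(F, v).subs(point) for v in (x, y, z)]
print(values)   # [0, 0, 0, 0] -> [0:0:1] is a singular point of the curve
```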
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932987630367279, "perplexity_flag": "head"}