http://math.stackexchange.com/questions/11070/to-operate-or-not-to-operate?answertab=active
# To operate or not to operate?

I've been having some rather morbid thoughts lately, so, naturally, I decided to share: Suppose I have a deadly disease, which has a chance of killing me every day that it is left uncured. Also, suppose that there's an operation I can go through, but it is not without its risks: if it is successful, I won't have to worry about the disease killing me ever again; if it's a failure, I die there and then.

More formally, let $q : \mathbb{N} \rightarrow [0,1]$ be the probability that the disease kills me on any given day $t \in \mathbb{N}$. Also, let $p : \mathbb{N} \rightarrow [0,1]$ be the probability that the operation fails if I have it on day $t \in \mathbb{N}$. Assume that both $p$ and $q$ are monotonically increasing. The question is: on what day $t_0$ do I have to go through the operation in order to maximise my expected lifespan?

Now, to solve this: It is obvious that on any day with $p(t) \leq q(t)$ you're better off having the operation, because if you don't, you'd have the same chance of dying and no chance of being cured. However, that can't possibly be the whole story. What if $p(t) > q(t)$ for every $t \in \mathbb{N}$? You would still have to do the operation at some point; otherwise you'll eventually die because of the accumulated probability of all those days you're left uncured. In that light, let $f(t) = \prod_{k=0}^{t}(1-q(k))$ be the probability that you survive $t$ days without going through the operation. Then, all you need to do is to solve the inequality $f(t_0) < f(t_0-1) \cdot (1-p(t_0))$. Is that correct? Or am I still missing something?

-

The functions cannot be monotonically increasing, because their values should add up to something less than or equal to one. (The sum of all values of $q$ would be the probability of dying from the disease provided you never have the operation.) – Rasmus Nov 20 '10 at 11:19

The computation in your last paragraph is not what you asked for: you wanted to compute expectation values. – Rasmus Nov 20 '10 at 11:24

There is the following problem: if there is a non-zero probability of surviving the operation, then you get an infinite expected lifespan (provided you decide to take the operation). You could resolve this by assuming that you die at a certain age X anyway. – Rasmus Nov 20 '10 at 11:27

About the functions being monotonically increasing: I meant that each day you have an independent chance of dying, the same way that you can throw a die 100 times and still have a 1/6 chance of getting a six each time. About expectation values: the question is about finding the best day to have the operation, in order to maximise the expected lifespan. In my proposed solution, I didn't think it necessary to go through expected-value calculations to achieve that. Why should it be necessary anyway? – Naurgul Nov 20 '10 at 11:32

Well, your question, as I understand it, is: in which case is the expected value greater? So you have to compute these values. – Rasmus Nov 20 '10 at 11:36

## 1 Answer

Elaborating on my comments above: since $f(t-1)-f(t)$ is the probability of dying on day $t$, the expected lifespan in the case of not taking the operation is $$\sum_{t<T}t\bigl(f(t-1)-f(t)\bigr),$$ while the expected lifespan in the case of taking the operation on day $t_0$ is $$\sum_{t<t_0}t\bigl(f(t-1)-f(t)\bigr)+T\cdot f(t_0-1)\bigl(1-p(t_0)\bigr).$$ Here $T$ should be the age at which you die a natural death if you take the operation and survive it. Now you just have to plug in and see which number is larger.

-
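For a concrete feel, here is a brute-force numerical sketch of this comparison (my own illustration: the hazard functions `q`, `p` and the horizon `T` are made-up assumptions, not taken from the post):

````
# A minimal sketch: pick the operation day that maximises expected lifespan.
T = 100  # day of natural death if the operation succeeds (assumption)

def q(t):
    """Probability the disease kills you on day t (increasing, made up)."""
    return min(0.001 + 0.0005 * t, 1.0)

def p(t):
    """Probability the operation fails on day t (increasing, made up)."""
    return min(0.02 + 0.001 * t, 1.0)

def expected_lifespan(t0):
    """Expected lifespan if the operation is scheduled for day t0."""
    ev, alive = 0.0, 1.0  # alive = P(still alive at the start of day t)
    for t in range(t0):
        ev += t * alive * q(t)  # die of the disease on day t
        alive *= 1.0 - q(t)
    ev += t0 * alive * p(t0)         # die during a failed operation
    ev += T * alive * (1.0 - p(t0))  # cured: live to the natural age T
    return ev

best = max(range(T), key=expected_lifespan)
print(best, expected_lifespan(best))
````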
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600375890731812, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/79528/list
## Return to Question

3 edited tags

2 added 24 characters in body

This question is prompted by this one by Arturo Magidin: whether there exist varieties of groups in which the relatively free group of rank 2 is finite, and the relatively free group of rank 3 is infinite. My question is: in which varieties of groups is it true that relatively free groups of bigger, countable or finite, rank embed into relatively free groups of smaller rank? This is so, for example, for the variety of all groups, and also for Burnside varieties (defined by the identity $x^n = 1$). On the other hand, this is not so for solvable or nilpotent varieties. My knowledge on this is limited by the (old) book of H. Neumann "Varieties of Groups", and by a paper of Shirvanyan about free Burnside groups. Probably more is known nowadays. Of course, one can pose the same question also for other algebraic systems, for example, for algebras.

1 # Embedding of relatively free groups of bigger rank into ones of smaller rank

This question is prompted by this one by Arturo Magidin: whether there exist varieties of groups in which the relatively free group of rank 2 is finite, and the relatively free group of rank 3 is infinite. My question is: in which varieties of groups is it true that relatively free groups of bigger rank embed into relatively free groups of smaller rank? This is so, for example, for the variety of all groups, and also for Burnside varieties (defined by the identity $x^n = 1$). On the other hand, this is not so for solvable or nilpotent varieties. My knowledge on this is limited by the (old) book of H. Neumann "Varieties of Groups", and by a paper of Shirvanyan about Burnside varieties. Probably more is known nowadays. Of course, one can pose the same question also for other algebraic systems, for example, for algebras.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422810673713684, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/207054/expected-value-for-a-function-concerning-a-balls-and-bins-problem/207058
# Expected value for a function concerning a balls and bins problem

I'm optimizing a hash function mapping $M$ items into $N$ bins, and I need a criterion for evaluating the quality of the mapping. Denoting the number of items put into bin $i$ by $x_i$, an ideal mapping would make each $x_i$ equal to either $\lfloor {\frac MN} \rfloor$ or $\lceil {\frac MN} \rceil$. Currently, I'm using $\sum x_i ^ p$ with $p=3$ as my criterion.

Are there simple closed-form expressions for the expected value and variance, assuming uniform random placement? I might switch to another criterion, so I'm interested in formulas for them, too. I don't care about asymptotic expressions, as the typical values are $10 < N < M < 1000$.

-

## 1 Answer

By symmetry and linearity of expectation, the expected value is just $N$ times the expected value of $x_i^p$ for one bin, that is, $$N\sum_{k=0}^M\binom Mk\left(\frac1N\right)^k\left(1-\frac1N\right)^{M-k}k^3\;,$$ which according to Wolfram|Alpha is $$\frac M{N^2}\left(M^2+3MN+N^2-3M-3N+2\right)$$ for $p=3$. You can calculate the variance similarly, by expressing it as an expectation value, but the resulting expression will probably be rather complicated. If you want to use a different value of $p$, just replace it in the W|A input.

-

Thank you for this helpful answer. Now it looks quite easy, but I couldn't hope to come to a usable expression in an acceptable time (and I didn't know that WA is that good). – maaartinus Oct 4 '12 at 18:04
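For what it's worth, here is a short script (mine, not part of the original exchange) that checks the binomial sum, the closed form for $p=3$, and a Monte Carlo simulation against each other:

````
import math
import random

M, N, p = 50, 7, 3  # illustrative sizes within the stated range

def binomial_sum():
    # N * E[x_i^p] with x_i ~ Binomial(M, 1/N)
    return N * sum(math.comb(M, k) * (1 / N) ** k * (1 - 1 / N) ** (M - k) * k ** p
                   for k in range(M + 1))

def closed_form():  # the Wolfram|Alpha expression, valid for p = 3
    return M / N ** 2 * (M ** 2 + 3 * M * N + N ** 2 - 3 * M - 3 * N + 2)

def monte_carlo(trials=20000):
    total = 0
    for _ in range(trials):
        bins = [0] * N
        for _ in range(M):
            bins[random.randrange(N)] += 1  # uniform random placement
        total += sum(x ** p for x in bins)
    return total / trials

print(binomial_sum(), closed_form(), monte_carlo())
````

The first two numbers agree exactly; the Monte Carlo estimate should come out close to them.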
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9298537373542786, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/12200/normalizing-variables-for-svd-pca
# “Normalizing” variables for SVD / PCA

Suppose we have $N$ measurable variables, $(a_1, a_2, \ldots, a_N)$, we take a number $M > N$ of measurements, and then wish to perform singular value decomposition on the results to find the axes of highest variance for the $M$ points in $N$-dimensional space. (Note: assume that the means of $a_i$ have already been subtracted, so $\langle a_i \rangle = 0$ for all $i$.)

Now suppose that one (or more) of the variables has a significantly different characteristic magnitude than the rest. E.g. $a_1$ could have values in the range $10-100$ while the rest could be around $0.1-1$. This will skew the axis of highest variance towards $a_1$'s axis very much. The difference in magnitudes might simply be because of an unfortunate choice of unit of measurement (if we're talking about physical data, e.g. kilometres vs metres), but the different variables might actually have totally different dimensions (e.g. weight vs volume), so there might not be any obvious way to choose "comparable" units for them.

Question: I would like to know if there exist any standard / common ways to normalize the data to avoid this problem. I am more interested in standard techniques that produce comparable magnitudes for $a_1 - a_N$ for this purpose rather than coming up with something new.

EDIT: One possibility is to normalize each variable by its standard deviation or something similar. However, the following issue appears then: let's interpret the data as a point cloud in $N$-dimensional space. This point cloud can be rotated, and this type of normalization will give different final results (after the SVD) depending on the rotation. (E.g. in the most extreme case imagine rotating the data precisely to align the principal axes with the main axes.) I expect there won't be any rotation-invariant way to do this, but I'd appreciate it if someone could point me to some discussion of this issue in the literature, especially regarding caveats in the interpretation of the results.

-

The problem itself usually is not rotation invariant, because each of the variables is recorded with a conventional unit of measurement appropriate to it. E.g., $a_1$ might be in feet, $a_2$ in microns, $a_3$ in liters, etc. Even when all units are the same, if the variables measure different kinds of things, the amounts by which they vary will likely differ in ways characteristic of those variables: once again, this is not rotation invariant. Therefore you should abandon rotation invariance as a guiding principle or consideration. – whuber♦ Jun 22 '11 at 19:28

## 2 Answers

The three common normalizations are centering, scaling, and standardizing. With variable X:

- Centering is Xi − MEANx. The resultant X will have mean = 0.
- Scaling is Xi / sqrt(SSx). The resultant X will have SS = 1.
- Standardizing is centering-then-scaling. The resultant X will have mean = 0 and SS = 1.

-

Can you define "SS" please? – Szabolcs Jun 22 '11 at 7:53

Sum-of-squares: the sum of squared Xi. – ttnphns Jun 22 '11 at 7:57

The reason for setting the sum of squares to 1, and not the variance, is that then the singular values will correspond to the standard deviations along the principal axes (unless I'm mistaken)? – Szabolcs Jun 22 '11 at 8:08

Please also see my edit to the question. – Szabolcs Jun 22 '11 at 8:13

@Szabolcs, I actually may miss a point of your edit. But PCA (or SVD) is just a rotation itself (a special case of orthogonal rotation of the axes).
Any translation (like centering) or shrinking/dilatation (like scaling) of the cloud should affect the results of this rotation. – ttnphns Jun 22 '11 at 8:40

A common technique before applying PCA is to subtract the mean from the samples. If you don't do it, the first eigenvector will be the mean. I'm not sure whether you have done it, but let me talk about it. In MATLAB code, this is:

````
clear, clf
clc

%% Let us draw a line
scale = 1;
x = scale .* (1:0.25:5);
y = 1/2*x + 1;

%% and add some noise
y = y + rand(size(y));

%% plot and see
subplot(1,2,1), plot(x, y, '*k')
axis equal

%% Put the data in columns and see what SVD gives
A = [x;y];
[U, S, V] = svd(A);
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
     [mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
     ':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
     [mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
     '-.k');
title('The left singular vectors found directly')

%% Now, subtract the mean and see its effect
A(1,:) = A(1,:) - mean(A(1,:));
A(2,:) = A(2,:) - mean(A(2,:));
[U, S, V] = svd(A);
subplot(1,2,2)
plot(x, y, '*k')
axis equal
hold on
plot([mean(x)-U(1,1)*S(1,1) mean(x)+U(1,1)*S(1,1)], ...
     [mean(y)-U(2,1)*S(1,1) mean(y)+U(2,1)*S(1,1)], ...
     ':k');
plot([mean(x)-U(1,2)*S(2,2) mean(x)+U(1,2)*S(2,2)], ...
     [mean(y)-U(2,2)*S(2,2) mean(y)+U(2,2)*S(2,2)], ...
     '-.k');
title('The left singular vectors found after subtracting mean')
````

As can be seen from the figure, I think you should subtract the mean from the data if you want to analyze the (co)variance better. Then the values will not be between 10-100 and 0.1-1, but their mean will all be zero. The variances will be found as the eigenvalues (or the squares of the singular values). The eigenvectors we find are less affected by the scale of a dimension when we subtract the mean than when we do not. For instance, I've tested and observed the following, which suggests that subtracting the mean might matter in your case. So the problem may result not from the variance but from the translation difference.

````
% scale = 0.5, without subtracting mean
U =
   -0.5504   -0.8349
   -0.8349    0.5504

% scale = 0.5, with subtracting mean
U =
   -0.8311   -0.5561
   -0.5561    0.8311

% scale = 1, without subtracting mean
U =
   -0.7327   -0.6806
   -0.6806    0.7327

% scale = 1, with subtracting mean
U =
   -0.8464   -0.5325
   -0.5325    0.8464

% scale = 100, without subtracting mean
U =
   -0.8930   -0.4501
   -0.4501    0.8930

% scale = 100, with subtracting mean
U =
   -0.8943   -0.4474
   -0.4474    0.8943
````

-

I should have mentioned in the question that the mean has already been subtracted. I'll edit it accordingly. – Szabolcs Jun 22 '11 at 7:37

One might simply divide each variable by its standard deviation, but I was wondering if there are other things people do. For example, we can think of this dataset as a point cloud in $N$-dimensional space. Is there a way to do it that does not depend on the rotation in this $N$-d space? If we divide by standard deviations, it will matter along which axes those standard deviations are taken (i.e. it's not rotation invariant). If we do it along the principal axes, then I think the variables will appear uncorrelated. – Szabolcs Jun 22 '11 at 7:41

I realize there might not be a rotation-invariant way to do it, but I'd love to at least read some discussion of these issues ... any pointers welcome. Note: I have no training in applied stat (only maths, such as linalg, prob theory), so I'm learning this stuff as I'm going. – Szabolcs Jun 22 '11 at 7:43
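As a side note, the three normalizations from the first answer are one-liners in any matrix language; here is a small NumPy sketch (mine, not from either answer; the data and column scales are made up) of applying them column-wise before the SVD:

````
import numpy as np

rng = np.random.default_rng(0)
# 100 measurements of 4 variables; the first has a much larger scale.
A = rng.normal(size=(100, 4)) * np.array([50.0, 0.5, 0.5, 0.5])

centered = A - A.mean(axis=0)                       # mean = 0 per column
scaled = A / np.sqrt((A ** 2).sum(axis=0))          # SS = 1 per column
standardized = centered / np.sqrt((centered ** 2).sum(axis=0))  # both

# SVD of the standardized data: the singular values now reflect the
# correlation structure rather than the raw units of each variable.
U, s, Vt = np.linalg.svd(standardized, full_matrices=False)
print(s)
````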
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8962807059288025, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/247495/adding-cohen-reals-one-at-a-time
# Adding Cohen reals one at a time

We know that if we start with a ctm $\mathbb{B}$ and force with the poset of finite functions from $\omega$ to $2$, we add a single Cohen real. We also know that if we force with the poset $\mathbb{P} = Fn(\kappa \times \omega, 2, \aleph_0)$, we add $\kappa$ many reals (and hence can make the Continuum Hypothesis fail).

What happens if instead we iterate adding one real $\kappa$ many times? Would we still get a model of not-CH?

-

How are you iterating? (Meaning: what are you doing at limit stages? Or perhaps you are iterating in some completely different, non-linear fashion?) – Andres Caicedo Nov 29 '12 at 19:32

@Andres: Just by the fact that the question was asked naively, I would guess that this is a finite support iteration. – Asaf Karagila Nov 29 '12 at 19:40

– Asaf Karagila Nov 29 '12 at 19:42

Perhaps it is worth noting that even when forcing with finite functions from $\omega$ to $2$ you don't add just a single Cohen real but a whole bunch of them. – Miha Habič Nov 29 '12 at 20:59

@Miha: Yes, but not "enough"... – Asaf Karagila Nov 29 '12 at 22:26

## 1 Answer

Note that the definition of Cohen forcing as $2^{<\omega}$ does not change between models. Iterating it $\kappa$ many times, or taking the product of $\kappa$ many Cohen posets, or using $\mathbb P$ as you defined it -- all of these have the same consequence. So to your question: yes, a finite-support iteration of length $\kappa$ of adding a single Cohen real at a time ends up with a model of $\lnot$CH.

-

Thanks! We've only done basic one-step forcing in lectures, so I didn't even know how you would iterate forcing (apart from in the naive way). – Kris Nov 29 '12 at 20:01

Sorry to ruin your palindrome. ;) – Arthur Fischer Nov 29 '12 at 20:16

@Kris, the issue is that explaining precisely what one means by "the naive way" may be a bit problematic. For example, you could have $M_0\subseteq M_1\subseteq M_2\subseteq\dots$ models of set theory, each $M_{i+1}$ an extension of $M_i$ by Cohen forcing, and yet $M_\omega=\bigcup_n M_n$ is not a model of set theory. Once one sees how to do iterations "internally" these worries disappear, but there are several options on how to proceed. (It is a very interesting topic.) – Andres Caicedo Nov 29 '12 at 20:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485989212989807, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/39318/a-closed-vessel-full-of-water/39321
# A closed vessel full of water [closed]

A closed vessel full of water is rotating with constant angular velocity $\Omega$ about a horizontal axis. Show that the surfaces of equal pressure are circular cylinders whose common axis is at a height $g/{\Omega}^2$ above the axis of rotation.

Any ideas? I do not know how to start.

-

– David Zaslavsky♦ Oct 8 '12 at 3:46

Hi glebovg. Welcome to Physics.SE. This site deals with conceptual physics Q&A. We don't encourage homework questions that don't involve any sort of work done by the author (which is you) and ask other users to solve the problem. If you think you could clarify your question, add what you've done along with your question. We're ready to help you. If you aren't clear, please have a look at our homework policy for more info. After improving the post, flag it for moderator attention. – Raindrop Feb 17 at 17:30

## closed as too localized by David Zaslavsky♦ Oct 8 '12 at 3:45

This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.

## 1 Answer

Use the assumption that the water rotates at the same angular velocity as the vessel. Consider a small cube at an arbitrary point of the vessel, and consider the forces acting on it: the force of gravity and the force caused by pressure. As the cube is small, the force caused by pressure can be expressed via spatial derivatives of the pressure. Together the forces make the cube move with centripetal acceleration, so you can find the spatial derivatives of pressure, $\frac{\partial P}{\partial x}$ and $\frac{\partial P}{\partial y}$, from Newton's second law.

-
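For what it's worth, here is a sketch of the computation the answer outlines (my own notation, not from the thread: take coordinates $(x,y)$ in a plane perpendicular to the rotation axis, with $y$ pointing vertically upward and $\rho$ the density of the water). A fluid element at $(x,y)$ moves in a circle about the axis, so its acceleration is the centripetal $-\Omega^2(x,y)$. Newton's second law for the element, $\rho\,\vec a = -\nabla P + \rho\,\vec g$, then gives $$\nabla P = \rho\,\bigl(\Omega^2 x,\; \Omega^2 y - g\bigr),$$ which integrates to $$P = \frac{\rho \Omega^2}{2}\left(x^2 + y^2\right) - \rho g y + C = \frac{\rho \Omega^2}{2}\left(x^2 + \Bigl(y - \frac{g}{\Omega^2}\Bigr)^2\right) + C'.$$ The surfaces of constant $P$ therefore satisfy $x^2 + (y - g/\Omega^2)^2 = \text{const}$ in every cross-section, i.e. they are circular cylinders whose common axis lies at height $g/\Omega^2$ above the rotation axis, as required.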
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417115449905396, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Straight_Line_and_its_construction&diff=13914&oldid=13911
# Straight Line and its construction

### From Math Images

(Difference between revisions, 7 July 2010: the edit replaces positional figure references such as "the picture above" and "shown on the left" with explicit references to Image 3 through Image 22, and renumbers the last images. The revised paragraphs on the steam engines and Watt's linkage appear in full in the revision below; the revised passages from later sections of the article, which are not repeated below, are the following.)

Image 13

Mathematicians and engineers had been searching for almost a century for a solution to the straight-line linkage problem, but all had failed until 1864, when a French army officer, Charles Nicolas Peaucellier, came up with his ''inversor linkage''. Interestingly, he did not publish his findings and proof until 1873, when Lipmann I. Lipkin, a student from the University of St. Petersburg, demonstrated the same working model at the World Exhibition in Vienna. Peaucellier acknowledged Lipkin's independent findings with the publication of the details of his discovery and the mathematical proof.

Now, the linkage that produces a straight-line motion is much more complicated than folding a piece of paper, but the Peaucellier-Lipkin linkage is amazingly simple, as shown in Image 13. In the next section, a proof of how this linkage draws a straight line is provided.

Image 14

Let's turn to a skeleton drawing of the Peaucellier-Lipkin linkage in Image 14. It is constructed in such a way that $OA = OB$ and $AC=CB=BP=PA$. Furthermore, all the bars are free to rotate at every joint, and point $O$ is a fixed pivot. Due to the symmetrical construction of the linkage, it goes without proof that points $O$, $C$ and $P$ lie on a straight line. Construct lines $OCP$ and $AB$; they meet at point $M$.

Image 15

Refer to Image 15. Let's fix the path of point $C$ so that it traces out a circle that has point $O$ on it. $QC$ is the extra link, pivoted to the fixed point $Q$ with $QC=QO$. Construct line $OQ$, which cuts the circle at point $R$. In addition, construct line $PN$ such that $PN \perp OR$.

Image 16

The new linkage caused considerable excitement in London. Mr. Prim, "engineer to the House", utilized the new compact form invented by H. Hart to fit his new blowing engines, which proved to be "exceptionally quiet in their operation." In this compact form, $DA=DC$, $AF=CF$ and $AB = BC$. Points $E$ and $F$ are fixed pivots. In Image 16, $F$ is the inversive center; points $D$, $F$ and $B$ are collinear, and $DF \cdot DB$ is of constant value. I leave it to you to prove the rest. Mr. Prim's blowing engine was used for ventilating the House of Commons in 1877. The crosshead of the reciprocating air pump is guided by the Peaucellier linkage shown in Image 17. The slate-lined air cylinders had rubber-flap inlet and exhaust valves and a piston whose periphery was formed by two rows of brush bristles. Prim's machine was driven by a steam engine.

Image 18

After the Peaucellier-Lipkin linkage was introduced to England in 1874, Mr. Hart of Woolwich devised a new linkage that contains only four links, the blue part shown in Image 18. Point $O$ is the inversion center, with $OP$ and $OQ$ collinear and $OP \cdot OQ =$ constant. When point $P$ is constrained to move in a circle that passes through point $O$, point $Q$ will trace out a straight line. See below for the proof.

Image 19

There are many other mechanisms that create straight lines; I will introduce only one of them here. Refer to Image 19. Consider two circles $C_1$ and $C_2$ whose radii satisfy $2r_2=r_1$. We roll $C_2$ inside $C_1$ without slipping, as shown in Image 20. Then the arc lengths give $r_1\beta = r_2\alpha$. Voila! $\alpha = 2\beta$, and point $C$ has to be on the line joining the original points $P$ and $Q$! The same argument goes for point $P$. As a result, point $C$ moves in the horizontal line and point $P$ moves in the vertical line.

Image 21

In 1801, James White patented his mechanism using this rolling motion; it is shown in Image 21. Interestingly, if you attach a rod of fixed length to points $C$ and $P$, the end of the rod, $T$, will trace out an ellipse, as seen in Image 22. Why? Consider the coordinates of $T$ in terms of $\theta$, $PT$ and $CT$: point $T$ has coordinates $(CT \cos \theta, PT \sin \theta)$. Now, whenever we see $\cos \theta$ and $\sin \theta$ together, we want to square them. Hence, $x^2=CT^2 \cos^2 \theta$ and $y^2=PT^2 \sin^2 \theta$. Well, they are not so pretty yet. So we make them pretty by dividing $x^2$ by $CT^2$ and $y^2$ by $PT^2$, obtaining $\frac {x^2}{CT^2} = \cos^2 \theta$ and $\frac {y^2}{PT^2} = \sin^2 \theta$. Voila again! $\frac {x^2}{CT^2} + \frac {y^2}{PT^2}=1$, which is exactly the equation of an ellipse.

Image 22

## Revision as of 13:35, 7 July 2010

How to draw a straight line without a straight edge

Independently invented by a French army officer, Charles-Nicolas Peaucellier, and a Lithuanian (disputed) mathematician, Lipmann Lipkin, this is the device that draws a straight line without using a straight edge. It was the first planar linkage (a series of rigid links connected with joints to form a closed chain, or a series of closed chains; each link has two or more joints, and the joints have various degrees of freedom to allow motion between the links) that drew a straight line without a reference guideway, and it had important applications in engineering and mathematics.

Field: Geometry. Created By: Cornell University Libraries and the Cornell College of Engineering

# Basic Description

What is a straight line? How do you define straightness? How do you construct something straight without assuming you have a straight edge? These are questions that seem silly to ask because they are so intuitive. We come to accept that straightness is simply straightness and its definition, like that of point and line, is simply assumed. However, compare this to the way we draw a circle. When using a compass to draw a circle, we are not starting with a figure that we accept as circular; instead, we are using a fundamental property of circles: that the points on a circle are at a fixed distance from the center. This page explores the properties of a straight line and hence its construction.

# A More Mathematical Explanation

Note: understanding of this explanation requires a little geometry.

## What is a straight line?

Image 1

Today, we simply define a line as a one-dimensional object that extends to infinity in both directions and is straight, i.e. has no wiggles along its length. But what is straightness? It is a hard question, because we have the picture in our head and the answer right under our breath, but we simply cannot articulate it. In Euclid's book Elements, he defined a straight line as "lying evenly between its extreme points" and having "breadthless length." The definition is pretty useless. What does he mean when he says "lying evenly"? It tells us nothing about how to describe or construct a straight line. So what is straightness anyway? There are a few good answers. For instance, in Cartesian coordinates, the graph of $y=ax+b$ is a straight line. In addition, the definition we are most familiar with is that a straight line gives the shortest distance between two points. However, it is important to realize that on other surfaces the meanings of "shortest" and "straight" differ from those on a flat plane.
For example, the shortest path between two points on a sphere lies along a "great circle", the intersection of the sphere with a plane containing a diameter of the sphere, and a great circle is straight on the spherical surface. For more properties of straight lines, you may refer to the book Experience Geometry by zzz.

Image 2

## The Quest to Draw a Straight Line

### The Practical Need

Now, having defined what a straight line is, we have to figure out a way to construct one on a plane without using anything that we assume to be straight, such as a straight edge (or ruler), just as we construct a circle using a compass. Historically, this has been of great interest to mathematicians and engineers, not only because it is an interesting question to ponder but also because it has important applications in engineering. Since the invention of various steam engines and the machines powered by them, engineers have been trying to perfect mechanical linkages that convert all kinds of motions (especially circular motion) to linear motion.

Image 3

Image 3 shows a patent drawing of an early steam engine. It is of the simplest form, with a boiler (on the left), a cylinder with piston, a beam (on top) and a pump (on the right side) at the other end. The pump was usually used to extract water from mines. When the piston is at its lowest position, steam is let into the cylinder from valve K and pushes the piston upwards. Afterward, when the piston is at its highest position, cold water is let in from valve E, cooling the steam in the cylinder and causing the pressure in the cylinder to drop below atmospheric pressure. The difference in pressure causes the piston to move downwards. After the piston returns to the lowest position, the whole process is repeated. This kind of steam engine is called "atmospheric" because it utilized atmospheric pressure to cause the downward action of the piston (steam only balances out the atmospheric pressure and allows the piston to return to the highest point). Since in the downward motion the piston pulls on the beam, and in the upward motion the beam pulls on the piston, the connection between the end of the piston rod and the beam is always in tension (under stretching), and that is why a chain is used as the connection.

Anyway, the piston moves in the vertical direction and the piston rod takes only axial loading, i.e. forces applied in the direction along the rod. However, from the picture it is clear that the end of the piston rod does not move in a straight line, because the end of the beam describes an arc of a circle. As a result, horizontal forces are created and exerted on the piston rod. Consequently, wear and tear is much quickened and the efficiency of the engine greatly compromised. Considering that the up-and-down cycle repeats itself hundreds of times every minute and the engine is expected to run 24/7 to make profits for the investors, such a defect in the engine could not be tolerated, and there was thus a great need for improvements.

Image 4

Improvements were made. Firstly, "double-action" engines were built, part of one of which is shown in Image 4. Atmospheric pressure acts in both the upward and downward strokes of the engine, and two chains were used (one connected to the top of the arched end of the beam and one to the bottom), which take turns being in tension during each cycle. One might ask why a chain was used all the time. The answer was simple: to fit the curved end of the beam.
However, this did not fundamentally solve the problem, and unfortunately created more. The additional chain increased the height of the engine and made manufacturing very difficult (it was hard to make straight steel bars and rods back then) and costly.

Image 5

Secondly, the beam was dispensed with and replaced by a gear, as shown in Image 5. The piston rod was fitted with teeth (labeled k) to drive the gear. Theoretically, this solves the problem fundamentally: the piston rod is confined between the guiding wheel at K and the gear, and it moves only up and down. However, the practical problems were still there. The friction and the noise between all the guideways and the wheels could not be ignored, not to mention the increased possibility of failure and cost of maintenance due to the additional parts. Therefore, neither of these methods was satisfactory, and the need for a linkage that produces straight-line motion remained imperative.

### James Watt's breakthrough

James Watt found a mechanism that converted the linear motion of the piston in the cylinder to the semicircular motion of the beam (or the circular motion of the flywheel) and vice versa. In 1784, he invented a three-member linkage that solved the linear-to-circular motion problem practically, as illustrated by the animation below. In its simplest form, there are two radius arms of the same length and a connecting arm with midpoint P. Point P moves in a nearly straight line. However, this linkage only produces an approximate straight line (actually a stretched figure 8), as shown in Image 7, much to the chagrin of the mathematicians who were after absolutely straight lines. There is a more general form of Watt's linkage in which the two radius arms have different lengths, as shown in Image 6. To make sure that point P still moves in the stretched figure 8, it has to be positioned so that it satisfies the ratio $\frac{AB}{CD} = \frac{CP}{CB}$.

Image 6 Image 7

### The Motion of Point P

We intend to describe the path of $P$, so that we can show it does not move in a straight line (which is obvious) and, more importantly, pinpoint the position of $P$ using a parameter we know, such as the angle of rotation or one coordinate of point $P$. This is awfully important in engineering, as engineers would like to know that no two parts of the machine will collide with each other throughout the motion.

#### Algebraic Description

We see that $P$ moves in a stretched figure 8 and will tend to think that there should be a nice closed form for the relationship between the coordinates of $P$, like that of the circle. But after this section, you will see that there is a closed form, at least theoretically, but it is not "nice" at all.

Image 8

We know the coordinates of the fixed pivots $A$ and $D$. Hence let the coordinates of $A$ be $(0,0)$ and the coordinates of $D$ be $(c,d)$. We also know the lengths of the bars; let $AB=CD=r, BC=m$. Suppose that at one instant we know the coordinates of $B$ to be $(a,b)$. Then $C$ lies on the circle centered at $B$ with radius $m$, and $C$ also lies on the circle centered at $D$ with radius $r$, so the coordinates of $C$ have to satisfy the two equations below.

$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \end{cases}$

Now, since we know that $B$ is on the circle centered at $A$ with radius $r$, the coordinates of $B$ have to satisfy the equation $a^2+b^2=r^2$. Therefore, the coordinates of $C$ have to satisfy the three equations below.
$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \\ a^2+b^2=r^2 \end{cases}$

Now, expanding the first two equations, we have

$\begin{cases} x^2+y^2-2ax-2by+a^2+b^2=m^2 & \text{(Eq. 1)} \\ x^2+y^2-2cx-2dy+c^2+d^2=r^2 & \text{(Eq. 2)} \end{cases}$

Subtracting Eq. 2 from Eq. 1, we have

$(-2a+2c)x-(2b-2d)y+(a^2+b^2)-(c^2+d^2)=m^2-r^2 \quad \text{(Eq. 3)}$

Substituting $a^2+b^2=r^2$ and rearranging, we have

$(-2a+2c)x-(2b-2d)y=m^2-2r^2+c^2+d^2$

Hence

$y=\frac {-2a+2c}{2b-2d}x-\frac {m^2-2r^2+c^2+d^2}{2b-2d} \quad \text{(Eq. 4)}$

Now, we could manipulate Eq. 3 to get an expression for $b$, i.e., $b=f(a,c,d,m,r,x,y)$. Next, we substitute $b=f(a,c,d,m,r,x,y)$ back into Eq. 1 and obtain an expression for $a$, i.e., $a=g(x,y,d,c,m,r)$. Since $b=\pm \sqrt {r^2-a^2}$, we have expressions for $a$ and $b$ in terms of $x,y,d,c,m$ and $r$. Say point $P$ has coordinates $(x',y')$; then $x'=\frac {a+x}{2}$ and $y'=\frac {b+y}{2}$, which yield

$\begin{cases} x=2x'-a & \text{(Eq. 5)} \\ y=2y'-b & \text{(Eq. 6)} \end{cases}$

In the last step we substitute $a=g(x,y,d,c,m,r)$, $b=\pm \sqrt {r^2-a^2}$, Eq. 5 and Eq. 6 back into Eq. 4, and we finally have a relationship between $x'$ and $y'$. Of course, it will be a messy one, but we could certainly use Mathematica to do the algebra.

#### Parametric Description

Alright, since the algebraic equations are not agreeable at all, we have to resort to a parametric description. Come to think of it, it would not be too bad if we could describe the motion of $P$ using the angle of rotation. As a matter of fact, it is easier to obtain the angle of rotation than to know one of $P$'s coordinates.

Image 9

We will parametrize $P$ with the angle $\theta$, in keeping with most parametrizations of a point.

$\begin{cases} \overrightarrow {AB} = (r \sin \theta, r \cos \theta) \\ \overrightarrow {BC} = (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha)) \end{cases}$

Now let $BD=l$. Then, using the cosine formula, we have

$m^2+l^2-2ml\cos \alpha = r^2$

As a result, we can express $\alpha$ as

$\alpha = \cos^{-1} \frac {m^2+l^2-r^2}{2ml}$

Since $l = \sqrt{(c-r \sin \theta)^2+(d-r \cos \theta)^2}$, with $c$ and $d$ being the coordinates of point $D$, we can find $\alpha$ in terms of $\theta$. Furthermore,

$\begin{align} \overrightarrow {BD} & = \overrightarrow {AD}-\overrightarrow {AB} \\ & = (c,d) - (r\sin \theta, r \cos \theta) \\ & = (c - r\sin \theta, d - r \cos \theta) \end{align}$

Therefore,

$\beta = \tan^{-1}\frac {d-r \cos \theta}{c - r \sin \theta}$

Hence,

$\begin{align} \overrightarrow {AP} & = \overrightarrow {AB} + \frac {1}{2} \overrightarrow {BC} \\ & = (r \sin \theta, r \cos \theta) + \frac {m}{2}(\sin (\frac {\pi}{2} + \alpha + \beta), \cos (\frac {\pi}{2} + \alpha + \beta)) \end{align}$

Now $\overrightarrow {AP}$ is parametrized in terms of $\theta, c, d, r$ and $m$.

Image 10

Imitations were a big problem back in those days. When filing for a patent, James Watt, like other inventors, had to explain how his device worked without revealing the critical details that would let others easily copy it. As seen in Image 10, the original patent illustration, Watt drew his simple linkage on a separate diagram, but it cannot be found anywhere in the main illustration. That was Watt's secret. What he actually used on his engine was the modified version of the basic linkage shown in Image 11. The link $ABCD$ is the original three-member linkage with $AB=CD$ and point $P$ being the midpoint of $BC$.
$A$ is the pivot of the beam, fixed on the engine frame, while $D$ is also fixed. Watt modified the linkage by adding a parallelogram $BCFE$ to it and connecting point $F$ to the piston rod. We now know that point $P$ moves in a quasi-straight line, as shown previously. The importance of having two points that move in a straight line is that one can be connected to the piston rod that drives the beam, while the other converts the circular motion to linear motion so as to drive the valve gears that control the opening and closing of the valves. It turns out that point $F$ moves in a quasi-straight line similar to that of point $P$.

Image 11

How would we find the parametric equation for point $F$ then? Well, it is easy enough.

Image 12

$\overrightarrow {AB} = (r \sin \theta, r \cos \theta) \therefore \overrightarrow {AE} = \frac {e+f}{r}(r \sin \theta, r \cos \theta)$

Furthermore,

$\overrightarrow {AF} = \overrightarrow {AE} + \overrightarrow {BC}$

Therefore,

$\overrightarrow {AF} = \frac {e+f}{r}(r \sin \theta, r \cos \theta) + (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha))$

### The First Planar Straight Line Linkage - Peaucellier-Lipkin Linkage

Image 13

Mathematicians and engineers had been searching for almost a century for a straight-line linkage, and all had failed until 1864, when the French army officer Charles-Nicolas Peaucellier came up with his inversor linkage. Interestingly, he did not publish his findings and proof until 1873, when Lipmann I. Lipkin, a student from the University of St. Petersburg, demonstrated the same working model at the World Exhibition in Vienna. Peaucellier acknowledged Lipkin's independent findings and then published the details of his own 1864 discovery together with the mathematical proof.

Take a minute to ponder the question: "How do you produce a straight line?" We all know, or rather assume, that light travels in a straight line. But does it always do that? Einstein's theory of relativity has shown (and it has been verified) that light is bent by gravity, and therefore our assumption that light travels in straight lines does not hold all the time. Another, simpler method is just to fold a piece of paper: the crease will be a straight line. Now, a linkage that produces straight-line motion is much more complicated than folding a piece of paper, but the Peaucellier-Lipkin linkage is amazingly simple, as shown in Image 13. In the next section, a proof of how this linkage draws a straight line is provided.

Image 14

Let's turn to a skeleton drawing of the Peaucellier-Lipkin linkage in Image 14. It is constructed in such a way that $OA = OB$ and $AC=CB=BP=PA$. Furthermore, all the bars are free to rotate at every joint, and point $O$ is a fixed pivot. Due to the symmetrical construction of the linkage, points $O$, $C$ and $P$ lie on a straight line. Construct lines $OCP$ and $AB$, meeting at point $M$. Since the shape $APBC$ is a rhombus, $AB \perp CP$ and $CM = MP$.

Now,

$(OA)^2 = (OM)^2 + (AM)^2$

$(AP)^2 = (PM)^2 + (AM)^2$

Therefore,

$\begin{align} (OA)^2 - (AP)^2 & = (OM)^2 - (PM)^2\\ & = (OM-PM)\cdot(OM + PM)\\ & = OC \cdot OP\\ \end{align}$

Let's take a moment to look at the relation $(OA)^2 - (AP)^2 = OC \cdot OP$. Since the lengths $OA$ and $AP$ are constant, the product $OC \cdot OP$ keeps the same value no matter how you change the shape of this construction. (A numerical check of this inversion property is given at the end of this page.)

Image 15

Refer to Image 15. Let's fix the path of point $C$ so that it traces out a circle that has point $O$ on it.
$QC$ is the extra link, pivoted to the fixed point $Q$ with $QC=QO$. Construct the line $OQ$, cutting the circle at point $R$. In addition, construct the line $PN$ such that $PN \perp OR$. Since

$\angle OCR = 90^\circ$

we have

$\vartriangle OCR \sim \vartriangle ONP, \frac{OC}{OR} = \frac{ON}{OP}$, and $OC \cdot OP = ON \cdot OR$

Therefore $ON = \frac {OC \cdot OP}{OR} =$ constant, i.e., the length of $ON$ (the x-coordinate of $P$ w.r.t. $O$) does not change as points $C$ and $P$ move. Hence, point $P$ moves in a straight line. ∎

### Inversive Geometry in the Peaucellier-Lipkin Linkage

As a matter of fact, the first part of the proof given above is already sufficient. Once we have shown that points $O$, $C$ and $P$ are collinear and that $OC \cdot OP$ is of constant value, inversive geometry tells us that points $C$ and $P$ are an inverse pair with $O$ as the center of inversion. Therefore, once $C$ moves in a circle that contains $O$, $P$ will move in a straight line, and vice versa. ∎ See Inversion for more detail.

### The Peaucellier-Lipkin Linkage in action

Image 16

The new linkage caused considerable excitement in London. Mr. Prim, "engineer to the House", utilized the new compact form invented by H. Hart to fit his new blowing engines, which proved to be "exceptionally quiet in their operation." In this compact form, $DA=DC$, $AF=CF$ and $AB = BC$. Points $E$ and $F$ are fixed pivots. In Image 16, $F$ is the center of inversion; points $D$, $F$ and $B$ are collinear, and $DF \cdot DB$ is of constant value. I leave it to you to prove the rest.

Mr. Prim's blowing engine was used for ventilating the House of Commons, 1877. The crosshead of the reciprocating air pump is guided by a Peaucellier linkage, shown in Image 17. The slate-lined air cylinders had rubber-flap inlet and exhaust valves and a piston whose periphery was formed by two rows of brush bristles. Prim's machine was driven by a steam engine.

Image 17

### Hart's Linkage

After the Peaucellier-Lipkin linkage was introduced to England in 1874, Mr. Hart of Woolwich devised a new linkage that contains only four links, shown as the blue part of Image 18. Point $O$ is the center of inversion, with $OP$ and $OQ$ collinear and $OP \cdot OQ =$ constant. When point $P$ is constrained to move in a circle that passes through point $O$, point $Q$ will trace out a straight line. See below for the proof.

Image 18

We know that $AB = CD, BC = AD$. As a result,

$BD \parallel AC$

Draw the line $OQ \parallel AC$, intersecting $AD$ at point $P$. Consequently, points $O,P,Q$ are collinear.

Construct the rectangle $EFCA$. Then

$\begin{align} AC \cdot BD & = EF \cdot BD \\ & = (ED + EB) \cdot (ED - EB) \\ & = (ED)^2 - (EB)^2 \\ \end{align}$

Since

$\begin{array}{lcl} (ED)^2 + (AE)^2 & = & (AD)^2 \\ (EB)^2 + (AE)^2 & = & (AB)^2 \end{array}$

we then have $AC \cdot BD = (ED)^2 - (EB)^2 = (AD)^2 - (AB)^2$. Further, because

$\frac{OP}{BD} = m, \frac{OQ}{AC} = 1-m$ where $0<m<1$,

we have

$\begin{align} OP \cdot OQ & = m(1-m)BD \cdot AC\\ & = m(1-m)((AD)^2 - (AB)^2) \end{align}$

### Other straight-line mechanisms

Image 19

There are many other mechanisms that create straight-line motion; we introduce only one of them here. Refer to Image 19. Consider two circles $C_1$ and $C_2$ whose radii satisfy $2r_2=r_1$. We roll $C_2$ inside $C_1$ without slipping, as shown in Image 20.

Image 20

Then the arc lengths satisfy $r_1\beta = r_2\alpha$. Voila! $\alpha = 2\beta$, and point $C$ has to be on the line joining the original points $P$ and $Q$! The same argument goes for point $P$.
As a result, point $C$ moves along a horizontal line and point $P$ moves along a vertical line.

Image 21

In 1801, James White patented a mechanism using this rolling motion. It is shown in Image 21. Interestingly, if you attach a rod of fixed length to points $C$ and $P$, the end $T$ of the rod will trace out an ellipse, as seen in Image 22. Why? Consider the coordinates of $T$ in terms of $\theta$, $PT$ and $CT$. Point $T$ will have the coordinates $(CT \cos \theta, PT \sin \theta)$. Now, whenever we see $\cos \theta$ and $\sin \theta$ together, we want to square them. Hence, $x^2=CT^2 \cos^2 \theta$ and $y^2=PT^2 \sin^2 \theta$. Well, they are not so pretty yet, so we make them pretty by dividing $x^2$ by $CT^2$ and $y^2$ by $PT^2$, obtaining $\frac {x^2}{CT^2} = \cos^2 \theta$ and $\frac {y^2}{PT^2} = \sin^2 \theta$. Voila again! $\frac {x^2}{CT^2} + \frac {y^2}{PT^2}=1$, and this is exactly the algebraic formula for an ellipse.

Image 22

# Teaching Materials

There are currently no teaching materials for this page.

# About the Creator of this Image

KMODDL is a collection of mechanical models and related resources for teaching the principles of kinematics--the geometry of pure motion. The core of KMODDL is the Reuleaux Collection of Mechanisms and Machines, an important collection of 19th-century machine elements held by Cornell's Sibley School of Mechanical and Aerospace Engineering.

# References

How to draw a straight line: a lecture on linkages, Alfred Bray Kempe, Ithaca, New York: Cornell University Library

How round is your circle?, John Bryant and Chris Sangwin, Princeton: Princeton University Press

# Future Directions for this Page

I need to change the size of the main picture and maybe add some more theoretical description of what a straight line is here.
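As promised above, here is a numerical check of the Peaucellier-Lipkin inversion property. This is a minimal sketch in Python (not part of the original page); the bar lengths $OA=OB=R$, rhombus side $r$, and circle radius $s$ are values I chose arbitrarily, and the geometry follows Images 14-15.

```python
import numpy as np

R, r, s = 2.0, 1.0, 1.2          # OA = OB = R, rhombus side r, QC = QO = s (assumed values)
O = np.array([0.0, 0.0])

def linkage_P(C):
    """Intersect circle(O, R) with circle(C, r) to find joints A and B,
    then return P, the rhombus vertex opposite C (P = A + B - C)."""
    c = np.linalg.norm(C)
    a = (R**2 - r**2 + c**2) / (2 * c)   # distance from O to the chord AB
    h = np.sqrt(R**2 - a**2)             # half-length of the chord AB
    u = C / c                            # unit vector along OC
    n = np.array([-u[1], u[0]])          # unit normal to OC
    A, B = O + a * u + h * n, O + a * u - h * n
    return A + B - C

for t in np.linspace(0.3, 2.0, 6):
    C = np.array([s + s * np.cos(t), s * np.sin(t)])   # C on a circle through O
    P = linkage_P(C)
    # OC * OP stays equal to R^2 - r^2 = 3, and P_x stays constant: P traces a line
    print(round(np.linalg.norm(C) * np.linalg.norm(P), 6), round(P[0], 6))
```

Every row prints the same product (3.0) and the same x-coordinate, which is exactly the content of the proof: $OC \cdot OP = (OA)^2 - (AP)^2$ is invariant, so $P$ traces the vertical line $x = (R^2-r^2)/(2s)$.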
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 198, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275825619697571, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/56992-matrix-proof.html
# Thread:

1. ## Matrix Proof

I need to prove that these determinants are equal. Note: these are determinants, not matrices; I just don't know how to set them up as determinants using the math code...

$\begin{bmatrix}a_1+b_1t&a_2+b_2t&a_3+b_3t\\a_1t+b_1&a_2t+b_2&a_3t+b_3\\c_1&c_2&c_3 \end{bmatrix}$ = $(1-t^2)$ $\begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3 \end{bmatrix}$

This tells me that I need to somehow get $(1-t^2)$ multiplied into the determinant. I know that to get that out, one would have to multiply a row by $(1-t^2)$, or possibly $(1-t)$ on two different rows... Am I thinking in the right direction?

2. Originally Posted by Hellreaver

I need to prove that these determinants are equal. Note: these are determinants, not matrices; I just don't know how to set them up as determinants using the math code...

$\begin{bmatrix}a_1+b_1t&a_2+b_2t&a_3+b_3t\\a_1t+b_1&a_2t+b_2&a_3t+b_3\\c_1&c_2&c_3 \end{bmatrix}$ = $(1-t^2)$ $\begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3 \end{bmatrix}$

This tells me that I need to somehow get $(1-t^2)$ multiplied into the determinant. I know that to get that out, one would have to multiply a row by $(1-t^2)$, or possibly $(1-t)$ on two different rows... Am I thinking in the right direction?

Consider the matrix $A = \begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3 \end{bmatrix}$. Apply the row operation R1 --> R1 + t R2 on matrix A to get matrix B: det(B) = det(A). Apply the row operation t R1 + (1 - t^2) R2 --> R2 on matrix B to get matrix C: det(C) = (1 - t^2) det(B) (scaling R2 by (1 - t^2) multiplies the determinant; adding t R1 does not change it) ......

3. Originally Posted by mr fantastic

Consider the matrix $A = \begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3 \end{bmatrix}$. Apply the row operation R1 --> R1 + t R2 on matrix A to get matrix B: det(B) = det(A). Apply the row operation t R1 + (1 - t^2) R2 --> R2 on matrix B to get matrix C: det(C) = (1 - t^2) det(B) ......

Ok, that's tricky. Thank you so much!
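For readers who want to sanity-check the identity before hunting for the row operations, here is a small symbolic verification (my addition, using SymPy; it is not part of the original thread):

```python
from sympy import symbols, Matrix, simplify

t = symbols('t')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1:4 b1:4 c1:4')

# The matrix from the question and the plain matrix A
M = Matrix([[a1 + b1*t, a2 + b2*t, a3 + b3*t],
            [a1*t + b1, a2*t + b2, a3*t + b3],
            [c1, c2, c3]])
A = Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])

# det(M) - (1 - t^2) det(A) simplifies to zero
print(simplify(M.det() - (1 - t**2) * A.det()))   # 0
```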
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9267630577087402, "perplexity_flag": "middle"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G04/g04cac.html
# NAG Library Function Document: nag_anova_factorial (g04cac)

## 1  Purpose

nag_anova_factorial (g04cac) computes an analysis of variance table and treatment means for a complete factorial design.

## 2  Specification

#include <nag.h> #include <nagg04.h> void nag_anova_factorial (Integer n, const double y[], Integer nfac, const Integer lfac[], Integer nblock, Integer inter, Integer irdf, Integer *mterm, double **table, double **tmean, Integer *maxt, double **e, Integer **imean, double **semean, double **bmean, double r[], NagError *fail)

## 3  Description

An experiment consists of a collection of units, or plots, to which a number of treatments are applied. In a factorial experiment the effects of several different sets of conditions are compared, e.g., three different temperatures, $T_1$, $T_2$ and $T_3$, and two different pressures, $P_1$ and $P_2$. The conditions are known as factors and the different values the conditions take are known as levels. In a factorial experiment the experimental treatments are the combinations of all the different levels of all factors, e.g.,

$T_1P_1 \quad T_2P_1 \quad T_3P_1$

$T_1P_2 \quad T_2P_2 \quad T_3P_2$

The effect of a factor averaged over all other factors is known as a main effect, and the effect of a combination of some of the factors averaged over all other factors is known as an interaction. This can be represented by a linear model. In the above example, if the response was $y_{ijk}$ for the $k$th replicate of the $i$th level of $T$ and the $j$th level of $P$, the linear model would be

$y_{ijk} = \mu + t_i + p_j + \gamma_{ij} + e_{ijk}$

where $\mu$ is the overall mean, $t_i$ is the main effect of $T$, $p_j$ is the main effect of $P$, $\gamma_{ij}$ is the $T \times P$ interaction and $e_{ijk}$ is the random error term. In order to find unique estimates, constraints are placed on the parameter estimates. For the example here these are:

$\sum_{i=1}^{3}\hat{t}_i = 0, \quad \sum_{j=1}^{2}\hat{p}_j = 0, \quad \sum_{i=1}^{3}\hat{\gamma}_{ij} = 0 \ \text{for}\ j = 1, 2 \quad\text{and}\quad \sum_{j=1}^{2}\hat{\gamma}_{ij} = 0 \ \text{for}\ i = 1, 2, 3,$

where $\hat{\ }$ denotes the estimate.

If there is variation in the experimental conditions (e.g., in an experiment on the production of a material, different batches of raw material may be used, or the experiment may be carried out on different days), then plots that are similar are grouped together into blocks. For a balanced complete factorial experiment all the treatment combinations occur the same number of times in each block.

nag_anova_factorial (g04cac) computes the analysis of variance (ANOVA) table by sequentially computing the totals and means for an effect from the residuals computed when previous effects have been removed. The effect sum of squares is the sum of squared totals divided by the number of observations per total. The means are then subtracted from the residuals to compute a new set of residuals. At the same time the means for the original data are computed. When all effects are removed the residual sum of squares is computed from the residuals. Given the sums of squares, an ANOVA table is then computed along with standard errors for the difference in treatment means.

The data for nag_anova_factorial (g04cac) has to be in standard order given by the order of the factors. Let there be $k$ factors, $f_1, f_2, \dots, f_k$ in that order with levels $l_1, l_2, \dots, l_k$ respectively.
Standard order requires the levels of factor $f_1$ to be in order $1,2,\dots,l_1$, and within each level of $f_1$ the levels of $f_2$ to be in order $1,2,\dots,l_2$, and so on. For an experiment with blocks, the data is given for block 1, then for block 2, etc. Within each block the data must be arranged so that the levels of factor $f_1$ are in order $1,2,\dots,l_1$ and within each level of $f_1$ the levels of $f_2$ are in order $1,2,\dots,l_2$, and so on. Any within-block replication of treatment combinations must occur within the levels of $f_k$.

The ANOVA table is given in the following order. For a complete factorial experiment the first row is for blocks, if present; then come the main effects of the factors in their order, e.g., $f_1$ followed by $f_2$, etc. These are then followed by all the two-factor interactions, then all the three-factor interactions, etc., the last two rows being for the residual and total sums of squares. The interactions are arranged in lexical order for the given factor order. For example, for the three-factor interactions of a five-factor experiment, the 10 interactions would be in the following order:

$f_1f_2f_3,\ f_1f_2f_4,\ f_1f_2f_5,\ f_1f_3f_4,\ f_1f_3f_5,\ f_1f_4f_5,\ f_2f_3f_4,\ f_2f_3f_5,\ f_2f_4f_5,\ f_3f_4f_5$

## 4  References

John J A and Quenouille M H (1977) Experiments: Design and Analysis, Griffin

## 5  Arguments

1: n – Integer. Input. On entry: the number of observations. Constraints: $n \ge 4$; n must be a multiple of nblock if nblock > 1; n must be a multiple of the number of treatment combinations, that is a multiple of $\prod_{i=1}^{k} \mathrm{lfac}[i-1]$.

2: y[n] – const double. Input. On entry: the observations in standard order, see Section 3.

3: nfac – Integer. Input. On entry: the number of factors, $k$. Constraint: $\mathrm{nfac} \ge 1$.

4: lfac[nfac] – const Integer. Input. On entry: $\mathrm{lfac}[i-1]$ must contain the number of levels for the $i$th factor, for $i=1,2,\dots,k$. Constraint: $\mathrm{lfac}[i-1] \ge 2$, for $i=1,2,\dots,k$.

5: nblock – Integer. Input. On entry: the number of blocks. If there are no blocks, set nblock = 0 or 1. Constraint: $\mathrm{nblock} \ge 0$. If $\mathrm{nblock} \ge 2$, n/nblock must be a multiple of the number of treatment combinations, that is a multiple of $\prod_{i=1}^{k} \mathrm{lfac}[i-1]$.

6: inter – Integer. Input. On entry: the maximum number of factors in an interaction term. If no interaction terms are to be computed, set inter = 0 or 1. Constraint: $0 \le \mathrm{inter} \le \mathrm{nfac}$.

7: irdf – Integer. Input. On entry: the adjustment to the residual and total degrees of freedom. The total degrees of freedom are set to $n - \mathrm{irdf}$ and the residual degrees of freedom adjusted accordingly. For examples of the use of irdf see Section 8. Constraint: $\mathrm{irdf} \ge 0$.

8: mterm – Integer *. Output. On exit: the number of terms in the analysis of variance table, see Section 8. The number of treatment effects is $\mathrm{mterm} - 3$.

9: table – double **. Output. On exit: a pointer which points to $\mathrm{mterm} \times 5$ memory locations, allocated internally. Viewing this memory as a two-dimensional $\mathrm{mterm} \times 5$ array, the first mterm rows of table contain the analysis of variance table.
The first column contains the degrees of freedom, the second column contains the sum of squares, the third column (except for the row corresponding to the total sum of squares) contains the mean squares, i.e., the sums of squares divided by the degrees of freedom, and the fourth and fifth columns contain the $F$ ratio and significance level, respectively (except for rows corresponding to the total sum of squares, and the residual sum of squares). All other cells of the table are set to zero. The first row corresponds to the blocks and is set to zero if there are no blocks. The mterm-th row corresponds to the total sum of squares for y and the (mterm − 1)-th row corresponds to the residual sum of squares. The central rows of the table correspond to the main effects followed by the interactions if specified by inter. The main effects are in the order specified by lfac and the interactions are in lexical order, see Section 3.

10: tmean – double **. Output. On exit: a pointer pointing to maxt memory locations, allocated internally. It contains the treatment means. The position of the means for an effect is given by the index in imean. For a given effect the means are in standard order, see Section 3.

11: maxt – Integer *. Output. On exit: the number of treatment means that have been computed, see Section 8.

12: e – double **. Output. On exit: a pointer pointing to maxt memory locations, allocated internally. It contains the estimated effects in the same order as for the means in tmean.

13: imean – Integer **. Output. On exit: a pointer pointing to mterm memory locations, allocated internally. It indicates the position of the effect means in tmean. The effect means corresponding to the first treatment effect in the ANOVA table are stored in tmean[0] up to tmean[imean[0] − 1]. Other effect means corresponding to the $i$th treatment effect, for $i = 2,3,\dots,\mathrm{mterm}-3$, are stored in tmean[imean[i − 2]] up to tmean[imean[i − 1] − 1].

14: semean – double **. Output. On exit: a pointer pointing to mterm memory locations, allocated internally. It contains the standard error of the difference between means corresponding to the $i$th treatment effect in the ANOVA table.

15: bmean – double **. Output. On exit: a pointer pointing to nblock + 1 memory locations, allocated internally. bmean[0] contains the grand mean; if nblock > 1, bmean[1] up to bmean[nblock] contain the block means.

16: r[n] – double. Output. On exit: the residuals.

17: fail – NagError *. Input/Output. The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_2_INT_ARG_GT: On entry, inter = ⟨value⟩ while nfac = ⟨value⟩. These arguments must satisfy inter ≤ nfac.

NE_ALLOC_FAIL: Dynamic memory allocation failed.

NE_ARRAY_CONSTANT: On entry, the elements of the array y are constant.

NE_G04CA_RES_DF: There are no degrees of freedom for the residual or the residual sum of squares is zero. In either case the standard errors and $F$-statistics cannot be computed.

NE_INT_2: On entry, nblock = ⟨value⟩, n = ⟨value⟩.
Constraint: n must be a multiple of nblock, when nblock > 1.

NE_INT_ARG_LT: On entry, inter = ⟨value⟩. Constraint: inter ≥ 0. On entry, irdf = ⟨value⟩. Constraint: irdf ≥ 0. On entry, n = ⟨value⟩. Constraint: n ≥ 4. On entry, nblock = ⟨value⟩. Constraint: nblock ≥ 0. On entry, nfac = ⟨value⟩. Constraint: nfac ≥ 1.

NE_INTARR: On entry, lfac[⟨value⟩] = ⟨value⟩. Constraint: lfac[i − 1] ≥ 2, for i = 1,2,…,nfac.

NE_PLOT_TREAT: The number of plots per block is not a multiple of the number of treatment combinations.

## 7  Accuracy

The block and treatment sums of squares are computed from the block and treatment residual totals. The residuals are updated as each effect is computed and the residual sum of squares is computed directly from the residuals. This avoids any loss of accuracy in subtracting sums of squares.

## 8  Further Comments

The number of rows in the ANOVA table and the number of treatment means are given by the following formulae. Let there be $k$ factors with levels $l_i$, for $i = 1,2,\dots,k$, and let $t$ be the maximum number of terms in an interaction; then the number of rows in the ANOVA table is

$\sum_{i=1}^{t}\binom{k}{i} + 3.$

The number of treatment means is

$\sum_{i=1}^{t}\ \prod_{j \in S_i} l_j,$

where $S_i$ is the set of all combinations of the $k$ factors taken $i$ at a time.

To estimate missing values the Healy and Westmacott procedure or its derivatives may be used, see John and Quenouille (1977). This is an iterative procedure in which estimates of the missing values are adjusted by subtracting the corresponding values of the residuals. The new estimates are then used in the analysis of variance. This process is repeated until convergence. A suitable initial value may be the grand mean. When using this procedure, irdf should be set to the number of missing values plus one to obtain the correct degrees of freedom for the residual sum of squares.

For analysis of covariance the residuals are obtained from an analysis of variance of both the response variable and the covariates. The residuals from the response variable are then regressed on the residuals from the covariates using, say, nag_regress_confid_interval (g02cbc) or nag_regsn_mult_linear (g02dac). The coefficients obtained from the regression can be examined for significance and used to produce an adjusted dependent variable using the original response variable and covariate. An approximate adjusted analysis of variance table can then be produced by using the adjusted dependent variable. In this case irdf should be set to one plus the number of fitted covariates.

For designs such as Latin squares, one or more of the blocking factors has to be removed in a preliminary analysis before the final analysis. This preliminary analysis can be performed using nag_anova_random (g04bbc) or a prior call to nag_anova_factorial (g04cac) if the data is reordered between calls. The residuals from the preliminary analysis are then input to nag_anova_factorial (g04cac). In these cases irdf should be set to the difference between n and the residual degrees of freedom from the preliminary analysis. Care should be taken when using this approach as there is no check on the orthogonality of the two analyses.
If nag_anova_factorial (g04cac) is to be called repeatedly then the memory allocated to table, tmean, e, imean, semean, and bmean must be freed between calls. You are advised to call nag_anova_factorial_free (g04czc) to achieve this. ## 9  Example The data, given by John and Quenouille (1977), is for the yield of turnips for a factorial experiment with two factors, the amount of phosphate with 6 levels and the amount of liming with 3 levels. The design was replicated in 3 blocks. The data is input and the analysis of variance computed. The analysis of variance table and tables of means with their standard errors are printed. ### 9.1  Program Text Program Text (g04cace.c) ### 9.2  Program Data Program Data (g04cace.d) ### 9.3  Program Results Program Results (g04cace.r)
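To illustrate the standard-order layout described in Section 3, here is a small sketch in Python (my addition, independent of the NAG C library itself), using the two factors of the documented example: phosphate with 6 levels and lime with 3 levels.

```python
from itertools import product
from math import comb

# Standard order varies the last factor fastest, so the observations y
# must be laid out in this level order (within each block):
lfac = [6, 3]                      # levels of the two factors
order = list(product(*(range(1, l + 1) for l in lfac)))
print(order[:5])                   # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]

# Number of rows in the ANOVA table, from the Section 8 formula:
# sum over interaction orders i of C(k, i), plus 3 (blocks, residual, total)
k, t = len(lfac), 2
mterm = sum(comb(k, i) for i in range(1, t + 1)) + 3
print(mterm)                       # 6: two main effects, one interaction, plus 3
```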
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 114, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8385528326034546, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/107236/why-does-int-0-infty-fracdx1x-sin-x2-diverge
# Why does $\int_{0}^{\infty}\frac{dx}{1+(x \sin x)^2}$ diverge? I'd like your help with understanding and showing why $\int_{0}^{\infty}\frac{dx}{1+(x \sin x)^2}$ diverges. As I see it, the "problematic spots" where the function might blow up are kept under control by the $1$ added in the denominator. What can I do in order to show that it does diverge? Thanks a lot! - ## 2 Answers Heuristically: Each time $\sin x$ crosses zero, the integrand briefly soars up to $1$ and back down again. The only hope for the integral to converge is if the widths of those peaks go towards $0$ sufficiently fast that the sum of their areas is finite. However, the width of each peak is determined mainly by the slope of $x\sin x$ at the zero crossing -- double the slope means half as wide a peak, and so on. Unfortunately these slopes form an alternating arithmetic progression: $0, -\pi, 2\pi, -3\pi, \ldots$. This means that in the limit, the widths of the peaks in the integrand (and therefore the areas of the peaks) fall off proportionally to $1/n$ -- and that is not fast enough to have a finite sum. I expect that this reasoning can be made rigorous by taking the "width of a peak" to mean, for example, the width of an interval where $\frac{1}{1+(x\sin x)^2}\ge \frac 12$. Then certainly each peak contributes at least half its width to the integral, and it ought to be possible to prove that the width of the peak at $n\pi$ is strictly greater than $a/n$ for some constant $a$, possibly excluding the first few peaks. - 9 To make this a bit more quantitative: for $x \in [n\pi - 1/n, n \pi]$, where $n$ is a positive integer, $|x \sin x| \le \pi$, so $$\int_0^{N\pi} \frac{dx}{1 + (x \sin x)^2} \ge \sum_{n=1}^N \frac{1}{n(1+\pi^2)}$$ – Robert Israel Feb 8 '12 at 22:49 @RobertIsrael: can you please extend your comment? I don't understand it. – Jozef Feb 9 '12 at 8:49 @Jozef, if $x \in \big[n\pi - \frac1n, n\pi\big]$, then $x \le n\pi$ and $\lvert \sin x \rvert \le \lvert x-n\pi \rvert \le \frac1n$. Does that help? – Rahul Narain Feb 9 '12 at 13:57 Just to show that there are several ways to do this: \begin{eqnarray} \int_{k\pi}^{(k+1)\pi} \frac{dx}{1+(x\sin x)^2} & \ge & \int_0^\pi \frac{dx}{1 + ((k+1)\pi)^2(\sin x)^2} \\ & \ge & \int_0^\pi \frac{dx}{1 + ((k+1)\pi)^2 x^2} \\ & = & \frac{\arctan((k+1)\pi^2)}{(k+1)\pi} \ge \frac{1}{(k+1)\pi} \end{eqnarray} The first step uses $x \le (k+1)\pi$ and the periodicity of $\sin^2$, and the third step uses the substitution $y = (k+1)\pi x$. -
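One can also watch Robert Israel's harmonic lower bound numerically: the partial integrals keep growing like a harmonic sum. A quick sketch (my addition, using SciPy; the per-interval quadrature tolerance is a parameter I chose):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (1.0 + (x * np.sin(x))**2)

total = 0.0
for k in range(2000):
    # integrate one period [k*pi, (k+1)*pi] at a time, since the
    # integrand has a narrow peak near each multiple of pi
    piece, _ = quad(f, k * np.pi, (k + 1) * np.pi, limit=200)
    total += piece
    if k + 1 in (10, 100, 1000, 2000):
        H = sum(1.0 / n for n in range(1, k + 2))          # harmonic number
        print(k + 1, round(total, 4), round(H / (1 + np.pi**2), 4))
```

The running total always exceeds the lower bound $H_N/(1+\pi^2)$ and grows without bound (logarithmically in $N$), consistent with divergence.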
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252185821533203, "perplexity_flag": "head"}
http://matthewkahle.wordpress.com/2010/11/05/packing-tetrahedra/?like=1&source=post_flair&_wpnonce=1630920375
# Packing tetrahedra

Last spring I saw a great colloquium talk on packing regular tetrahedra in space by Jeffrey Lagarias. He pointed out that in some sense the problem goes back to Aristotle, who apparently claimed that they tile space. Since Aristotle was thought to be infallible, this was repeated throughout the ages until someone (maybe Minkowski?) noticed that they actually don't.

John Conway and Sal Torquato considered various quantitative questions about packing, tiling, and covering, and in particular asked about the densest packing of tetrahedra in space. They optimized over a very special kind of periodic packing, and in the densest packing they found, the tetrahedra take up about 72% of space. Compare this to the densest packing of spheres in space, which take up about 74%. If Conway and Torquato's example were actually the densest packing of tetrahedra, it would be a counterexample to Ulam's conjecture that the sphere is the worst case scenario for packing. But a series of papers improving the bound followed, and as of early 2010 the record is held by Chen, Engel, and Glotzer with a packing fraction of 85.63%.

I want to advertise two attractive open problems related to this.

(1) Good upper bounds on tetrahedron packing. At the time of the colloquium talk I saw several months ago, it seemed that despite a whole host of papers improving the lower bound on tetrahedron packing, there was no upper bound in the literature. Since then Gravel, Elser, and Kallus posted a paper on the arXiv which gives an upper bound. This is very cool, but the upper bound on density they give is something like $1- 2.6 \times 10^{-25}$, so there is still a lot of room for improvement.

(2) Packing tetrahedra in a sphere. As far as I know, even the following problem is open. Let's make our lives easier by discretizing the problem, and we simply ask how many tetrahedra we can pack in a sphere. Okay, let's make it even easier: the edge length of each of the tetrahedra is the same as the radius of the sphere. Even easier: every one of the tetrahedra has to have one corner at the center of the sphere. Now how many tetrahedra can you pack in the sphere?

It is fairly clear that you can get 20 tetrahedra in the sphere, since the edge length of the icosahedron is just slightly longer than the radius of its circumscribed sphere. By comparing the volume of the regular tetrahedron to the volume of the sphere, we get a trivial upper bound of 35 tetrahedra. But by comparing surface area instead, we get an upper bound of 22 tetrahedra. There is apparently a folklore conjecture that 20 tetrahedra is the right answer, so proving this comes down to ruling out 21 or 22. To rule out 21 seems like a nonlinear optimization problem in some 63-dimensional space. I'd guess that this is within the realm of computation if someone made some clever reductions.

Oleg Musin settled the question of the kissing number in 4-dimensional space in 2003. To rule out a kissing number of 25 is essentially optimizing some function over a 75-dimensional space. This sounds a little bit daunting, but it is apparently much easier than Thomas Hales's proof of the Kepler conjecture. (For a nice survey of this work, see this article by Pfender and Ziegler.)

This entry was posted in expository, puzzles, research and tagged discrete geometry, John Conway, packing problems.

## 3 thoughts on "Packing tetrahedra"

1. Anonymous says: if the edges of the tetrahedra are 2, then each equilateral triangle face has area $\sqrt{3}$. the surface area of the circumsphere is $16\pi$, and you can fit $16\pi/\sqrt{3} = 29.02$. i am wondering how you are getting about 22?

• matthewkahle says: It is true that the area of the equilateral triangle is $\sqrt{3}$, but the surface area is larger once it is projected out onto the sphere. The formula for the solid angle for a regular tetrahedron is given here: http://en.wikipedia.org/wiki/Tetrahedron

2. Anonymous says: thanks for blogging about geometry. i think i have it now. the dihedral angle of a tetrahedron is $\cos^{-1}(1/3)$ and the solid angle projected by one face is $3\cos^{-1}(1/3)-\pi$, so $\frac{4\pi}{3\cos^{-1}(1/3) - \pi}$ is about 22.795.
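The two counting bounds in the post (35 from volume, 22 from surface area) and the commenter's figure of 22.795 are a short computation; a sketch in Python:

```python
import math

# Volume bound: tetrahedra of edge 1 inside a sphere of radius 1
v_tet = math.sqrt(2) / 12          # volume of a regular tetrahedron with edge 1
v_sphere = 4 * math.pi / 3
print(v_sphere / v_tet)            # ~35.54, so at most 35 tetrahedra

# Surface-area bound: each tetrahedron with a corner at the center
# subtends a solid angle of 3*arccos(1/3) - pi steradians there
omega = 3 * math.acos(1 / 3) - math.pi
print(4 * math.pi / omega)         # ~22.795, so at most 22 tetrahedra
```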
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379202723503113, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/1451/elgamal-multiplicative-cyclic-group-and-key-generation?answertab=votes
# ElGamal: Multiplicative cyclic group and key generation Alice generates an efficient description of a multiplicative cyclic group G, of order q, with generator g. How is this done? What are some of the properties here? - ## 3 Answers I'm not sure what level of explanation you are looking for, but from the very basics, subgroups work like this. Consider concretely the example of working $\mod{p}$ where $p=11$. Next we have to find a generator $g$. Initially, any number in $\{0,\ldots,p-1\}$ (or $\mathbb{Z}_p$ for short) is a candidate. Below is a chart showing each $g$ value as a row, each $a$ value as a column, and the expression $g^a \mod{11}$ evaluated for each $g$ and $a$. $\begin{array}{c|ccccccccccc} g \backslash a & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 1 & 2 & 4 & 8 & 5 & 10 & 9 & 7 & 3 & 6 & 1 \\ 3 & 1 & 3 & 9 & 5 & 4 & 1 & 3 & 9 & 5 & 4 & 1 \\ 4 & 1 & 4 & 5 & 9 & 3 & 1 & 4 & 5 & 9 & 3 & 1 \\ 5 & 1 & 5 & 3 & 4 & 9 & 1 & 5 & 3 & 4 & 9 & 1 \\ 6 & 1 & 6 & 3 & 7 & 9 & 10 & 5 & 8 & 4 & 2 & 1 \\ 7 & 1 & 7 & 5 & 2 & 3 & 10 & 4 & 6 & 9 & 8 & 1 \\ 8 & 1 & 8 & 9 & 6 & 4 & 10 & 3 & 2 & 5 & 7 & 1 \\ 9 & 1 & 9 & 4 & 3 & 5 & 1 & 9 & 4 & 3 & 5 & 1 \\ 10 & 1 & 10 & 1 & 10 & 1 & 10 & 1 & 10 & 1 & 10 & 1 \end{array}$ Property 1: for each row (except the first), the numbers eventually reach 1 and then repeat. The first generator, 0, is degenerate. We usually exclude it from consideration. $\mathbb{Z}_p$ without 0 is denoted $\mathbb{Z}^*_p$. The next generator, 1, only generates the number 1. The next generator, 2, generates {1,2,4,8,5,10,9,7,3,6}, which, if you sort it, turns out to be all 10 elements of $\mathbb{Z}_p^*$. The next generator, 3, generates {1,3,9,5,4}. Generators 6, 7, 8 generate the same group as 2 (just in a different order). Generators 4, 5, 9 generate the same group as 3. Generator 10 generates {1,10}. There are a lot of properties contained in this chart, but the relevant one is to consider the order (number of elements) of each possible group. We saw generators with 1, 2, 5 and 10 elements. These numbers are not coincidental. Property 2: they are the factors of $p-1$, which is 10 when $p=11$. This holds true for any $p$ that is prime. Each of these smaller groups is called a "subgroup" of $\mathbb{Z}^*_p$. Take the group generated by 3: {1,3,9,5,4}. If you take any element of this group and multiply it by any other element mod 11, the result will always be one of the elements of this group. This means it is closed under multiplication, i.e., a "multiplicative subgroup." Property 3: If $p$ is prime, each subgroup will be multiplicative. For the security of Elgamal, we essentially want both $p$ and the order of the subgroup $q$ to be large primes. This means $q$ should divide $p-1$. In the example $p=11$ and $q=5$. It is typical to set $p=2q+1$ (that is, $(p-1)=2q$). For things other than Elgamal (like DSA), we might use $p=\alpha q+1$ for some $\alpha$ larger than 2 (e.g., so that $p$ will be 1024 bits and $q$ will be 160 bits). For $p=2q+1$, there will be subgroups of order $p-1$, $q$, 2 and 1 (the factors of $p-1$). Most generators will either have order $p-1$ (generating $\mathbb{Z}^*_p$) or $q$ (generating a group we call $\mathbb{G}_q$). How do we find $\mathbb{G}_q$? 1. Find a $p$ that will have $\mathbb{G}_q$: we choose a random prime $q$, compute $p=2q+1$, repeat until $p$ is prime. 2. Find a $g$ that will generate $\mathbb{G}_q$ and not $\mathbb{Z}^*_p$ (or any other subgroup).
Since groups end with 1 and then repeat, we test if $g^q \mod{p}$ is equal to 1. If it is, we have found a generator of $\mathbb{G}_q$ (the only other possibility is the trivial element of order 1, so we also check that $g \neq 1$). 3. The description of the group is $\langle g,q,p \rangle$ (you could compute $q$ from $p$ to save space in the description). One final thing: look at the column with $a=2$. These are the quadratic residues of $\mathbb{Z}_p^*$. Property 4: When $p=2q+1$, they are the exact same group as $\mathbb{G}_q$. This means, by using $\mathbb{G}_q$, you don't have to worry about an adversary testing whether certain numbers are quadratic residues or not (see @Jalaj's answer). - Well, to give a "description" of a multiplicative cyclic group, one need only send the modulus. Since everyone knows how the group is used, that's all you really need. How this is done in practice is described on page 164 of the Handbook of Applied Cryptography. Algorithm 4.84 specifically. - 1 ElGamal-like schemes can also be used with groups other than the standard "Integers modulo prime" group, where some more information might be necessary. – Paŭlo Ebermann♦ Dec 13 '11 at 17:44 One of the properties that you need from the group is that it should be of order $q$, where $q$ is a safe prime (of the form $2p+1$, where $p$ is also prime). The reason behind this is that, if the group is improperly chosen, one can break the DDH assumption by using the Legendre symbol. More details are below. For the semantic security of an ElGamal encryption scheme, we need the DDH assumption to be true. If $q$ is improperly chosen, we can have the following attack:$\newcommand\lsb{\operatorname{lsb}}\newcommand\Dlog{\operatorname{Dlog}}$ Given $(\alpha, \beta, \gamma)$, the attacker needs to know whether these are of the form $(g^x, g^y, g^{x·y})$. If $$\lsb(\Dlog(\alpha)) × \lsb(\Dlog(\beta)) = \lsb(\Dlog(\gamma))\mod 2,$$ then return $1$, else return $0$. Now finding the $\lsb$ is simple arithmetic using the Legendre symbol, if $q$ is not a safe prime. - I edited your answer to format it a bit more nicely, and add some additional information. Please read again to make sure that I didn't add things you would not have written (and feel free to revert or edit again). – Paŭlo Ebermann♦ Dec 13 '11 at 17:55 Thanks! I tried using the basic latex method to write in the math mode, but it didn't work. Now I know :) – Jalaj Dec 13 '11 at 18:10
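The three-step recipe in the first answer is short enough to script. Here is a toy sketch in Python (my addition; the parameter size is far too small for real use, and SymPy's isprime is assumed for primality testing):

```python
import random
from sympy import isprime

def find_group(bits=32):
    # Step 1: pick a random prime q until p = 2q + 1 is also prime
    while True:
        q = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # odd, full bit length
        if isprime(q) and isprime(2 * q + 1):
            p = 2 * q + 1
            break
    # Step 2: find a g of order q. Squaring a random element is one
    # standard shortcut, since the squares are exactly G_q (Property 4).
    g = 1
    while g == 1:
        g = pow(random.randrange(2, p - 1), 2, p)
    assert pow(g, q, p) == 1        # the order test from step 2 above
    # Step 3: the description of the group
    return g, q, p

print(find_group())
```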
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9190933704376221, "perplexity_flag": "head"}
http://qchu.wordpress.com/tag/orthogonal-polynomials/
# Annoying Precision

## Moments, Hankel determinants, orthogonal polynomials, Motzkin paths, and continued fractions

Posted in graph theory, algebraic combinatorics, probability, tagged generating functions, walks on graphs, Catalan numbers, orthogonal polynomials, continued fractions, moments, determinants on September 18, 2012

Previously we described all finite-dimensional random algebras with faithful states. In this post we will describe states on the infinite-dimensional $^{\dagger}$-algebra $\mathbb{C}[x]$. Along the way we will run into and connect some beautiful and classical mathematical objects. A special case of part of the following discussion can be found in an old post on the Catalan numbers. (more…)

## The Catalan numbers, regular languages, and orthogonal polynomials

Posted in combinatorics, graph theory, tagged Catalan numbers, Chebyshev polynomials, continued fractions, determinants, generating functions, orthogonal polynomials, regular languages on June 7, 2009

I've been inspired by The Unapologetic Mathematician (and his pages and pages of archives!) to post more often, at least for the remainder of the summer. So here is a circle of ideas I've been playing with for some time. Let $C(x) = \sum_{n \ge 0} C_n x^n$ be the ordinary generating function for the ordered rooted trees on $n+1$ vertices (essentially we ignore the root as a vertex). This is one of the familiar definitions of the Catalan numbers. From a species perspective, ordered rooted trees are defined by the functional equation

$\displaystyle C(x) = \frac{1}{1 - xC(x)}$.

The generating function $\frac{1}{1 - x} = 1 + x + x^2 + ...$ describes the species $\textsc{Seq}$ of sequences. So what this definition means is that, after tossing out the root, an ordered rooted tree is equivalent to a sequence of ordered rooted trees (counting their roots) in the obvious way; the roots of these trees are precisely the neighbors of the original root. Multiplying out gets us a quadratic equation we can use to find the usual closed form of $C(x)$, but we can instead recursively apply the above to obtain the beautiful continued fraction

$\displaystyle C(x) = \frac{1}{1 - \frac{x}{1 - \frac{x}{1 - ...}}}$.

Today's discussion will center around this identity and some of its consequences. (more…)
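The functional equation is easy to watch converge in code: iterating $C \leftarrow 1/(1-xC)$ on truncated power series fixes one more coefficient per pass and recovers the Catalan numbers. A small sketch in Python (my addition):

```python
N = 10
C = [1] + [0] * (N - 1)                 # start from C(x) = 1
for _ in range(N):                      # each pass fixes one more coefficient
    A = [1] + [-c for c in C[:N - 1]]   # the series 1 - x*C(x), truncated mod x^N
    inv = [1] + [0] * (N - 1)           # invert A (constant term 1) term by term:
    for k in range(1, N):               # B[k] = -sum_{j=1}^{k} A[j] * B[k-j]
        inv[k] = -sum(A[j] * inv[k - j] for j in range(1, k + 1))
    C = inv
print(C)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```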
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8932240605354309, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/80754/show-that-the-value-of-a-definite-integral-is-unity/80761
# Show that the value of a definite integral is unity $$\int_2^4\frac{\sqrt{\log(9-x)}}{\sqrt{\log(9-x)}+\sqrt{\log(3+x)}}dx=1$$ - 1 In general: $\int_2^4 \frac{f(x)}{f(x)+f(6-x)}\,dx=1$. – pharmine Nov 10 '11 at 5:22 2 why is this the case? – vivaelche05 Nov 10 '11 at 5:26 I posted an answer below. – pharmine Nov 10 '11 at 5:50 ## 2 Answers We show that $I=\int_2^4 \frac{f(x)}{f(x)+f(6-x)}\,dx$ equals $1$, where $f(x)=\sqrt{\log(9-x)}$. Proof. By making a substitution $y=6-x$, we get $$I=\int_4^2 -\frac{f(6-y)}{f(y)+f(6-y)}\,dy=\int_2^4 \frac{f(6-y)}{f(y)+f(6-y)}\,dy.$$ Therefore $$\begin{align*}2I&=\int_2^4 \frac{f(x)}{f(x)+f(6-x)}dx+\int_2^4 \frac{f(6-y)}{f(y)+f(6-y)}dy\\ &=\int_2^4 \frac{f(x)}{f(x)+f(6-x)}dx+\int_2^4 \frac{f(6-x)}{f(x)+f(6-x)}dx=\int_2^4 1\,dx =2.\end{align*}$$ Edit. In the last part, $y$ is a dummy variable and can be changed to $x$ or any other variable you like. Edit2. If you don't like the same $x$ being used, you could use another variable, say $s$, so that with two substitutions $$\begin{align*}2I&=\int_2^4 \frac{f(x)}{f(x)+f(6-x)}dx+\int_2^4 \frac{f(6-y)}{f(y)+f(6-y)}dy\\ &=\int_2^4 \frac{f(s)}{f(s)+f(6-s)}ds+\int_2^4 \frac{f(6-s)}{f(s)+f(6-s)}ds=\int_2^4 1\,ds =2.\end{align*}$$ - how did f(6-y) become f(6-x) in the numerator of the second integrand on the last line? – vivaelche05 Nov 10 '11 at 5:50 Because $y$ is a dummy variable; please see my edited comment. – pharmine Nov 10 '11 at 5:53 Wow, thank you. can I ask, conceptually, why this is true for all integrable f(x)'s? – vivaelche05 Nov 10 '11 at 15:18 2 As Dinesh points out, $g(x)=\frac{f(x)}{f(x)+f(6-x)}$ satisfies $g(x)+g(6-x)=1$ (more generally, $g(x)+g(a-x)=b$ where $a$ and $b$ are constants); this is the condition where you can use this integration trick. – pharmine Nov 10 '11 at 15:33 $\int^a_b f(x)\,dx = \int^a_b f(a+b-x)\,dx$. We can prove this by changing the dummy variable $x$ to $a+b-x$: the integrand becomes $-f(a+b-x)\,dx$, and the limits change to $a+b-b$ and $a+b-a$, so the integral becomes $-\int^b_a f(a+b-x)\,dx=\int^a_b f(a+b-x)\,dx$. The integrand given in the question, call it $g(x)$, satisfies the property $g(x)+g(6-x)=1$. Let the integral be $I$; then $2I= \int^a_b g(x)\,dx+ \int^a_b g(a+b-x)\,dx=\int^a_b \big(g(x)+g(a+b-x)\big)\,dx=\int^a_b\,dx=a-b$. In your case $a-b=2$, hence $2I=2$. So $I=1$. -
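As a quick numerical sanity check of the result (a sketch with SciPy; this is my addition, not from the original thread):

```python
import numpy as np
from scipy.integrate import quad

# Both logs are positive on [2, 4], so the square roots are real
f = lambda x: np.sqrt(np.log(9 - x))
integrand = lambda x: f(x) / (f(x) + np.sqrt(np.log(3 + x)))

value, abserr = quad(integrand, 2, 4)
print(value)   # 1.0 up to quadrature error
```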
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8750536441802979, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/04/06/generalized-eigenvectors-of-an-eigenpair/
# The Unapologetic Mathematician

## Generalized Eigenvectors of an Eigenpair

Just as we saw when dealing with eigenvalues, eigenvectors alone won't cut it. We want to consider the kernel not just of one transformation, but of its powers. Specifically, we will say that $v$ is a generalized eigenvector of the eigenpair $(\tau,\delta)$ if for some power $n$ we have

$\displaystyle(T^2-\tau T+\delta I_V)^nv=0$

The same argument as before tells us that the kernel will stabilize by the time we take $d=\dim(V)$ powers of an operator, so we define the generalized eigenspace of an eigenpair $(\tau,\delta)$ to be

$\displaystyle\mathrm{Ker}\left((T^2-\tau T+\delta I_V)^d\right)$

Let's look at these subspaces a little more closely, along with the older ones of the form $\mathrm{Ker}\left((T-\lambda I_V)^d\right)$, just to make sure they're as well-behaved as our earlier generalized eigenspaces are.

First, let $V$ be one-dimensional, so $T$ must be multiplication by $\lambda_0$. Then the kernel of $T-\lambda I_V$ is all of $V$ if $\lambda=\lambda_0$, and is trivial otherwise. On the other hand, what happens with an eigenpair $(\tau,\delta)$? Well, one application of the operator gives

$\displaystyle(T^2-\tau T+\delta I_V)v=(\lambda_0^2-\tau\lambda_0+\delta)v$

for any nonzero $v$. But this will always be itself nonzero, since we're assuming that the polynomial $X^2-\tau X+\delta$ has no roots. Thus the generalized eigenspace of $(\tau,\delta)$ will be trivial.

Next, if $V$ is two-dimensional, either $T$ has an eigenvalue or it doesn't. If it does, then this gives a one-dimensional invariant subspace. The argument above shows that the generalized eigenspace of any eigenpair $(\tau,\delta)$ is again trivial. But if $T$ has no eigenvalues, then the generalized eigenspace of any eigenvalue $\lambda$ is trivial. On the other hand we've seen that the kernel of $T^2-\tau T+\delta I_V$ is either the whole of $V$ or nothing, and the former case happens exactly when $\tau$ is the trace of $T$ and $\delta$ is its determinant.

Now if $V$ is a real vector space of any finite dimension $d$ we know we can find an almost upper-triangular form. This form is highly non-unique, but there are some patterns we can exploit as we move forward.

Posted by John Armstrong | Algebra, Linear Algebra
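A small numerical illustration of the two-dimensional case (my addition, not from the post): for a real 2×2 matrix with no real eigenvalues, the kernel of $T^2-\tau T+\delta I_V$ is all of $V$ precisely when $\tau$ is the trace and $\delta$ the determinant, which is the Cayley-Hamilton theorem in disguise.

```import numpy as np

# A 2x2 real matrix with no real eigenvalues (a scaled rotation).
T = np.array([[0.0, -2.0],
              [1.0,  0.0]])
tau   = np.trace(T)          # tau = 0
delta = np.linalg.det(T)     # delta = 2

# T^2 - tau*T + delta*I vanishes identically (Cayley-Hamilton), so the
# generalized eigenspace of the eigenpair (tau, delta) is all of R^2.
P = T @ T - tau * T + delta * np.eye(2)
print(np.allclose(P, 0))  # True
```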
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8859148025512695, "perplexity_flag": "head"}
http://toomai.wordpress.com/category/computers/
# MATH with my KIDS

## Radix sort with index cards

I showed my kids the radix sort.

Posted by toomai in computers, numbers and tagged computer science, math, math education, mathematics, radix sort, sorting, sorting algorithm

## Math Camp I: Recursion and such

The past two years I've taught at a summer math camp for high school students. In 2010 I assisted with a class on chaos and fractals. This year I assisted with a computers class. I'm planning to do a couple of posts about the class. This is the first of those posts.

The computer class that I helped teach focused on hardware, but also included some software topics. My main role was teaching programming. I should point out that I am not really a programmer. I'm a mathematician who uses some programming in his work. I've never had a programming class. Nevertheless, I hope that I was able to get some of the basics across to the students.

I first introduced programming to the students by describing a simple language for manipulating a cube, the goal being to get the cube in a prescribed orientation. More on that in a later post.

We did our programming in two languages: Python (which you can try in your browser at: Try Python) and Alice. Both are freely available. In Python we did some simple procedural programming (I had them code up a function that computes the factorial of a number and another that runs the Collatz algorithm). One of the students was able to produce a factorial algorithm very quickly. Here is his Python code:

```def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
```

It surprised me that he used recursion, though I think he has had some programming experience in the past. Most of the other students had had no prior programming experience and were having a hard time producing a factorial function. This also surprised me. This was my first time teaching programming, so I had little intuition for where the students might get hung up.

Alice is a drag-and-drop object-oriented language for manipulating characters in a 3-D virtual world. We mostly let the students explore Alice on their own as their interests dictated. One student wrote a very simple first-person shooter game.

In any case it was hard to get any of the kids very interested in programming. If I go back next year and assist with this same class, then I think I would like to have some compelling problems or mini-projects that would grab the kids' attention and require them to do some programming. Suggestions anyone?

Posted by toomai in computers, teaching

## Building a Computer 1111: Video of our Signal-Split Prototype

As I've said previously we are currently working on developing a signal-split device: that is, we want something that takes one input and creates a copy of it. Or in other words one marble in, two marbles out. Below is the design we settled on for our first physical prototype of the signal split toggle. We cut one out and mounted it. Here is video of it at work.

It works pretty well! It's not perfect; we'll have to keep working on it. The tough part at this point is that we don't know whether faults in our prototypes are due to faulty craftsmanship or faulty design. We will have to make several prototypes and carefully observe them in operation to figure this out. My lack of any schooling or experience in engineering and woodworking is becoming a hindrance at this point.
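For reference, a minimal sketch of the Collatz function mentioned in the Math Camp post above (my code, not a student's; the function name and return format are my own choices):

```def collatz(n):
    """Run the Collatz iteration from n, returning the sequence of values."""
    seq = [n]
    while n != 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```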
Posted by toomai in computers

## Building a Computer 1110: Video of the 4-bit Adder

As promised in a previous post, here is video of our 4-bit adder prototype. We used Wandel's plans for it, but it is missing several niceties that Wandel's version has: a top tray that holds marbles and releases them all at once, a bottom tray to catch the marbles, and a reset mechanism.

Below are a couple of pictures of detail on the adder that I invented myself. When I first put the adder together, the toggles were not working well at all. After a good deal of thought I came up with the idea of putting a couple of paper washers behind each toggle. This solved the problem.

## Building a Computer 1101: Signal Split

I had an idea for a marble implementation of the signal split. It is very closely related to Wandel's toggle. I ran the idea by my nine-year-old, then showed him the basics of Inkscape and let him do the drawing. Here is what he came up with. I think this is very close to what we will end up using, but of course we haven't actually built it yet…

The funny thing about the signal split is that it's simple electronically: just solder a couple of wires together and you're good. This video that I found on youtube shows a simple marble implementation of a signal split…

…but we need something that is self-resetting. I think what we've designed here fits the bill. Building and testing to come…

Posted by toomai in computers

## Building a Computer 1100: Multiplication and Subtraction with an Adder

My nine-year-old has been playing with our prototype four-bit adder quite a lot. After getting bored with adding, he figured out how to do subtraction with it. Of course we talked about how to do subtraction earlier. What he ended up doing is taking a pair of numbers (x,y), finding the two's complement representation of -y (on his own, not using the machine), and then feeding x and -y through the machine.

After playing with that for a while, he told me that he had also figured out how to multiply with it. To multiply x and y, simply feed x through the machine y times, or feed y through the machine x times. Despite Keith Devlin's insistence that multiplication is not just repeated addition, this works (though to be fair to Devlin, he frankly admits that it works). My son keeps asking about making a multiplier. I've been trying to explain to him that we will be implementing multiplication in software. I'm not sure that he gets it yet, but once we get that far his experiments should help him see how to do it.

Posted by toomai in computers
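A software model of the ripple-carry addition that the marble machine in the posts above performs (an illustrative sketch, not Wandel's design):

```def full_adder(a, b, carry_in):
    """One column of binary addition: returns (sum_bit, carry_out)."""
    total = a + b + carry_in
    return total % 2, total // 2

def add_4bit(x, y):
    """Add two 4-bit numbers the way the marble adder does: bit by bit,
    least significant first, with carries rippling upward. Overflow past
    the fourth bit is lost, just like marbles rolling off the machine."""
    carry = 0
    result = 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_4bit(0b0111, 0b0011))  # 10 = 0b1010
```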
## Building a Computer 1010: Counting and Human versus Machine Error

I decided that we should use the prototype toggle that we built to actually do a computation. I realized that with some manual interaction we could do a marble count. Here is how it works: Start with a pile of marbles and feed them all through the machine. Collect the marbles that come out of the two toggle outputs in two bowls (which I have labeled bowl A and bowl B). Once all of the marbles have been fed through, the machine will be in one of its two states: either the rocker will be to the left with no marble in the catch, or the rocker will be to the right with a marble in the catch. If you watch the machine operate you realize that when marbles are output they always come in twos. That is, every time a marble goes into bowl A one also goes into bowl B and vice versa. So you can see that there will be a marble in the catch if and only if the number of marbles fed through the machine is odd.

Furthermore, a binary number ends in a one if and only if it is odd. In other words, once all of the marbles have been fed through the machine you can read the right-most (or least-significant) bit off from the machine: a one if there is a marble in the catch, and a zero if there isn't. Once all of the marbles have been fed through and the bit recorded, you should reset the machine (move the toggle to the left, releasing the marble that's caught if there is one). Now take out the contents of bowl A; this becomes your new pile. Repeat the process, feeding all of your new pile through the machine, record the next bit (I'll leave it to you to convince yourself that this really does tell you the next bit) and repeat the whole process until all of the marbles are in bowl B. At this time you should have the complete binary representation of your total number of marbles.

We carried this program out for our set of marbles four times…and we got four different answers…. OK, I never claimed that our prototype machine works perfectly. Actually there were two things that went wrong. Sometimes two marbles would sneak through the same side of the machine before the toggle could flip to the other side. We called this a machine error. Sometimes we would do silly things like fail to empty bowl A before feeding its contents through the machine. Thus, some of the contents of bowl A from round n would get mixed with the contents of bowl A from round n+1. This (pretty clearly) causes problems.

The results we got for our number of marbles were 96, 118, 119, and 136. Then we counted them by hand and got 118. One other thing I should mention is that we watched the machine as we fed the marbles through, and sometimes there were machine errors that we noticed and corrected in real time. In any case I think that 118 is the correct number.

This experiment has got me thinking a bit about operator error versus machine error. Often one hears operator errors called "human errors", but it occurs to me that even the machine errors are due to humans. That is, the machine errors that occurred in our little exercise were due (almost certainly) to our poor craftsmanship and possibly due to bad engineering. So machine versus operator errors really come down, not to machines versus humans, but to the people who designed and built the machines on one hand and those who are operating them on the other.

## Building a Computer 1001: Video of our One-Bit Adder Prototype

As promised in a previous post, here is video of what we have working so far. Based on Wandel's plans, this mechanism constitutes a full adder. For my less than stellar craftsmanship it works remarkably well.

Posted by toomai in computers
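A simulation of the counting procedure from the 1010 post above (an illustrative sketch; the bowl-A/bowl-B mechanics collapse to halving the pile, rounded down):

```def marble_count(n):
    """Read off the binary digits of n the way the toggle machine does:
    each pass, marbles leave in pairs for bowls A and B; a marble stuck
    in the catch means the pile size was odd (a 1 bit). Bowl A, holding
    half the pile (rounded down), becomes the next pass's input."""
    bits = []
    while n > 0:
        bits.append(n % 2)   # 1 iff a marble is left in the catch
        n = n // 2           # contents of bowl A
    return "".join(str(b) for b in reversed(bits)) or "0"

print(marble_count(118))  # '1110110'
```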
## Building a Computer 1000: Subtraction

First off, here is a photo of what we have so far for our four-bit adder: Nothing is glued down and we still have holes to cut and more pieces to cut out.

Last night my nine-year-old was asking more about subtraction. He had made a chart of four-bit binary representations of negative numbers. It started like this:

```1    2    3    4    5
0001 0010 0011 0100 0101
------------------------ etc
1111 1110 1101 1100 1011
-1   -2   -3   -4   -5
```

With some prodding he found the pattern that -2 is the same as 1, only with 1s replaced with 0s and 0s replaced with 1s. Likewise for -3 and 2, -4 and 3, -5 and 4, etc. Another way to say this is that you can get -3 from 2 by flipping all of the bits, and similarly for the other pairs.

Next I had him try the following process:

1. Take two positive numbers.
2. Write them out as four-bit binary numerals.
3. Flip all of the bits of the larger one.
4. Add the original smaller number and the bit-flipped number together.
5. Flip all of the bits of the result.

He did the following example:

```1: 0001
2: 0010
bit flipped 2: 1101

add:    1     <-(carry)
       0001
      +1101
       ----
       1110

flip bits of result: 0001
```

The result is the binary representation of one, which is the difference 2-1. Here's another example:

```8: 1000
3: 0011
bit flipped 8: 0111

add:  111     <-(carries)
       0111
      +0011
       ----
       1010

flip bits of result: 0101 = 5
```

So we have to figure out how to flip bits with marbles.

Posted by toomai in computers

## Building a Computer 111: Negative Numbers

We have started building a 4-bit adder using Wandel's plans. Photos to come! It's amazing what you can do with a coping saw and some pine planks. It won't look as nice as Wandel's but I hope that it works. We have even been cutting the holes with the coping saw since I don't have the proper drill bit. So far they have come out rounder than I expected. Straight lines are hard to do with a coping saw, but…meh, they should be straight enough.

Anyway, we got some notebooks to write down our ideas and plans in. Tonight the nine-year-old set to work on how we can represent negative numbers in our binary machine. (I clued him in that this seemed to be the way to go to get a subtractor.) Since we are building a 4-bit machine currently I had him work with 4-bit numerals. This means that any carries into the fifth place just roll off and are lost. I told him to try to find a numeral that when added to 0001 (using the algorithm that we know) gives 0000. Whatever that numeral is, it should represent -1. He soon found that 1111 works. Then he set to work adding 1111 iteratively to get -2, -3, etc.

The numerals he is working out for the negatives are what computer scientists call two's complement. Computer folks will probably tell you that the name two's complement derives from the fact that $-n$ is represented by $2^N-n$, where $N$ is the number of bits. This seems like a silly reason to call it two's complement to me (two-to-the-N's complement, OK). Mathematicians will probably tell you that the name is a joke, deriving from another negative-number convention called one's complement. What's really happening (or what a mathematician like me would say is really happening) is that, since working with $N$-bit numbers means you are working modulo $2^N$, two's complement is completely natural; that is, it plays very nicely with arithmetic operations such as + and -.

Anyway, two's complement not only has nice properties with respect to arithmetic operations, it has a pretty clean description in terms of flipping bits. I'll get to that later as I'm planning on having my kids discover it soon…which will lead us to want to have some other logic gates…

While the nine-year-old was at this, the seven-year-old was busy diagramming the marble adder in her notebook, while the four-year-old drew a picture of a computer in hers. Also I've begun to read my book. Thoughts on it to come.

Posted by toomai in computers
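The bit-flip subtraction recipe from the 1000 post, as a short program (my sketch; `width=4` matches the machine, and carries past the top bit are masked off as on the marble adder):

```def flip_bits(x, width=4):
    """One's complement within a fixed width: swap 0s and 1s."""
    return x ^ ((1 << width) - 1)

def subtract(larger, smaller, width=4):
    """The post's recipe: flip the bits of the larger number, add the
    smaller, flip the bits of the result."""
    mask = (1 << width) - 1
    total = (smaller + flip_bits(larger, width)) & mask
    return flip_bits(total, width)

print(subtract(2, 1))  # 1
print(subtract(8, 3))  # 5
```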
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9652559757232666, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/80339/list
No. This is because all hypergeometrics are holonomic, and holonomic functions can only have a finite number of singularities, which themselves can only be of certain types. If the logarithm of all hypergeometrics could be so expressed, then you could have a holonomic function with a $\ln \ln (x)$ singularity, which is not possible.

I'll add some references to this later.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354398250579834, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/145023-analysis-help.html
# Thread:

1. ## analysis help

I have an analysis exam in a few weeks but I still suck at it; unless I've learnt exactly how to answer a certain type of question (i.e. prove a function converges/prove a set has a max) I have absolutely no clue how to even start it. And the book I have has questions but no answers; not sure how that's supposed to help anyone. Anyway, my question is: does anyone know any site or something which gives tips on how to prove things, or is it just purely practice?

2. I'd say give a specific example of the kind of questions you're finding difficult. Post it up here and someone may be able to explain it to you!

3. Originally Posted by renlok
I have an analysis exam in a few weeks but I still suck at it; unless I've learnt exactly how to answer a certain type of question (i.e. prove a function converges/prove a set has a max) I have absolutely no clue how to even start it. And the book I have has questions but no answers; not sure how that's supposed to help anyone. Anyway, my question is: does anyone know any site or something which gives tips on how to prove things, or is it just purely practice?

Functions don't converge; limits, etc. converge. A set having a max is presumably finding the supremum of a bounded non-empty subset of $\mathbb{R}$. Right?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679130911827087, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/4862/what-do-the-options-of-smoothkerneldistribution-do/4871
# What do the options of SmoothKernelDistribution do?

The function `SmoothKernelDistribution` has three options that are not described in too much detail in Mathematica's help window.

InterpolationPoints: What is interpolated by Mathematica in the function `SmoothKernelDistribution`?

MaxMixtureKernels: As far as my limited knowledge of kernel estimates goes, there will be as many kernels as grid points on my discretized domain. If my data lives on $x$, and I am attempting to approximate the probability density function $f(x)$, then I may choose to do so at $N$ evenly-spaced discrete points along $x$, e.g. $x_i, i=1,2,...,N$.

MaxRecursion: What recursion does this option pertain to?

## 2 Answers

You can click on each of these options in the help for further explanation.

MaxMixtureKernels: the maximum number of kernels to generate the estimate from. The example in the help file makes this quite clear: as you increase the number of kernels (tent poles, if you will) from 10 to 15, 25, and 100, the estimate becomes smoother, at the expense of complexity (more parameters to estimate).

InterpolationPoints: how many points the interpolation function (kernel density estimate) is to be evaluated at. 10 points on the left, 100 on the right. First you fix the number of kernels (consider the previous diagram), then you select where to sample the interpolant.

MaxRecursion: an option for the `Plot` function to achieve better results in places where more samples are needed. Again, the help file provides some illuminating illustrations: here the levels of recursion run from 0, 1, 2, to 4.

- Thank you for your explanations! What I am still a little puzzled about is the fact that Mathematica does all this and gives you the opportunity to change some numbers, yet there are still things that you as a user might want to have control over. MaxMixtureKernels: Each (symmetric) kernel is going to be centred on some value. While this option allows you to change the total number of kernels, we still don't know where these are actually placed (I know the help function says uniformly spaced, but what exactly does that mean for your specific set of data?). – Name Apr 29 '12 at 21:52
- InterpolationPoints: This option seems to pertain to how often each individual kernel is evaluated, i.e. each kernel is interpolated (somehow ... possibly with splines?) and then the "y-value" where two neighboring kernels overlap is somehow averaged? MaxRecursion: I don't understand why we need to further smoothen the resulting interpolation function ... I mean, we already specified everything by saying where we want kernels to be centered and how finely we want to interpolate those individually? – Name Apr 29 '12 at 21:56
- Forget what I said about overlapping kernels ... of course that shouldn't happen. However, if I choose a simple kernel such as "Rectangular", why would I want those to be interpolated individually with some polynomial? – Name Apr 29 '12 at 22:11

@Emre described the options quite well. Worth mentioning here is `KernelMixtureDistribution`, which is a parametric equivalent to `SmoothKernelDistribution`. The goal of `SmoothKernelDistribution` is to interpolate the PDF (using linear interpolation) given by `KernelMixtureDistribution`. In my work I use `KernelMixtureDistribution` when speed is less of an issue than quality, since it is always a more accurate representation of a kernel density estimator and is capable of handling symbolic inputs.
I use `SmoothKernelDistribution` when I want a quick, numeric and usually visual approximation to some density. It is also worth pointing out that the setting `MaxMixtureKernels -> All` guarantees that a kernel will be placed at each data point rather than on a uniform grid. This is a good setting to use whenever the number of data points is not astronomically large.

- Thanks very much for pointing out this other function and the thing about placing a kernel at each data point. Using the latter option clarified things for me a little and I'll make sure to check out what this other function does. – Name Apr 29 '12 at 22:03
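For readers without Mathematica at hand, a rough Python analogue of the InterpolationPoints idea (an editor's sketch using SciPy's `gaussian_kde`; unlike `SmoothKernelDistribution`, it always places one kernel per data point and has no `MaxMixtureKernels` counterpart):

```import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(size=500)

kde = gaussian_kde(data)  # one Gaussian kernel per data point

# Coarse vs. fine evaluation grids, analogous to InterpolationPoints:
# the underlying estimate is unchanged, only how densely it is sampled.
coarse = np.linspace(-4, 4, 10)
fine   = np.linspace(-4, 4, 100)
print(kde(coarse).round(3))
print(kde(fine).max().round(3))
```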
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255244731903076, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/08/23/stone-spaces/
# The Unapologetic Mathematician ## Stone Spaces The Stone space functor we’ve been working with sends Boolean algebras to topological spaces. Specifically, it sends them to compact Hausdorff spaces. There’s another functor floating around, of course, though it might not be the one you expect. The clue is in our extended result. Given a topological space $X$ we define $S(X)$ to be the Boolean algebra of all clopen subsets. This functor is contravariant — given a continuous map $f:X\to Y$, we get a homomorphism of Boolean algebras $S(f)$ sending the clopen set $Z\subseteq Y$ to its preimage $f^{-1}(Z)\subseteq X$. It’s straightforward to see that this preimage is clopen. Another surprise is that this is known as the “Stone functor”, not to be confused with the Stone space functor $S(\mathcal{B})$. So what happens when we put these two functors together? If we start with a Boolean algebra $\mathcal{B}$ and build its Stone space $S(\mathcal{B})$, then the Stone functor applied to this space gives us a Boolean algebra $S(S(\mathcal{B}))$. This is, by construction, isomorphic to $\mathcal{B}$ itself. Thus the category $\mathbf{Bool}$ is contravariantly equivalent to some subcategory $\mathbf{Stone}$ of $\mathbf{CHaus}$. But which compact Hausdorff spaces arise as the Stone spaces of Boolean algebras? Look at the other composite; starting with a topological space $X$, we find the Boolean algebra $S(X)$ of its clopen subsets, and then the Stone space $S(S(X))$ of this Boolean algebra. We also get a function $X\to S(S(X))$. For each point $x\in X$ we define the Boolean algebra homomorphism $\lambda_x:S(X)\to\mathcal{B}_0$ that sends a clopen set $C\subseteq X$ to $1$ if and only if $x\in C$. We can see that this is a continuous map by checking that the preimage of any basic set is open. Indeed, a basic set of $S(S(X))$ is $s(C)$ for some clopen set $C\subseteq X$. That is, $\{\lambda\in S(S(X))\vert\lambda(C)=1\}$. Which functions of the form $\lambda_x$ are in $s(C)$? Exactly those for which $x\in C$. Since $C$ is clopen, this preimage is open. Two points $x_1$ and $x_2$ are sent to the same function $\lambda_{x_1}=\lambda_{x_2}$ if and only if every clopen set containing $x_1$ also contains $x_2$, and vice versa. That is, $x_1$ and $x_2$ must be in the same connected component. Indeed, if they were in different connected components, then there would be some clopen containing one but not the other. Conversely, if there is a clopen that contains one but not the other they can’t be in the same connected component. Thus this map $X\to S(S(X))$ collapses all the connected components of $X$ into points of $S(S(X))$. If this map $X\to S(S(X))$ is a homeomorphism, then no two points of $X$ are in the same connected component. Thus each singleton $\{x\}\subseteq X$ is a connected component, and we call the space “totally disconnected”. Clearly, such a space is in the image of the Stone space functor. On the other hand, if $X=S(\mathcal{B})$, then $S(S(X))=S(S(S(\mathcal{B})))\cong S(\mathcal{B})=X$, and so this is both a necessary and a sufficient condition. Thus the “Stone spaces” form the full subcategory of $\mathbf{CHaus}$, consisting of the totally disconnected compact Hausdorff spaces. Stone’s representation theorem shows us that this category is equivalent to the dual of the category of Boolean algebras. As a side note: I’d intended to cover the Stone-Čech compactification, but none of the references I have at hand actually cover the details. 
There's a certain level below which everyone seems to simply assert certain facts and take them as given, and I can't seem to reconstruct them myself.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 47, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9182084202766418, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/202910-tangent-lines-ellipse-external-point.html
# Thread:

1. ## Tangent lines to an ellipse from an external point

Given the equation of an ellipse, say $\frac{x^2}{9}+\frac{y^2}{16}=1$, find the equations of the two lines tangent to this ellipse passing through the external point P(5,6). Thank you in advance.

2. ## Re: Tangent lines to an ellipse from an external point

Originally Posted by Kaloda
Given the equation of an ellipse, say $\frac{x^2}{9}+\frac{y^2}{16}=1$, find the equations of the two lines tangent to this ellipse passing through the external point P(5,6). Thank you in advance.

1. Solve the equation for y^2.
2. Implicit differentiation. That gives the equation for the tangent lines.
3. Plug in point P.
4. What can you come up with?

-Dan

3. ## Re: Tangent lines to an ellipse from an external point

Compute the slope of the tangent at the point of contact, say $(\alpha,\beta)$, which is $\frac{dy}{dx} = \frac{-16\alpha}{9\beta}$. Now we have the slope of the line also as $\frac{6-\beta}{5-\alpha} = \frac{-16\alpha}{9\beta} \implies 16\alpha^2 + 9 \beta^2 = 54\beta+80\alpha$. From the equation of the ellipse we have $16\alpha^2 + 9 \beta^2 = 9\cdot16 = 144$, so $54\beta+80\alpha=144 \implies 40\alpha+27\beta=72$. Substitute this in $\frac{\alpha^2}{9} + \frac{\beta^2}{16} = 1$ and solve for $\beta$; you will get two points on the ellipse, and hence you can get two lines. ~Kalyan.

4. ## Re: Tangent lines to an ellipse from an external point

I had already figured it out but THANKS.
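A symbolic cross-check of this method (an editor's sketch in Python with SymPy, not part of the thread; it uses the contact-chord equation $40\alpha+27\beta=72$ derived above):

```import sympy as sp

al, be = sp.symbols('alpha beta', real=True)

# Contact points: on the ellipse and on the chord of contact of P(5, 6).
sols = sp.solve([sp.Eq(al**2/9 + be**2/16, 1),
                 sp.Eq(40*al + 27*be, 72)], [al, be])

for pa, pb in sols:
    x0, y0 = float(pa), float(pb)
    m = (6 - y0) / (5 - x0)  # slope of the line through P(5, 6)
    print(f"tangent: y - 6 = {m:.4f}*(x - 5), contact point ({x0:.4f}, {y0:.4f})")
```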
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8810870051383972, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/31329-permutation-combination-help.html
# Thread:

1. ## Permutation and Combination Help!

Question: A coin is tossed 10 times.
(a) How many different sequences of heads and tails are possible?
(b) How many different sequences containing six heads and four tails are possible?
(c) What is the probability of getting six heads and four tails?

Attempt:
(a) = 0 Heads or 1 Head or 2 Heads or 3 Heads or 4 Heads or 5 Heads or 6 Heads or 7 Heads or 8 Heads or 9 Heads or 10 Heads.
$= 10C0 + 10C1 + 10C2 + 10C3 + 10C4 + 10C5 + 10C6 + 10C7 + 10C8 + 10C9 + 10C10$
$= 1024$
(b) $= 10C6+10C4 = 420$
(c) No Idea!

2. Hello, looi76!

A coin is tossed 10 times.
(a) How many different sequences of heads and tails are possible?

Your answer is correct, but there's an easier approach. For each coin there are two possible states: Heads or Tails. With ten coins, there are $2^{10} \:=\:1024$ possible outcomes.

(b) How many different sequences containing 6 Heads and 4 Tails are possible?

There are ${10\choose6,4} \:=\:210$ ways.

(c) What is the probability of getting 6 Heads and 4 Tails?

There are 210 ways to get 6 Heads and 4 Tails out of the 1024 possible outcomes.

Therefore: $P(\text{6 Heads, 4 Tails}) \;=\;\frac{210}{1024} \;=\;\frac{105}{512}$
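A quick computational check of these counts (an editor's sketch in Python; `math.comb` requires Python 3.8 or later):

```from math import comb
from fractions import Fraction

total = 2 ** 10                # all head/tail sequences
favorable = comb(10, 6)        # sequences with exactly 6 heads
print(total, favorable, Fraction(favorable, total))  # 1024 210 105/512
```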
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9074071049690247, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/92754/list
This answer is not for vertex transitive hypergraphs (I have not noticed that condition)!

No simple necessary and sufficient condition can exist, as 3DM is NP-complete: http://en.wikipedia.org/wiki/3-dimensional_matching

Of course, if you are only looking for a sufficient condition, one can come up with several, e.g. see http://arxiv.org/abs/1101.5830 where it is proved by Imdadullah Khan that "A perfect matching in a 3-uniform hypergraph on $n=3k$ vertices is a subset of $\frac{n}{3}$ disjoint edges. We prove that if $H$ is a 3-uniform hypergraph on $n=3k$ vertices such that every vertex belongs to at least ${n-1\choose 2} - {2n/3\choose 2}+1$ edges then $H$ contains a perfect matching. We give a construction to show that this result is best possible."
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9076278209686279, "perplexity_flag": "head"}
http://psychology.wikia.com/wiki/Normal_Distribution
# Normal distribution

*Probability density function (figure): the green line is the standard normal distribution.*
*Cumulative distribution function (figure): colors match the pdf above.*

| Property | Expression |
| --- | --- |
| Parameters | $\mu$ location (real); $\sigma^2>0$ squared scale (real) |
| Support | $x \in (-\infty,+\infty)$ |
| pdf | $\frac1{\sigma\sqrt{2\pi}}\; \exp\left(-\frac{\left(x-\mu\right)^2}{2\sigma^2} \right)$ |
| cdf | $\frac12 \left(1 + \mathrm{erf}\,\frac{x-\mu}{\sigma\sqrt2}\right)$ |
| Mean | $\mu$ |
| Median | $\mu$ |
| Mode | $\mu$ |
| Variance | $\sigma^2$ |
| Skewness | 0 |
| Kurtosis | 0 |
| Entropy | $\ln\left(\sigma\sqrt{2\,\pi\,e}\right)$ |
| mgf | $M_X(t)= \exp\left(\mu\,t+\frac{\sigma^2 t^2}{2}\right)$ |
| Char. func. | $\phi_X(t)=\exp\left(\mu\,i\,t-\frac{\sigma^2 t^2}{2}\right)$ |

The normal distribution, also called Gaussian distribution, is an extremely important probability distribution in many fields. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and standard deviation ("variability"), respectively. The standard normal distribution is the normal distribution with a mean of zero and a standard deviation of one (the green curves in the plots to the right). It is often called the bell curve because the graph of its probability density resembles a bell.

## Overview

The normal distribution is a convenient model of quantitative phenomena in the natural and behavioral sciences. A variety of psychological test scores have been found to approximately follow a normal distribution. While the underlying causes of these phenomena are often unknown, the use of the normal distribution can be theoretically justified in situations where many small effects are added together into a score or variable that can be observed.

The normal distribution also arises in many areas of statistics: for example, the sampling distribution of the mean is approximately normal, even if the distribution of the population the sample is taken from is not normal. In addition, the normal distribution maximizes information entropy among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions.

## History

The normal distribution was first introduced by Abraham de Moivre in an article in 1733 (reprinted in the second edition of his The Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Pierre Simon de Laplace in his book Analytical Theory of Probabilities (1812), and is now called the theorem of de Moivre-Laplace. Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Adrien Marie Legendre in 1805.
Carl Friedrich Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors.

The name "bell curve" goes back to Jouffret, who first used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875. This terminology is unfortunate, since it reflects and encourages the fallacy that many or all probability distributions are "normal". (See the discussion of "occurrence" below.)

## Specification of the normal distribution

There are various ways to specify a random variable. The most visual is the probability density function (plot at the top), which represents how likely each value of the random variable is. The cumulative distribution function is a conceptually cleaner way to specify the same information, but to the untrained eye its plot is much less informative (see below). Equivalent ways to specify the normal distribution are: the moments, the cumulants, the characteristic function, the moment-generating function, and the cumulant-generating function. Some of these are very useful for theoretical work, but not intuitive. See probability distribution for a discussion. All of the cumulants of the normal distribution are zero, except the first two.

### Probability density function

The probability density function of the normal distribution with mean $\mu$ and variance $\sigma^2$ (equivalently, standard deviation $\sigma$) is an example of a Gaussian function,

$f(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \, \exp \left( -\frac{(x- \mu)^2}{2\sigma^2} \right).$

(See also exponential function and pi.) If a random variable $X$ has this distribution, we write $X$ ~ $N(\mu, \sigma^2)$. If $\mu = 0$ and $\sigma = 1$, the distribution is called the standard normal distribution and the probability density function reduces to

$f(x) = \frac{1}{\sqrt{2\pi}} \, \exp\left(-\frac{x^2}{2} \right).$

The image to the right gives the graph of the probability density function of the normal distribution for various parameter values.

Some notable qualities of the normal distribution:

• The density function is symmetric about its mean value.
• The mean is also its mode and median.
• 68.268949% of the area under the curve is within one standard deviation of the mean.
• 95.449974% of the area is within two standard deviations.
• 99.730020% of the area is within three standard deviations.
• 99.993666% of the area is within four standard deviations.
• The inflection points of the curve occur at one standard deviation away from the mean.
### Cumulative distribution function

The cumulative distribution function (cdf) is defined as the probability that a variable $X$ has a value less than or equal to $x$, and it is expressed in terms of the density function as

$F(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^x \exp\left( -\frac{(u - \mu)^2}{2\sigma^2} \right) \, du .$

The standard normal cdf, conventionally denoted $\Phi$, is just the general cdf evaluated with $\mu=0$ and $\sigma=1$,

$\Phi(x) = F(x;0,1) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{u^2}{2}\right) \, du .$

The standard normal cdf can be expressed in terms of a special function called the error function, as

$\Phi(z) = \frac{1}{2} \left[ 1 + \operatorname{erf} \left( \frac{z}{\sqrt{2}} \right) \right] .$

The inverse cumulative distribution function, or quantile function, can be expressed in terms of the inverse error function:

$\Phi^{-1}(p) = \sqrt2 \; \operatorname{erf}^{-1} \left(2p - 1 \right) .$

This quantile function is sometimes called the probit function. There is no elementary primitive for the probit function. This is not to say merely that none is known, but rather that the non-existence of such a function has been proved. Values of $\Phi(x)$ may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, or asymptotic series.
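The error-function identity above is easy to evaluate directly; for instance, in Python (an illustrative sketch; `math.erf` is in the standard library from version 3.2 on):

```from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Phi((x - mu)/sigma) via the error-function identity above."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Reproduce the 68-95-99.7 figures from the pdf section:
for k in (1, 2, 3, 4):
    print(k, normal_cdf(k) - normal_cdf(-k))
```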
### Generating functions

#### Moment generating function

The moment generating function is defined as the expected value of $\exp(tX)$. For a normal distribution, it can be shown that the moment generating function is

$M_X(t) = \mathrm{E}\left[ \exp(tX) \right] = \int_{-\infty}^{\infty} \frac {1} {\sigma \sqrt{2\pi} } \exp \left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) \exp (tx) \, dx = \exp \left( \mu t + \frac{\sigma^2 t^2}{2} \right)$

as can be seen by completing the square in the exponent.

#### Characteristic function

The characteristic function is defined as the expected value of $\exp (i t X)$, where $i$ is the imaginary unit. For a normal distribution, the characteristic function is

$\phi_X(t;\mu,\sigma) = \mathrm{E}\left[ \exp(i t X) \right] = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left(- \frac{(x - \mu)^2}{2\sigma^2} \right) \exp(i t x) \, dx = \exp \left( i \mu t - \frac{\sigma^2 t^2}{2} \right) .$

The characteristic function is obtained by replacing $t$ with $i t$ in the moment-generating function.

## Properties

Some of the properties of the normal distribution:

1. If $X \sim N(\mu, \sigma^2)$ and $a$ and $b$ are real numbers, then $a X + b \sim N(a \mu + b, (a \sigma)^2)$ (see expected value and variance).
2. If $X \sim N(\mu_X, \sigma^2_X)$ and $Y \sim N(\mu_Y, \sigma^2_Y)$ are independent normal random variables, then:
   • Their sum is normally distributed with $U = X + Y \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)$ (proof).
   • Their difference is normally distributed with $V = X - Y \sim N(\mu_X - \mu_Y, \sigma^2_X + \sigma^2_Y)$.
   • Both $U$ and $V$ are independent of each other.
3. If $X \sim N(0, \sigma^2_X)$ and $Y \sim N(0, \sigma^2_Y)$ are independent normal random variables, then:
   • Their product $X Y$ follows a distribution with density $p$ given by $p(z) = \frac{1}{\pi\,\sigma_X\,\sigma_Y} \; K_0\left(\frac{|z|}{\sigma_X\,\sigma_Y}\right),$ where $K_0$ is a modified Bessel function.
   • Their ratio follows a Cauchy distribution with $X/Y \sim \mathrm{Cauchy}(0, \sigma_X/\sigma_Y)$.
4. If $X_1, \cdots, X_n$ are independent standard normal variables, then $X_1^2 + \cdots + X_n^2$ has a chi-square distribution with n degrees of freedom.

### Standardizing normal random variables

As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal. If $X$ ~ $N(\mu, \sigma^2)$, then

$Z = \frac{X - \mu}{\sigma}$

is a standard normal random variable: $Z$ ~ $N(0,1)$. An important consequence is that the cdf of a general normal distribution is therefore

$\Pr(X \le x) = \Phi \left( \frac{x-\mu}{\sigma} \right) = \frac{1}{2} \left( 1 + \operatorname{erf} \left( \frac{x-\mu}{\sigma\sqrt{2}} \right) \right) .$

Conversely, if $Z$ ~ $N(0,1)$, then $X = \sigma Z + \mu$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. The standard normal distribution has been tabulated, and the other normal distributions are simple transformations of the standard one. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.

### Moments

Some of the first few moments of the normal distribution are:

| Number | Raw moment | Central moment | Cumulant |
| --- | --- | --- | --- |
| 0 | 1 | 0 | |
| 1 | $\mu$ | 0 | $\mu$ |
| 2 | $\mu^2 + \sigma^2$ | $\sigma^2$ | $\sigma^2$ |
| 3 | $\mu^3 + 3\mu\sigma^2$ | 0 | 0 |
| 4 | $\mu^4 + 6 \mu^2 \sigma^2 + 3 \sigma^4$ | $3 \sigma^4$ | 0 |

All cumulants of the normal distribution beyond the second cumulant are zero.

### Generating normal random variables

For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods; the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the Box-Muller transform. The Box-Muller transform takes two uniformly distributed values as input and maps them to two normally distributed values. This requires generating values from a uniform distribution, for which many methods are known. See also random number generators. The Box-Muller transform is a consequence of the fact that the chi-square distribution with two degrees of freedom (see property 4 above) is an easily-generated exponential random variable.
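A minimal implementation of the Box-Muller transform described above (an illustrative sketch; the guard against $u_1=0$ avoids taking $\log 0$):

```import math
import random

def box_muller():
    """Map two independent Uniform(0,1) draws to two independent
    standard normal draws."""
    u1 = 1.0 - random.random()   # in (0, 1], so log(u1) is defined
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))   # radius: sqrt of an exponential draw
    theta = 2.0 * math.pi * u2           # uniform angle
    return r * math.cos(theta), r * math.sin(theta)

z1, z2 = box_muller()     # two independent N(0,1) samples
x = 10.0 + 3.0 * z1       # a N(10, 9) sample via X = mu + sigma*Z
print(z1, z2, x)
```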
### The central limit theorem

The normal distribution has the very important property that under certain conditions, the distribution of a sum of a large number of independent variables is approximately normal. This is the central limit theorem. The practical importance of the central limit theorem is that the normal distribution can be used as an approximation to some other distributions.

• A binomial distribution with parameters $n$ and $p$ is approximately normal for large $n$ and $p$ not too close to 1 or 0 (some books recommend using this approximation only if $n p$ and $n(1 - p)$ are both at least 5; in this case, a continuity correction should be applied). The approximating normal distribution has mean $\mu = n p$ and variance $\sigma^2 = n p (1 - p)$.
• A Poisson distribution with parameter $\lambda$ is approximately normal for large $\lambda$. The approximating normal distribution has mean $\mu = \lambda$ and variance $\sigma^2 = \lambda$.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.

### Infinite divisibility

The normal distributions are infinitely divisible probability distributions.

### Stability

The normal distributions are strictly stable probability distributions.

### Standard deviation

In practice, one often assumes that data are from an approximately normally distributed population. If that assumption is justified, then about 68% of the values are within 1 standard deviation of the mean, about 95% of the values are within two standard deviations and about 99.7% lie within 3 standard deviations. This is known as the "68-95-99.7 rule".

## Normality tests

Normality tests check a given set of data for similarity to the normal distribution. The null hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small P-value indicates non-normal data.

## Related distributions

• $R \sim \mathrm{Rayleigh}(\sigma^2)$ is a Rayleigh distribution if $R = \sqrt{X^2 + Y^2}$ where $X \sim N(0, \sigma^2)$ and $Y \sim N(0, \sigma^2)$ are two independent normal distributions.
• $Y \sim \chi_{\nu}^2$ is a chi-square distribution with $\nu$ degrees of freedom if $Y = \sum_{k=1}^{\nu} X_k^2$ where $X_k \sim N(0,1)$ for $k=1,\cdots,\nu$ and are independent.
• $Y \sim \mathrm{Cauchy}(\mu = 0, \theta = 1)$ is a Cauchy distribution if $Y = X_1/X_2$ for $X_1 \sim N(0,1)$ and $X_2 \sim N(0,1)$ two independent normal distributions.
• $Y \sim \mbox{Log-N}(\mu, \sigma^2)$ is a log-normal distribution if $Y = \exp(X)$ and $X \sim N(\mu, \sigma^2)$.
• Relation to Lévy skew alpha-stable distribution: if $X\sim \textrm{Levy-S}\alpha\textrm{S}(2,\beta,\sigma/\sqrt{2},\mu)$ then $X \sim N(\mu,\sigma^2)$.

## Estimation of parameters

### Maximum likelihood estimation of parameters

Suppose $X_1,\dots,X_n$ are independent and identically distributed, and are normally distributed with expectation $\mu$ and variance $\sigma^2$. In the language of statisticians, the observed values of these random variables make up a "sample from a normally distributed population." It is desired to estimate the "population mean" $\mu$ and the "population standard deviation" $\sigma$, based on observed values of this sample. The joint probability density function of these random variables is

$f(x_1,\dots,x_n;\mu,\sigma) \propto \sigma^{-n} \prod_{i=1}^n \exp\left({-1 \over 2} \left({x_i-\mu \over \sigma}\right)^2\right).$

(Nota bene: Here the proportionality symbol $\propto$ means proportional as a function of $\mu$ and $\sigma$, not proportional as a function of $x_1,\dots,x_n$. That may be considered one of the differences between the statistician's point of view and the probabilist's point of view. The reason why this is important will appear below.)

As a function of $\mu$ and $\sigma$ this is the likelihood function

$L(\mu,\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\mu)^2 \over 2\sigma^2}\right).$

In the method of maximum likelihood, the values of $\mu$ and $\sigma$ that maximize the likelihood function are taken to be estimates of the population parameters $\mu$ and $\sigma$.

Usually in maximizing a function of two variables one might consider partial derivatives. But here we will exploit the fact that the value of $\mu$ that maximizes the likelihood function with $\sigma$ fixed does not depend on $\sigma$. Therefore, we can find that value of $\mu$, then substitute it for $\mu$ in the likelihood function, and finally find the value of $\sigma$ that maximizes the resulting expression.

It is evident that the likelihood function is a decreasing function of the sum

$\sum_{i=1}^n (x_i-\mu)^2.$

So we want the value of $\mu$ that minimizes this sum. Let $\overline{x}=(x_1+\cdots+x_n)/n$ be the "sample mean".
Observe that

$\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n((x_i-\overline{x})+(\overline{x}-\mu))^2 = \sum_{i=1}^n(x_i-\overline{x})^2 + 2\sum_{i=1}^n (x_i-\overline{x})(\overline{x}-\mu) + \sum_{i=1}^n (\overline{x}-\mu)^2 = \sum_{i=1}^n(x_i-\overline{x})^2 + 0 + n(\overline{x}-\mu)^2.$

Only the last term depends on $\mu$ and it is minimized by

$\hat{\mu}=\overline{x}.$

That is the maximum-likelihood estimate of $\mu$. Substituting that for $\mu$ in the sum above makes the last term vanish. Consequently, when we substitute that estimate for $\mu$ in the likelihood function, we get

$L(\overline{x},\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\overline{x})^2 \over 2\sigma^2}\right).$

It is conventional to denote the "loglikelihood function", i.e., the logarithm of the likelihood function, by a lower-case $\ell$, and we have

$\ell(\hat{\mu},\sigma)=[\mathrm{constant}]-n\log(\sigma)-{\sum_{i=1}^n(x_i-\overline{x})^2 \over 2\sigma^2}$

and then

${\partial \over \partial\sigma}\ell(\hat{\mu},\sigma) ={-n \over \sigma} +{\sum_{i=1}^n (x_i-\overline{x})^2 \over \sigma^3} ={-n \over \sigma^3}\left(\sigma^2-{1 \over n}\sum_{i=1}^n (x_i-\overline{x})^2 \right).$

This derivative is positive, zero, or negative according as $\sigma^2$ is between 0 and

${1 \over n}\sum_{i=1}^n(x_i-\overline{x})^2,$

or equal to that quantity, or greater than that quantity. Consequently this average of squares of residuals is the maximum-likelihood estimate of $\sigma^2$, and its square root is the maximum-likelihood estimate of $\sigma$.

#### Surprising generalization

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is subtle. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1×1 matrix than as a mere scalar. See estimation of covariance matrices.

### Unbiased estimation of parameters

The maximum likelihood estimator of the population mean $\mu$ from a sample is an unbiased estimator of the mean, as is the variance when the mean of the population is known a priori. However, if we are faced with a sample and have no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance $\sigma^2$ is:

$s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2.$
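A numerical illustration of the two variance estimates (an editor's sketch with NumPy; the data and parameters are arbitrary):

```import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=1000)

mu_hat = x.mean()                      # MLE of mu: the sample mean
var_mle = ((x - mu_hat) ** 2).mean()   # MLE of sigma^2 (divides by n)
var_unbiased = x.var(ddof=1)           # unbiased s^2 (divides by n-1)

print(mu_hat, var_mle, var_unbiased)
```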
## Occurrence

Approximately normal distributions occur in many situations, as a result of the central limit theorem. When there is reason to suspect the presence of a large number of small effects acting additively and independently, it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the Kolmogorov-Smirnov test.

Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called log-normal.

Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below).

To summarize, here is a list of situations where approximate normality is sometimes assumed; for a fuller discussion, see below.

• In counting problems (where the central limit theorem includes a discrete-to-continuum approximation) involving reproductive random variables, such as:
  • binomial random variables, associated with yes/no questions;
  • Poisson random variables, associated with rare events.
• In physiological measurements of biological specimens:
  • the logarithm of measures of size of living tissue (length, height, skin area, weight);
  • the length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
  • other physiological measures may be normally distributed, but there is no reason to expect that a priori.
• Measurement errors are assumed to be normally distributed, and any deviation from normality must be explained.
• Financial variables:
  • the logarithm of interest rates, exchange rates, and inflation; these variables behave like compound interest, not like simple interest, and so are multiplicative;
  • stock-market indices are supposed to be multiplicative too, but some researchers claim that they are Lévy-distributed variables instead of lognormal;
  • other financial variables may be normally distributed, but there is no reason to expect that a priori.
• Light intensity:
  • the intensity of laser light is normally distributed;
  • thermal light has a Bose-Einstein distribution on very short time scales, and a normal distribution on longer time scales due to the central limit theorem.

Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.

### Photon counting

Light intensity from a single source varies with time, and thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. In the classical theory of optical coherence, light is modelled as an electromagnetic wave, and correlations are observed and analyzed up to the second order, consistently with the assumption of normality. (See Gaussian stochastic process.) However, non-classical correlations are sometimes observed.

Quantum mechanics interprets measurements of light intensity as photon counting. The natural assumption in this setting is the Poisson distribution. When light intensity is integrated over times longer than the coherence time and is large, the Poisson-to-normal limit is appropriate. Correlations are interpreted in terms of "bunching" and "anti-bunching" of photons with respect to the expected Poisson behaviour. Anti-bunching requires a quantum model of light emission.

Ordinary light sources producing light by thermal emission display a so-called blackbody spectrum (of intensity as a function of frequency), and the number of photons at each frequency follows a Bose-Einstein distribution (a geometric distribution). The coherence time of thermal light is exceedingly short, and so a Poisson distribution is appropriate in most cases, even when the intensity is so low as to preclude the approximation by a normal distribution.

Laser light has an exactly Poisson photon-count distribution and long coherence times; the large intensities make it appropriate to use the normal distribution. It is interesting that the classical model of light correlations applies only to laser light, which is a macroscopic quantum phenomenon, while "ordinary" light sources follow neither the "classical" model nor the normal distribution.
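The Poisson-to-normal limit mentioned above is easy to see numerically; a small sketch (the rates and sample size are arbitrary choices): as the mean count per integration window grows, the standardized counts approach a standard normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

for lam in (2, 20, 2000):  # mean photon count per integration window
    counts = rng.poisson(lam, size=100_000)
    z = (counts - lam) / np.sqrt(lam)  # standardize: Poisson has mean = variance = lam
    # The Kolmogorov-Smirnov distance to the standard normal shrinks as lam grows.
    print(lam, stats.kstest(z, "norm").statistic)
```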
### Measurement errors

Normality is the central assumption of the mathematical theory of errors. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the residuals (as the errors are called in that setting) be independent and normally distributed. Any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected.

Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of error have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account.

### Physical characteristics of biological specimens

The overwhelming biological evidence is that bulk growth processes of living tissue proceed by multiplicative, not additive, increments, and that therefore measures of body size should at most follow a lognormal rather than a normal distribution. Despite common claims of normality, the sizes of plants and animals are approximately lognormal. The evidence and an explanation based on models of growth were first published in the classic book Problems of Relative Growth (Julian Huxley, 1932).

Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the joint distribution of sizes deviate from lognormality.

The assumption that the linear size of biological specimens is normal leads to a non-normal distribution of weight (since weight and volume scale roughly as the 3rd power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers, so the "problem" goes away if lognormality is assumed.

On the other hand, there are some biological measures where normality is assumed or expected:

• Blood pressure of adult humans is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed).
• The length of inert appendages such as hair, nails, teeth, claws and shells is expected to be normally distributed if measured in the direction of growth. This is because the growth of inert appendages depends on the size of the root, and not on the length of the appendage, and so proceeds by additive increments. Hence, we have an example of a sum of very many small increments (possibly lognormal) approaching a normal distribution. Another plausible example is the width of tree trunks, where a new thin ring is produced every year whose width is affected by a large number of factors.

### Financial variables

Because of the exponential nature of interest and inflation, financial indicators such as interest rates, stock values, or commodity prices make good examples of multiplicative behavior. As such, they should not be expected to be normal, but lognormal. Benoît Mandelbrot, the popularizer of fractals, has claimed that even the assumption of lognormality is flawed, and advocates the use of log-Lévy distributions.
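The multiplicative-growth argument above is easy to illustrate with a toy simulation (all parameter values are arbitrary choices): a product of many small independent positive factors is approximately lognormal, so the raw values are skewed while their logarithms look normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Each "specimen" grows by 400 small independent multiplicative increments.
factors = rng.normal(loc=1.0, scale=0.02, size=(10_000, 400))
size = factors.prod(axis=1)

print(stats.skew(size))  # clearly positive: the sizes themselves are skewed

logs = np.log(size)
z = (logs - logs.mean()) / logs.std()
print(stats.kstest(z, "norm").statistic)  # small: the log-sizes are close to normal
```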
It is accepted that financial indicators deviate from lognormality. The distribution of price changes on short time scales is observed to have "heavy tails", so that very small or very large price changes are more likely to occur than a lognormal model would predict. Deviation from lognormality indicates that the assumption of independence of the multiplicative influences is flawed.

### Lifetime

Other examples of variables that are not normally distributed include the lifetimes of humans or mechanical devices. Examples of distributions used in this connection are the exponential distribution (memoryless) and the Weibull distribution. In general, there is no reason that waiting times should be normal, since they are not directly related to any kind of additive influence.

### Test scores

A great deal of confusion exists over whether or not IQ test scores and intelligence are normally distributed. As a deliberate result of test construction, IQ scores are always and obviously normally distributed for the majority of the population. Whether intelligence is normally distributed is less clear. The difficulty and number of questions on an IQ test are decided based on which combinations will yield a normal distribution. This does not mean, however, that the information is in any way being misrepresented, or that there is any kind of "true" distribution that is being artificially forced into the shape of a normal curve. Intelligence tests can be constructed to yield any kind of score distribution desired. All true IQ tests have a normal distribution of scores as a result of test design; otherwise IQ scores would be meaningless without knowing what test produced them. Intelligence tests in general, however, can produce any kind of distribution.

For an example of how arbitrary the distribution of intelligence test scores really is, imagine a 20-item multiple-choice test composed entirely of problems on finding the areas of circles. Such a test, if given to a population of high-school students, would likely yield a U-shaped distribution, with the bulk of the scores being very high or very low, instead of a normal curve. If a student understands how to find the area of a circle, he can likely do so repeatedly and with few errors, and thus would get a perfect or high score on the test, whereas a student who has never had geometry lessons would likely get every question wrong, possibly with a few right due to lucky guessing. If a test is composed mostly of easy questions, then most of the test-takers will have high scores and very few will have low scores. If a test is composed entirely of questions so easy or so hard that every person gets either a perfect score or a zero, it fails to make any kind of statistical discrimination at all and yields a rectangular distribution. These are just a few examples of the many varieties of distributions that could theoretically be produced by carefully designing intelligence tests.

Whether intelligence itself is normally distributed has been at times a matter of some debate. Some critics maintain that the choice of a normal distribution is entirely arbitrary. Brian Simon once claimed that the normal distribution was specifically chosen by psychometricians to falsely support the idea that superior intelligence is held only by a small minority, thus legitimizing the rule of a privileged elite over the masses of society.
Historically, though, intelligence tests were designed without any concern for producing a normal distribution, and scores came out approximately normally distributed anyway. American educational psychologist Arthur Jensen claims that any test that contains "a large number of items," "a wide range of item difficulties," "a variety of content or forms," and "items that have a significant correlation with the sum of all other scores" will inevitably produce a normal distribution. Furthermore, there exist a number of correlations between IQ scores and other human characteristics that are more demonstrably normally distributed, such as nerve conduction velocity and the glucose metabolism rate of a person's brain, supporting the idea that intelligence is normally distributed.

Some critics, such as Stephen Jay Gould in his book The Mismeasure of Man, question the validity of intelligence tests in general, not just the claim that intelligence is normally distributed. For further discussion see the article IQ.

The Bell Curve is a controversial book on the topic of the heritability of intelligence. However, despite its title, the book does not primarily address whether IQ is normally distributed.

## See also

• Normally distributed and uncorrelated does not imply independent (an example of two normally distributed uncorrelated random variables that are not independent; this cannot happen in the presence of joint normality)
• Lognormal distribution
• Multivariate normal distribution
• Probit function
• Statistical sample parameters
• Student's t-distribution
• Behrens-Fisher problem

## References

• John Aldrich. Earliest Uses of Symbols in Probability and Statistics. Electronic document, retrieved March 20, 2005. (See "Symbols associated with the Normal Distribution".)
• Abraham de Moivre (1738). The Doctrine of Chances.
• Stephen Jay Gould (1981). The Mismeasure of Man. First edition. W. W. Norton. ISBN 0393014894.
• R. J. Herrnstein and Charles Murray (1994). The Bell Curve: Intelligence and Class Structure in American Life. Free Press. ISBN 0029146739.
• Pierre-Simon Laplace (1812). Analytical Theory of Probabilities.
• Jeff Miller, John Aldrich, et al. Earliest Known Uses of Some of the Words of Mathematics. In particular, the entries for "bell-shaped and bell curve", "normal" (distribution), "Gaussian", and "Error, law of error, theory of errors, etc.". Electronic documents, retrieved December 13, 2005.
• S. M. Stigler (1999). Statistics on the Table, chapter 22. Harvard University Press. (History of the term "normal distribution".)
• Eric W. Weisstein et al. Normal Distribution at MathWorld. Electronic document, retrieved March 20, 2005.
• Marvin Zelen and Norman C. Severo (1964). Probability Functions. Chapter 26 of Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, ed. by Milton Abramowitz and Irene A. Stegun. National Bureau of Standards.
http://math.stackexchange.com/questions/189575/evaluate-the-integral-int-02-pi-cos2-theta-over-a-b-cos-theta?answertab=oldest
# Evaluate the integral $\int_0^{2 \pi} {\cos^2 \theta \over a + b \cos \theta}\; d\theta$

Given $a > b > 0$, what is the fastest possible way to evaluate the following integral using the residue theorem? I'm confused whether to take the imaginary part of $z^2$ or of the whole integral. $$\int_0^{2 \pi} {\cos^2 \theta \over a + b \cos \theta}\; d\theta$$

-

## 3 Answers

If $z=e^{i\theta}$, then $\cos\theta=\dfrac 1 2\left(z+\dfrac1z\right)$, and $dz=ie^{i\theta}\,d\theta=iz\,d\theta$, so $d\theta = -i\dfrac{dz}{z}$. Then $$\int_0^{2\pi} \frac{\cos^2\theta}{a+b\cos\theta} d\theta = \int\limits_\text{circle} \frac{\frac14\left(z+\frac1z\right)^2}{a+\frac b2\left(z+\frac1z\right)}(-i)\frac{dz}{z} = -i\int\limits_\text{circle} \frac{z^4+2z^2+1}{2z^2(2az+b(z^2+1))} dz.$$ This function has a double pole at $z=0$ and simple poles at $\dfrac{-a\pm\sqrt{a^2-b^2}}{b}$. So the question is: for which values of $a,b$ are the simple poles inside the circle? If there's just one simple pole inside the circle at $c$, then the integral becomes $$\int\limits_\text{circle} \frac{g(z)}{z-c} dz = 2\pi i g(c),$$ where $g(z)$ is whatever's left after you've factored out $1/(z-c)$. If there's more than one simple pole, you need a sum: take values of $g$ at those points and sum them.

-

Actually I found I could substitute $\cos^2 \theta = Im(z^2)$ and simplify things a lot :D The whole integral will be $Im ( \int z^2/...)$ – hasExams Sep 1 '12 at 17:41

We have $e^{2i\theta}=\cos[2\theta]+i\sin[2\theta]$. You take the real part ($\cos^{2}[\theta]-\sin^{2}[\theta]$), not the imaginary part (which is $2\sin[\theta]\cos[\theta]$ instead). – user32240 Sep 2 '12 at 6:17

1 @Michael: We have $\cos[\theta]=\frac{1}{2}\left((\cos[\theta]+i\sin[\theta])+(\cos[\theta]-i\sin[\theta])\right)$, you missed a factor. – user32240 Sep 2 '12 at 6:27

@user32240: Fixed now, I hope. – Michael Hardy Sep 2 '12 at 18:34

We should have $$\frac{\cos^{2}\theta}{a+b\cos\theta}=\frac{1}{b}\cos\theta-\frac{\frac{a}{b}\cos\theta}{a+b\cos\theta}.$$ The first part of the definite integral is easily evaluated. The second part is the same as evaluating $$\int \frac{\cos\theta}{a+b\cos\theta}d\theta=\int\frac{1}{b}\left(1-\frac{a}{a+b\cos\theta}\right)d\theta.$$ So we only need to evaluate $$\int\frac{1}{a+b\cos\theta}d\theta.$$ This can be done by various trigonometric identities, such as the $\tan(\theta/2)$ substitution. A detailed step-by-step proof can be found here (click show steps).

-

thanks for the idea ... I think I can evaluate $\int\frac{1}{a+b\cos\theta}\,d\theta$ very easily – hasExams Sep 1 '12 at 8:25

well, I am not sure if this is the "fastest possible" way to do it. I guess you are expecting some contour integral or manipulations involving $\Gamma$ functions. – user32240 Sep 1 '12 at 8:28

I'm supposed to use a contour integral ... the $z^2$ part is awful – hasExams Sep 1 '12 at 8:40

Hint: put $z={\rm e}^{i\theta}$ and change the integral to the form $\int_{|z|=1}f(z)\,dz \,,$ then use the residue theorem. Exploit the identity $$\cos\theta = \frac{1}{2}\left({\rm e}^{i \theta} + {\rm e}^{- i \theta}\right)\,.$$

-

Are you suggesting doing an integral of the form $\int_{|z|=1}\frac{z^{2}}{a+bz}$? But though it has an obvious pole at $z=-\frac{a}{b}$ (which is outside of the contour), I do not see how to get the original integral from $\cos[\theta]=\frac{z+\overline{z}}{2}$. But thanks for the hint. – user32240 Sep 1 '12 at 8:37

yeah ... kinda something like that. But instead of squaring ... you can take the real part of $z^2$, which makes the problem very simple.
Kinda confused on that – hasExams Sep 1 '12 at 8:38

I am not sure how to deal with the bottom part; $(a+b\cos[\theta])$ is not easily treatable. – user32240 Sep 1 '12 at 8:45
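Carrying the decomposition in the second answer through with $\int_0^{2\pi}\frac{d\theta}{a+b\cos\theta}=\frac{2\pi}{\sqrt{a^2-b^2}}$ gives the closed form $$\int_0^{2 \pi} {\cos^2 \theta \over a + b \cos \theta}\; d\theta = \frac{2\pi a}{b^2}\left(\frac{a}{\sqrt{a^2-b^2}}-1\right),$$ and a quick numerical check (a sketch; the test values of $a$ and $b$ are arbitrary) confirms it:

```python
import numpy as np
from scipy.integrate import quad

def closed_form(a, b):
    # 2*pi*a/b^2 * (a / sqrt(a^2 - b^2) - 1), valid for a > b > 0
    return 2 * np.pi * a / b**2 * (a / np.sqrt(a**2 - b**2) - 1)

for a, b in [(2.0, 1.0), (5.0, 3.0), (1.1, 1.0)]:
    numeric, _ = quad(lambda t: np.cos(t) ** 2 / (a + b * np.cos(t)), 0, 2 * np.pi)
    print(a, b, numeric, closed_form(a, b))  # the last two columns agree
```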
http://mathhelpforum.com/discrete-math/154332-sets-proof.html
# Thread:

1. ## A sets proof

Hi again, another question I am having trouble starting:

Let $I$ be a nonempty subset of $\mathbb{Z}$ such that: $(\forall x \in I)(\forall y \in I)[(x-y) \in I]$ and $(\forall z \in \mathbb{Z})(\forall x \in I)[z \cdot x \in I]$. Show that for some $n \in I$, $I = \{z \in \mathbb{Z} \colon z = xn \ \text{for some} \ x \in \mathbb{Z}\}$.

Please, no full solutions, but I'd appreciate it if someone could show me how to proceed.

2. Originally Posted by nzmathman

Let $I$ be a nonempty subset of $\mathbb{Z}$ such that: $(\forall x \in I)(\forall y \in I)[(x-y) \in I]$ and $(\forall z \in \mathbb{Z})(\forall x \in I)[z \cdot x \in I]$. Show that for some $n \in I$, $I = \{z \in \mathbb{Z} \colon z = xn \ \text{for some} \ x \in \mathbb{Z}\}$.

1) Take a minimal positive element in $I$ (why is there such an element?)
2) Applying Euclid's algorithm, show that any element in $I$ is an integer multiple of the element you found in (1).
3) Go grab a beer and be happy.

Tonio

3. Originally Posted by tonio

1) Take a minimal positive element in $I$ (why is there such an element?)
2) Applying Euclid's algorithm, show that any element in $I$ is an integer multiple of the element you found in (1).
3) Go grab a beer and be happy.

Tonio

I know why a minimal element of $I$ exists, but why can we conclude a minimal positive element exists? Also, how would I apply the Euclidean algorithm to something this abstract?

4. "I know why a minimal element of $I$ exists, but why can we conclude a minimal positive element exists?"

There is no minimal element of $I$: take any $x\in I$; then the set $\{zx\mid z\in\mathbb{Z}\}\subseteq I$ does not have a minimal element. To prove that there is a minimal positive element, it is sufficient to know that there is any positive element. This is natural numbers we are talking about.

"Also, how would I apply the Euclidean algorithm to something this abstract?"

Take any (w.l.o.g. positive) $x\in I$ and the minimal positive element $m$ of $I$. Then the GCD of $x$ and $m$ is a linear combination of $x$ and $m$ by Bézout's identity, an application of Euclid's algorithm. From there it is easy to show that $m$ divides $x$.
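Not a proof, just a small computational illustration of the statement (the generating set, window bound, and multiplier range below are arbitrary choices): repeatedly closing a set under differences and integer multiples collapses it onto the multiples of its least positive element, here $\gcd(4,6)=2$.

```python
# Close {4, 6} under x - y and z * x inside a finite window, and check that
# the result is exactly the set of multiples of gcd(4, 6) = 2 in that window.
BOUND = 30
I = {4, 6}
changed = True
while changed:
    changed = False
    new = {x - y for x in I for y in I} | {z * x for z in range(-3, 4) for x in I}
    new = {v for v in new if abs(v) <= BOUND}
    if not new <= I:
        I |= new
        changed = True

evens = [v for v in range(-BOUND, BOUND + 1) if v % 2 == 0]
print(sorted(I) == evens)  # True
```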
http://physics.stackexchange.com/questions/tagged/general-relativity
# Tagged Questions A theory that describes how matter produces and responds to the geometry of space and time. It was first published by Einstein in 1915 and is currently used to study the structure and evolution of the universe, as well as having practical applications like GPS. 1answer 38 views ### Derivation of Weyl tensor I want to derive the Weyl tensor along the lines of this derivation, but I am unable to complete it. (I am only interested in $4$ dimension for now.) Every contraction I perform gives either \$0=R + 3 ... 0answers 54 views ### How can superstring theories unify general relativity and quantum theory when no prediction can be made? I am a newbie to superstring theories, but I came into this question: so superstring theories purport to unify general relativity and quantum theory. However, there is yet no definitive way to test ... 1answer 52 views ### Some sort of conservation equation As far as I know, in General Relativity, an expression of the kind $\nabla_{\mu} X = 0$ states that, associated to $X$, there exist a charge which is conserved. The first example that comes to mind is ... 0answers 51 views ### Gravitational redshift of Hawking radiation How can Hawking radiation with a finite (greather than zero) temperature come from the event horizon of a black hole? A redshifted thermal radiation still has Planck spectrum but with the lower ... 1answer 76 views ### Killing vector argument gone awry? What has gone wrong with this argument?! The original question A space-time such that $$ds^2=-dt^2+t^2dx^2$$ has Killing vectors \$(0,1),(-\exp(x),\frac{\exp(x)}{t}), ... 1answer 58 views ### Assuming space is infinite can our observable universe be an island amongst an archipelego? According to recent measurements our observable universe is roughly 93 billion light years in diameter; also it appears (according to WMAP measurements) that spacetime is flat. Supposing space is ... 2answers 108 views ### Geodesic equations I am having trouble understanding how the following statement (taken from some old notes) is true: For a 2 dimensional space such that $$ds^2=\frac{1}{u^2}(-du^2+dv^2)$$ the timelike geodesics ... 1answer 42 views ### Does the actual curvature of spacetime hold energy? My understanding of GR is that curvature of spacetime reflects the density of energy-matter. Does the curvature itself have energy? Or if energy is assigned to curvature it simply reflects the energy ... 2answers 161 views ### Excluding big bang itself, does spacetime have a boundary? My understanding of big bang cosmology and General Relativity is that both matter and spacetime emerged together (I'm not considering time zero where there was a singularity). Does this mean that ... 1answer 131 views ### Our Universe Can't be Looped? [duplicate] With reference to the Twin-Paradox (I am new with this), now information of who has actually aged comes from the fact that one of the twins felt some acceleration. So if universe was like a loop, and ... 0answers 50 views ### How to keep the clock of a spaceship synchronised to the clock of an observer? [duplicate] I read that the clocks of GPS satellites seem to run slower than the clock of stationary observer, because of their speed (special relativity) and seem to run faster than the clock of stationary ... 0answers 67 views ### Curvature and spacetime Suppose that it is given that the Riemann curvature tensor in a special kind of spacetime of dimension $d\geq2$ can be written as $$R_{abcd}=k(x^a)(g_{ac}g_{bd}-g_{ad}g_{bc})$$ where $x^a$ is a ... 
1answer 59 views ### Evaluating the Ricci tensor effectively If given a metric of the form $$ds^2=\alpha^2(dr^2+r^2d\theta^2)$$ where $\alpha=\alpha(r)$, then can one immediately conclude that $$R_{\theta\theta}=r^2R_{rr}$$ where $R_{ab}$ is the Ricci tensor, ... 3answers 186 views ### Why Can We Observe Space Curvature / Warping At All? I don't understand why we are able to see and measure curvature / warping of space at all. Space as I understand it determines distances between objects, so if space were "compressed" or warped, ... 1answer 129 views +50 ### Is period of rotation relative? My question is inspired by the following answer by voix to another problem: "There is a real object with relativistic speed of surface - millisecond pulsar. The swiftest spinning pulsar currently ... 2answers 78 views ### Are there problems solvable with Newtonian physics, GR and QM? First I must let you know that I don't have much understanding of neither GR nor quantum mechanics, and therefore this question. I've mentally pictured Newtonian physics, GR and quantum mechanics all ... 0answers 33 views ### metric extension outside the light cone Could anyone explain what "extending the solution" beyond the past light cone means? Say, for example, if I have a metric (no coordinate singularities), how can I extend it to the outside of the past ... 2answers 100 views ### Negative potential energy of gravity Does the negative potential energy in the gravitational field have to be considered in calculating the total mass of the system in question (because of $E=mc^2$)? If so it seems to me that the ... 1answer 48 views ### Why does the Kruskal diagram extend to all 4 quadrants? Why is it that the Kruskal diagram is always seen extended to all 4 quadrants when the definitions of the $U,V$ coordinates don't seem to suggest that the coordinates are not defined in, say, the 3rd ... 1answer 109 views ### Christoffel symbol for Schwarzschild metric I know that the christoffel (second kind) can be defined like this: \Gamma^m_{ij} = \frac{1}{2} g^{mk}(\frac{\partial g_{ki}}{\partial U^j}+\frac{\partial g_{jk}}{\partial U^i}-\frac{\partial ... 1answer 85 views ### When a variation of a tensor is not a tensor? In a comment about variation of metric tensor it was shown that $$\delta g_{\mu\nu}=-g_{\mu\rho}g_{\nu\,\sigma}\delta g^{\rho\,\sigma}$$ which is contrary to the usual rule of lowering indeces of a ... 0answers 63 views ### Ising Hamiltonian for relativistic particles An Ising system is described by the simple Hamiltonian: $$H = \sum\limits_{i} c_{1i} x_{i} + \sum\limits_{i,j} c_{2ij} x_i x_j \,\,\,\,\,\,\,\,\,\,(1)$$ Here the $x_i$ are spins (+1 or -1 in units ... 0answers 34 views ### Null vector fields given Bondi metric I'm trying to understand how to compute the null future-directed vector fields if I have a given (Bondi) metric $g=-e^{2\nu}du^{2}-2e^{\nu+\lambda}dudr+r^{2}d\Omega$ with $d\Omega$-standard metric ... 3answers 112 views ### Do velocity and acceleration time dilation factors add? For a spinning space station such as in 2001, A Space Odyssey, what would be the time slowing in the perimeter of the spinning space station with respect to the center axis of the station? The ... 2answers 94 views ### Is time going backwards beyond the event horizon of a black hole? For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? This question is a followup ... 
1answer 83 views ### Stress energy tensor of a perfect fluid and four-velocity In the following demonstration, there is an error, but I cannot find where. (I explicitely put the $c^2$ to keep track of units). We consider a metric $g_{\mu\nu}$ with a signature $(-, +, +, +)$ : ... 2answers 104 views ### What is a sudden singularity? I've seen references to some sort of black hole (or something) referred to as a sudden singularity, but I haven't seen a short clear definition of what this is for the layman. 1answer 74 views ### The most general form of the metric for a homogeneous, isotropic and static space-time What is the most general form of the metric for a homogeneous, isotropic and static space-time? For the first 2 criteria, the Robertson-Walker metric springs to mind. (I shall adopt the (-+++) ... 1answer 61 views ### Sign crazyness on the stress energy tensor? I would like to know on what depends the sign of the stress energy tensor in the following formula : $T_{\mu\nu}=\pm(\rho c^2+P)u_{\mu}u_{\nu} \pm P g_{\mu\nu}$ In my case the metric is equal to ... 2answers 83 views ### Are gravitational time dilation and the time dilation in special relativity independent? There are two kinds of time dilation: One because the other clock moves fast relative to me (special relativity). Another one because the other clock is in a stronger gravitational field (general ... 1answer 42 views ### “WLOG” re Schwarzschild geodesics Why, when studying geodesics in the Schwarzschild metric, one can WLOG set $$\theta=\frac{\pi}{2}$$ to be equatorial? I assume it is so because when digging around the internet, most references seem ... 1answer 55 views ### Gravitational time delay and contraction of matter How can any matter contract to its Schwarzschild radius if gravitational time dilation clearly states that all clocks stop at that point. So any contraction any movement would stop. If that is so why ... 0answers 59 views ### Do we expect that the universe is simply-connected? [duplicate] I heard recently that the universe is expected to be essentially flat. If this is true, I believe this means (by the 3d Poincare conjecture) that the universe cannot be simply-connected, since the ... 1answer 133 views ### General relativity and the conservation of momentum I'm trying to understand the conservation of momentum in general relativity. Due to the curvature of space-time by matters and energy, the path of a linear motion appears to be distorted. Therefore ... 0answers 44 views ### Einstein's Equation [closed] Rewrite $T_{ab}-\frac{1}{2}R_{ab}=-8\pi T_{ab}$ in the form $$R_{ab}= \alpha T_{ab}+ \beta T^c_cg_{ab},$$ Where $\alpha$ and $\beta$ are constants. Please, any gist on this? 0answers 48 views ### Singularities in Schwarzchild space-time Can anyone explain when a co-ordinate and geometric singularity arise in Schwarzschild space-time with the element ... 2answers 119 views ### First and second fundamental forms I'm writing notes about the 3+1 formalism in general relativity, for myself. Inevitably I came across the notions of first and second fundamental forms. Mathematically, it is clear how these objects ... 0answers 56 views ### Lecture Notes confusion: Constructing the Einstein Equation This question is on the construction of the Einstein Field Equation. In my notes, it is said that The most general form of the Ricci tensor $R_{ab}$ is $$R_{ab}=AT_{ab}+Bg_{ab}+CRg_{ab}$$ ... 0answers 97 views ### How to calculate Riemann and Ricci tensors for a sphere? 
[closed] Let's have the metric for a sphere: $$dl^{2} = R^{2}\left(d\psi ^{2} + sin^{2}(\psi )(d \theta ^{2} + sin^{2}(\theta ) d \varphi^{2})\right).$$ I tried to calculate Riemann or Ricci tensor's ... 1answer 38 views ### Contraction of the metric tensor This is perhaps a simple tensor calculus problem -- but I just can't see why... I have notes (in GR) that contains a proof of the statement In space of constant sectional curvature, $K$ is ... 0answers 53 views ### Stress-energy tensor of point particle when the trajectory is a transcendental equation? I'm working through Carroll's GR book, and Problem 7.8 is not coming together. I'm missing something idiotically simple, but I'm not sure if I can cleanly write a stress-energy tensor for a point ... 2answers 102 views ### Is Earth's orbit around the sun affected by the ~8 minutes light delay? Gravitational change occurs at the speed of light. As a consequence, we experience on Earth the gravitational attraction of the sun based on its position relative to us ~8 minutes ago. How does this ... 1answer 55 views ### A physical sense of an Inertial frame Definition clarification needed, please: I am hoping to get physical sense of an "inertial frame". Do inertial reference frames all have zero curvature for their spacetime? So is an inertial frame ... 1answer 51 views ### Local inertial coordinates It is said that we can introduce local inertial coordinates for any timelike geodesic. But why only for timelike geodesics? What about null geodesics? Perhaps it has to do with invertibility or ... 5answers 137 views ### How universal gravitation falls short As a non physicist I can understand how Newtonian mechanics falls short in cases of high velocity etc. and is properly generalized by the special theory of relativity. What is not clear to me is how ... 0answers 48 views ### The interior of a cylinder as an Einstein manifold The interior of a curved cylinder is an Einstein manifold (the Ricci Curvature Tensor is proportional to the Metric $R_{\mu\nu}=kg_{\mu\nu}$) since it has a constant curvature. However, I was unable ... 1answer 36 views ### Zero-zero (lower indicies) term for affine connection ($\Gamma_{00}^\lambda$), why do some terms dissapear? More simply a tensor algebra question, but in General relativity I have the following when I calculate $\Gamma_{00}^\lambda$:- \Gamma_{00}^\lambda = \frac{1}{2}g^{\nu\lambda}\left( \frac{\partial ... 1answer 50 views ### Does non-mass-energy generate a gravitational field? At a very basic level I know that gravity isn't generated by mass but rather the stress-energy tensor and when I wave my hands a lot it seems like that implies that energy in $E^2 = (pc)^2 + (mc^2)^2$ ... 0answers 58 views ### Wald problem 11.4 Consider a stationary solution with stress-energy $T_{ab}$ in the context of linearized gravity. Choose a global inertial coordinate system for the flat metric $\eta_{ab}$ so that the "time direction" ... 0answers 52 views ### How can I simulate metric equations in relativity theories? [closed] I want to simulate tensor equations of general relativity like Einstein's field equation, what I have to do? What PC program I've to use?
http://mathhelpforum.com/advanced-math-topics/34903-algorthims.html
# Thread:

1. ## Algorithms

I need to verify that the following algorithm works on the list 2, 3, 6, 2, 6:

```
begin
  input x_1, x_2, ..., x_n
  count := 0
  for i := 2 to n do
    begin
      for j := 1 to (i-1) do
        begin
          if x_i = x_j then
            begin
              count := count + 1
            end
        end
    end
  output count
end
```

Much thanks.

2. What's it supposed to do? If you are counting duplicates, that's not it. Try 2 3 6 2 6 5 5 5 5 to disprove it.

3. It is meant to count the number of pairs of integers in a list.

4. In your universe of inputs, will there EVER be more than two of any single integer? Your algorithm will produce 3 from 2 2 2. If the trio condition never occurs, then perhaps you are done.

5. Originally Posted by TKHunny

Your algorithm will produce 3 from 2 2 2. If the trio condition never occurs, then perhaps you are done.

I don't understand that; how can you get 2 2 2, when there are only two 2's in the list?

6. I'm reviewing your algorithm, not your data. I made up my own data. Why would you need an algorithm to count pairs if the data you provided was the entire universe of data? Just count them. Pair of 2's, pair of 6's. There, 2 pairs. Done. Should your algorithm work with ANY data, or just some clear subset, such as that you have provided? If it needs to work on ANY data, then you'll get some double counting. If you can guarantee that there never will be a trio or a quartet or worse, then perhaps you are done.

7. It only needs to work with the subset that I gave of 2, 3, 6, 2, 6. Many, many thanks.
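A direct Python transcription of the pseudocode (the function name is mine) makes TKHunny's point concrete: it returns 2 on the list from the question, but 3 on 2 2 2, because trios get double counted.

```python
def count_equal_pairs(xs):
    """Count index pairs (j, i) with j < i and xs[i] == xs[j],
    exactly as the nested loops in the pseudocode do."""
    count = 0
    for i in range(1, len(xs)):  # i = 2 .. n in the 1-based pseudocode
        for j in range(i):       # j = 1 .. i-1
            if xs[i] == xs[j]:
                count += 1
    return count

print(count_equal_pairs([2, 3, 6, 2, 6]))  # 2: one pair of 2's, one pair of 6's
print(count_equal_pairs([2, 2, 2]))        # 3: a trio contributes three pairs
```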
http://mathoverflow.net/questions/52060/what-is-the-shortest-proof-of-the-existence-of-a-prime-between-p-and-p2-ot/52096
## What is the shortest proof of the existence of a prime between $p$ and $p^2$? Other examples? [closed]

1) It is well known that between a prime $p$ and $p^2$ there always exists a prime, but what is the shortest proof of that (by elementary methods or not)? (One can say that we can have it as a corollary of Bertrand's postulate, but that is a stronger result.)

2) I ask it as an example: are there any characteristic examples of results whose first proof was really long compared to some proof that someone found later?

3) Or of results whose only known proof/proofs is/are a corollary of the proof of something stronger? (Maybe the one that I give is not an example for this.)

NOTE: I was asked to change the title to a more precise one.

-

1 Maybe your (second) question has already been answered at mathoverflow.net/questions/43820/… – Gerry Myerson Jan 14 2011 at 11:14

2 So far as I know, the bit about finding a prime between $p$ and $p^2$ is a good example of what you're asking for: off the top of my head, I don't know a shorter proof than the one which establishes Bertrand's Postulate (i.e., an elementary but somewhat tricky couple of pages), which is a much stronger result. But I think your question "[A]re there any results that their first proof was large comparing to some proof that someone found later?" is far too broad for this site. (Certainly the answer is yes: a large percentage of first proofs are longer than what is eventually found.) – Pete L. Clark Jan 14 2011 at 11:22

3 There is no considerable advantage of enlarging the interval: this only simplifies the starting verification. I vote to close as I see no mathematical question. If the question is nevertheless of interest, community wiki mode sounds more appropriate. – Wadim Zudilin Jan 14 2011 at 12:07

1 Asterios, could you make the title more precise? "Shortest proof" of what? – arsmath Jan 14 2011 at 14:05

4 -1. The answer to 2. is "yes, lots" and the answer to 3. is "yes, you can generate them more or less randomly and you will not derive any insights from the answers. E.g. every holomorphic function $f:\mathbb{C}\rightarrow\mathbb{C}$ satisfying $f(5)\neq \pi$ is infinitely differentiable." I dare you to prove this statement without proving something stronger at the same time. – Alex Bartel Jan 14 2011 at 14:34

show 6 more comments

## 3 Answers

It is possible to shorten the proof of Bertrand's Postulate so that it proves only the above: we can throw away the usually-proven upper bound on the primorial. Explicitly, following Wikipedia's "Proof of Bertrand's postulate":

Lemma 1: $$\frac{4^{\lfloor n^2/2 \rfloor}}{2\lfloor n^2/2 \rfloor+1} < \binom{n^2}{\lfloor n^2/2 \rfloor}$$

For a fixed prime $p$, define $R(p,n)$ to be the highest natural number $x$ such that $p^x$ divides $\binom{n}{\lfloor n/2 \rfloor}$.

Lemma 2: $$p^{R(p,n)} \le n+1$$

If there are no primes between $n$ and $n^2$, then: $$\binom{n^2}{\lfloor n^2/2 \rfloor } = \prod_{p\le n} p^{R(p,n^2)} < (n^2+1)^n$$

This violates Lemma 1 as soon as $n \ge 7$. (The floors were put in a bit hastily.)

-

@Dror, that's exactly what I meant in my comment above: the only range to check by hand is $n<7$. I know this because that's one of the problems for my NT class, not the best one however... – Wadim Zudilin Jan 15 2011 at 9:35
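An empirical companion to the argument (a sketch; it uses `sympy` for primality, and the upper bound checked is arbitrary): verify directly that every $n \ge 2$ up to some bound has a prime in $(n, n^2)$, which in particular covers the hand-checked base cases $n < 7$.

```python
from sympy import nextprime

# nextprime(n) is the smallest prime > n; the claim is that it is already < n^2.
for n in range(2, 1000):
    p = nextprime(n)
    assert n < p < n * n, (n, p)
print("a prime strictly between n and n^2 for all 2 <= n < 1000")
```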
An instance of (2): the proof of the individual ergodic theorem by Garcia, using a "maximal ergodic lemma", is considerably shorter and simpler than the original one of Birkhoff.

-
http://unapologetic.wordpress.com/2011/07/13/pulling-back-forms/?like=1&source=post_flair&_wpnonce=c8d80d4293
# The Unapologetic Mathematician

## Pulling Back Forms

We've just seen that smooth real-valued functions are differential forms with grade zero. We also know that functions pull back along smooth maps; if $g\in\mathcal{O}_NV$ is a smooth function on an open subset $V\subseteq N$ and if $f:M\to N$ is a smooth map, then $g\circ f:f^{-1}(V)\to\mathbb{R}$ is a smooth function: $g\circ f\in\mathcal{O}_{f^{-1}(V)}M$.

It turns out that all $k$-forms pull back in a similar way. But the "value" of a $k$-form doesn't only depend on a point, but on $k$ vectors at that point. Functions pull back because smooth maps push points forward. It turns out that vectors push forward as well, by the derivative. And so we can define the pullback of a $k$-form $\alpha$:

$\displaystyle \left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)=\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))$

Here $\alpha$ is a $k$-form on a region $V\subseteq N$, $p$ is a point in $f^{-1}(V)\subseteq M$, and the $v_i$ are $k$ vectors in $\mathcal{T}_pM$. Since the differential $f_{*p}:\mathcal{T}_pM\to\mathcal{T}_{f(p)}N$ is a linear function and $\alpha(f(p))$ is a multilinear function on $\mathcal{T}_{f(p)}N^{\otimes k}$, $\left[f^*\alpha\right](p)$ is a multilinear function on $\mathcal{T}_pM^{\otimes k}$, as asserted.

This pullback $f^*:\Omega_N(V)\to\Omega_M(f^{-1}(V))$ is a homomorphism of graded algebras. Since it sends $k$-forms to $k$-forms, it has degree zero. To show that it's a homomorphism, we must verify that it preserves addition, scalar multiplication by functions, and exterior multiplication. If $\alpha$ and $\beta$ are $k$-forms in $\Omega_N(V)$, we can check

$\displaystyle\begin{aligned}\left[\left[f^*(\alpha+\beta)\right](p)\right](v_1,\dots,v_k)&=\left[[\alpha+\beta](f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\alpha(f(p))+\beta(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))+\left[\beta(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)+\left[\left[f^*\beta\right](p)\right](v_1,\dots,v_k)\end{aligned}$

so $f^*(\alpha+\beta)=f^*\alpha+f^*\beta$. Also if $g\in\mathcal{O}(V)$ we can check

$\displaystyle\begin{aligned}\left[\left[f^*(g\alpha)\right](p)\right](v_1,\dots,v_k)&=\left[[g\alpha](f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[g(f(p))\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=g(f(p))\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[f^*g\right](p)\left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)\end{aligned}$

As for exterior multiplication, we will use the fact that we can write any $k$-form $\alpha$ as a linear combination of $k$-fold products of $1$-forms. Thus we only have to check that

$\displaystyle\begin{aligned}\left[f^*(\alpha^1\wedge\dots\wedge\alpha^k)\right](v_1,\dots,v_k)&=\left[(\alpha^1\wedge\dots\wedge\alpha^k)\circ f\right](f_{*p}v_1,\dots,f_{*p}v_k)\\&=\det\left(\left[\alpha^i\circ f\right](f_{*p}v_j)\right)\\&=\det\left(\left[f^*\alpha^i\right](v_j)\right)\\&=\left[(f^*\alpha^1)\wedge\dots\wedge(f^*\alpha^k)\right](v_1,\dots,v_k)\end{aligned}$

Thus $f^*$ preserves the wedge product as well, and so gives us a degree-zero homomorphism of the exterior algebras.

Posted by John Armstrong | Differential Topology, Topology
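As a concrete sketch of the definition (the map and all names here are my own choices, not the post's): on a top-degree form the pullback multiplies by the Jacobian determinant of the map, so pulling the area form $dx\wedge dy$ back along the polar-coordinates map $f(r,\theta)=(r\cos\theta, r\sin\theta)$ should recover the familiar $r\,dr\wedge d\theta$.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# f : (r, theta) |-> (x, y) = (r cos theta, r sin theta)
f = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# In coordinates the differential f_* is the Jacobian matrix, and
# f^*(dx ^ dy) = det(Jf) dr ^ dtheta.
J = f.jacobian([r, theta])
print(sp.simplify(J.det()))  # r, i.e. f^*(dx ^ dy) = r dr ^ dtheta
```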
http://physics.stackexchange.com/questions/16197/at-what-size-will-self-gravitation-contribute-more-to-stability-than-surface-ten
# At what size will self-gravitation contribute more to stability than surface tension?

The governments of Earth have embarked on an experiment to place a massive ball of water in orbit. (umm... special water that doesn't freeze)

Imagine this to be a fluid with a given density, $\rho$ ($kg/m^3$), surface tension, $\sigma$ ($J/m^2$), and formed in a sphere of radius $R$ ($m$). I think that the viscosity $\mu$ is not needed for this question, but correct me if I'm wrong.

At what size will the restorative forces from gravity (after some small perturbation) become more significant than those from surface tension? Would the type of perturbation make a difference?

Just for fun, here's a video of a ball of water stabilized by surface tension.

-

@Georg doing so only gives me greater confidence that I used it correctly, so you'll need to be more specific if your intent was to show that something is amiss. – AlanSE Oct 26 '11 at 12:44

That use of afford was somewhat unusual to me. Is this homework? Another question: what is Your idea of "stability" in this context? Surface tension leads to an internal pressure of the same magnitude throughout, whereas self-gravity leads to a pressure which is zero at the surface and maximum in the center of the sphere. How would You compare both? Self-gravity/surface tension of a water blob was a topic here some time ago. – Georg Oct 26 '11 at 15:35

@Georg I can't find a prior question that fits this. I've asked various things about self-gravitation and 1/r^2 integrals before, but in no relation to surface tension. This isn't homework. I mean, you could give it as homework, I don't know how to solve it for one. Without viscous friction, both surface tension and self-gravitation should lead to oscillations in response to a disturbance. Formalizing the criteria for when it oscillates versus tears apart is something I also don't know how to do. – AlanSE Oct 26 '11 at 15:48

BTW--until your blob gets very big indeed you'll need special water that doesn't boil as well: water is not stable as a liquid at very low pressures. – dmckee♦ Nov 28 '11 at 1:01

## 2 Answers

Let us do a quick estimation. Let $R_{cr}$ be the critical radius of the ball, so that the condition of stability for the ball is expressed as $$R<R_{cr}$$

What can $R_{cr}$ depend on? The most important properties are inertia, gravity and surface tension, which are characterized by the density $\rho$, the gravitational constant $G$ and the surface tension coefficient $\sigma$. So $R_{cr}$ can be a function only of the mentioned parameters: $$R_{cr}=f(\rho , \sigma , G)$$

By dimensional analysis, the dimension of $R_{cr}$ is meters. The combination $\left (\frac{\sigma}{\rho^2 G}\right)^{1/3}$ also has the dimension of meters. Therefore we can write: $$R_{cr}=C\left(\frac{\sigma}{\rho^2 G}\right)^{1/3}$$ where $C$ is a dimensionless constant of order of magnitude close to 1.

For water, $\sigma=0.07\frac{J}{m^2}$, $\rho=10^3\frac{kg}{m^3}$ and also $G=6.67\cdot10^{-11}\frac{Nm^2}{kg^2}$. So, a rough estimation: $$R_{cr}\approx\left(\frac{\sigma}{\rho^2 G}\right)^{1/3}=10m$$

-

This might be a "realistic" size for a swimming pool at the International Space Station. :=) – Georg Oct 27 '11 at 9:53

The fact that the answer seems to be "about a swimming pool" makes this question far more entertaining than what I anticipated. – AlanSE Oct 27 '11 at 16:54

The answer by Martin is good, but I still want to continue along the thought path I had in mind.
I hope I can give a different physical basis for confirmation of the number. I'm sure there are better ways to do this, but I want to do it using only the information I have.

I want to consider the transfer of some amount of mass ($m$ for now) from near the surface of the sphere to the inside of the sphere (fully integrating it). In both cases we can assess some amount of energy difference between the spherical blob of mass $M+m$ and the state of the sphere with mass $M$ with the $m$ mass hovering just above the surface. So the two states under consideration are:

• State 1: (M+m) big ball
• State 2: (M) ball next to (m) ball

I have no problem assuming $m \ll M$. Now, I want to write expressions for both the gravitational binding energy and the surface binding energy of a ball. I'll do this for a generic sphere with a mass of $M$ and uniform density. $$E_g(M) = - \frac{3 G M^2}{5 R(M)}$$ $$E_s(M) = - 4 \pi R(M)^2 \sigma$$

I'm leaving it in this form because we'll all agree that given the mass and the density, finding $R(M)$ isn't a problem. Now I want to write expressions for the difference in energy from state 2 to state 1. This is straightforward for the surface tension energy because the bodies are non-interacting. However, for the gravitational binding energy, there is still a binding energy between the large ball and the small ball that must be included. Keep in mind that state 1 is the lower energy state. $$\Delta E_s = E_s(M) + E_s(m) - E_s(M+m)$$ $$\Delta E_g = E_g(M) + E_g(m) - \frac{G M m}{R(M)} - E_g(M+m)$$

Now, obviously, the idea would be to set these equal, assume that $m$ is small, and then find the $M$ that solves that equation. But that doesn't work! I think I have a major conceptual flaw in this approach, where the scaling of the surface area of the small $m$ blob just doesn't follow a scaling that works. I didn't know what to do, so I just removed the $E_s(m)$, abandoning all logical reasoning behind my work. But when I did this and used Martin's values, I obtained the following: $$M=1.05 \times 10^6 kg$$

This was assuming $m=0.1 kg$. If I change that value it doesn't change $M$ very much, which is encouraging. This is a satisfying answer for me, because Martin's answer comes out to around 0.5 million kg.

-
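Plugging the numbers in (a minimal sketch; it takes $C=1$, as in Martin's estimate, and computes the mass of a water ball of radius $R_{cr}$):

```python
import math

sigma = 0.07   # J/m^2, surface tension of water
rho = 1.0e3    # kg/m^3, density of water
G = 6.67e-11   # N m^2 / kg^2

R_cr = (sigma / (rho**2 * G)) ** (1.0 / 3.0)
M_cr = rho * (4.0 / 3.0) * math.pi * R_cr**3
print(R_cr)  # about 10 m
print(M_cr)  # about 4.4e6 kg
```

Taken as a radius, the 10 m figure corresponds to roughly $4\times10^6$ kg; read as a diameter instead, it gives roughly $0.5\times10^6$ kg, which may be where the "0.5 million kg" quoted above comes from.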
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9604685306549072, "perplexity_flag": "head"}
http://cms.math.ca/Events/winter12/res/eg
2012 CMS Winter Meeting
Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012

Epidemiology - Genomics
Org: Erica Moodie and David Stephens (McGill)

KATIA CHARLAND, McGill University and Children's Hospital Boston
An application of gravity models to identify factors related to community pandemic influenza A/H1N1 vaccine coverage

Nineteen vaccination clinics were established in Montreal, Canada, as part of the 2009 A/H1N1p mass vaccination campaign. Though approximately 50 percent of the population was vaccinated (e.g. compared to 8 percent in France), there was considerable geographic variation in vaccine coverage. Analysis of the geographic variation in healthcare utilization could potentially reveal underlying barriers of access to healthcare services. In this talk I discuss an application of gravity models to identify characteristics of the communities and the mass vaccination clinics that were associated with vaccine uptake. Gravity models are well suited to this problem as they examine the features of origin and destination that affect traffic, or flow, from origin to destination. Typically, the shorter the distance between origin and destination and the larger the mass of the origin/destination (e.g. population size/clinic capacity), the greater the ‘gravitational pull’ or flow between origin and destination. We identified several factors associated with rates of vaccination. For example, communities in which only a small proportion of the population spoke English or French tended to have low vaccine coverage. Clinics placed in materially deprived neighbourhoods with high residential density and high violent crime rates did not perform well, in general.

JAMES DAI, Fred Hutchinson Cancer Research Center
Adherence, ARV drug concentration and estimation of PrEP efficacy for HIV prevention

Assays to detect antiretroviral drug levels in study participants are increasingly popular in PrEP trials as they provide an objective measure of adherence. Current correlation analyses of drug concentration data are prone to bias because the comparisons are not protected by randomization. In this talk, I will discuss the causal estimand of prevention efficacy among drug compliers, those who would have had a level of drug concentration had they been assigned to the drug arm. Both dichotomous drug detection status and continuous drug concentration measure are considered. The identifiability of the causal estimand is facilitated by either exploiting the exclusion restriction that drug noncompliers do not acquire any prevention benefit, or imputing drug measure by correlates of adherence. For the former approach, we develop sensitivity analysis that relaxes the exclusion restriction. For the latter approach, we study the performance of regression calibration. Applications to published data from existing PrEP trials suggest high efficacy estimates among drug compliers. In summary, the proposed inferential method provides an unbiased assessment of PrEP efficacy among drug compliers, thus adding to the primary intent-to-treat analysis.

RAPHAEL FONTENEAU, University of Liège (Belgium) / Inria Lille - Nord Europe (France)
Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories

Batch mode reinforcement learning (BMRL) is a field of research which focuses on the inference of high-performance control policies when the only information on the control problem is gathered in a set of trajectories.
Such situations occur for instance in the case of clinical trials, for which data are collected in the form of batch time series of clinical indicators. When the (state, decision) spaces are large or continuous, most of the techniques proposed in the literature for solving BMRL problems combine value or policy iteration schemes from the Dynamic Programming (DP) theory with function approximators representing (state-action) value functions. While successful in many studies, the use of function approximators for solving BMRL problems also has drawbacks. In particular, the use of function approximators makes performance guarantees difficult to obtain, and does not systematically take advantage of optimal trajectories. In this talk, I will present a new line of research for solving BMRL problems based on the synthesis of "artificial trajectories", which opens avenues for designing new BMRL algorithms. In particular, it avoids the two above-mentioned drawbacks of the use of function approximators.

BRIAN INGALLS, University of Waterloo
Sensitivity Trade-offs in Systems Biology

The stabilizing effect of negative feedback is key to biological self-regulation (homeostasis). Feedback allows a system to maintain its preferred behaviour in an unpredictable environment. The prevailing wisdom is that negative feedback typically stabilizes a system (making it less sensitive to external perturbations), while positive feedback is destabilizing (i.e. it increases sensitivity). Of course, negative feedback can also generate instability, for example in producing oscillations. However, even when acting to improve a system’s robustness, negative feedback typically redistributes sensitivity within a network, rather than directly reducing it. In some cases, this redistribution is governed by an explicit constraint: a conservation of sensitivity. This talk will introduce sensitivity conservation statements commonly used in control engineering and molecular systems biology, and introduce a unifying formulation.

MICHAEL MACKEY, McGill University
Molecular distributions in gene regulatory networks

Extending the work of Friedman et al. (2006), we study the stationary density of the distribution of molecular constituents in the presence of noise arising from either bursting transcription or translation, or noise in degradation rates. We examine both the global stability of the stationary density as well as its bifurcation structure. We have compared our results with an analysis of the same model systems (either inducible or repressible operons) in the absence of any stochastic effects, and shown the correspondence between behaviour in the deterministic system and the stochastic analogs. We have identified key dimensionless parameters that control the appearance of one or two stable steady states in the deterministic case, or unimodal and bimodal densities in the stochastic systems, and detailed the analytic requirements for the occurrence of different behaviours. This approach provides, in some situations, an alternative to computationally intensive stochastic simulations. Our results indicate that, within the context of the simple models we have examined, bursting and degradation noise cannot be distinguished analytically when present alone.
DAVID MCMILLEN, University of Toronto Mississauga
Design methods and constraints for biological integral control

Synthetic biology includes an effort to use design-based approaches to create novel "controllers," biological systems aimed at regulating the output of other biological processes. The design of such controllers can be guided by results from control theory, including the strategy of integral feedback control, which is central to regulation, sensory adaptation, and long-term robustness. Realization of integral control in a synthetic network is an attractive prospect, but the nature of biochemical networks can make the implementation of even basic control structures challenging. Here we present a study of the general challenges and important constraints that will arise in efforts to engineer biological integral feedback controllers, or to analyze existing natural systems. Constraints arise from the need to identify target output values that the combined process plus controller system can reach, and to ensure that the controller implements a good approximation of integral feedback control. These constraints depend on mild assumptions about the shape of input-output relationships in the biological components, and thus will apply to a variety of biochemical systems. We summarize our results as a set of variable constraints intended to provide guidance for the design or analysis of a working biological integral feedback controller.

THEODORE PERKINS, Ottawa Hospital Research Institute
What Do Molecules Do When We're Not Looking? State Sequence Analysis for Stochastic Chemical Systems

Many biomolecular systems depend on orderly sequences of chemical transformations or reactions to carry out their functions. Yet, the dynamics of single molecules or small-copy-number molecular systems are significantly stochastic. I will describe State Sequence Analysis, a new approach for predicting or visualizing the behaviour of stochastic molecular systems by computing maximum probability state sequences based on initial conditions or boundary conditions. I demonstrate this approach by analyzing the acquisition of drug-resistance mutations in the HIV genome, which depends on rare events occurring on the timescale of years, and the stochastic opening and closing behaviour of a single sodium ion channel, which occurs on the timescale of milliseconds. In both cases, the approach yields novel insights into the stochastic dynamical behaviour of these systems, including insights that are not correctly reproduced in standard time-discretization or stochastic-simulation approaches to trajectory analysis.

JANET RABOUD, University of Toronto
Left truncation in the context of competing risks

An analysis involving left truncation in the context of competing risks will be presented. The goal of the analysis is to estimate the effect of co-infection with hepatitis C on the risk of developing cardiovascular (CVD) disease among HIV infected individuals. Time is measured from the date of initiation of antiretroviral therapy. Non-CVD deaths are a competing risk to the event of interest. Left truncation occurs due to the fact that antiretroviral therapy may have been initiated before enrolment into the cohort. While left truncation of the competing risk, non-CVD death, is complete, some information is available on CVD events prior to enrolment into the study through the collection of medical histories.
The degree of completeness of this data varies by calendar time and site in this multi-site cohort study. The analysis is further complicated by the desire to model hepatitis C with a time-dependent variable, since infection may clear spontaneously or with treatment. Results of this work-in-progress analysis will be presented, as well as plans for further work.

MARC ROUSSEL, University of Lethbridge
Stochastic effects in gene transcription

Gene transcription is typically the major source of noise in gene expression. We have developed models of gene transcription for both prokaryotes and eukaryotes. These models allow us to examine the effects of the kinetics of various elementary reaction steps on the overall statistical behavior of transcription, and in particular on the distribution of transcription times. Here we review a few results obtained from our models, emphasizing how these results impact large-scale gene network modeling.

ANDREW RUTENBERG, Dalhousie University
Stochastic Models of Plastic Development in Cyanobacterial Filaments

When deprived of fixed nitrogen, filamentous cyanobacteria differentiate nitrogen-fixing heterocyst cells in a regular pattern. By including uniform cellular fixed-nitrogen storage in a filamentous model of nitrogen dynamics, growth, and heterocyst differentiation we can explain the stochastic timing of heterocyst commitment. Stochasticity arises mostly from the natural population structure of cell lengths in the filament. Later events in heterocyst differentiation were consistent with deterministic heterocyst development following commitment. Our computational model has qualitatively reproduced many of the measurements associated with heterocyst differentiation, including the initial and steady state heterocyst patterns.

MARK VAN DER LAAN, UC Berkeley
Statistical Methods for Causal Inference In HIV Research

In this talk I will review some of the statistical challenges we encountered in our collaborations in HIV research. One collaboration was concerned with the assessment of the effects of mutations in the HIV virus on drug resistance, involving interval censored time to event outcomes, confounding by the patient history and other mutations. In another collaboration we are concerned with estimation of subgroup-specific causal effects of treatment on time to death and viral failure based on a randomized controlled trial in which subjects are lost to follow-up in response to time-dependent markers. We have also worked on estimation of individualized rules for when to switch a drug regimen and when to start treatment for HIV infected patients based on a variety of observational studies. Currently, we are involved in designing an RCT for comparing a "treat early" intervention with the current standard w.r.t. HIV prevention at the community level, and in determining optimal rules for triggering HIV testing based on observing the history of subjects including their adherence profile. We will demonstrate that we employed a general roadmap for targeted learning of causal effects, involving the most recent advances in modeling, estimation, and inference.

YONGLING XIAO, Institut national d'excellence en santé et en services sociaux (INESSS)
Flexible marginal structural models for estimating cumulative effect of time-varying treatment on the hazard

Many longitudinal studies deal with both time-dependent (TD) exposures or treatments and TD confounders.
When a TD confounder acts also as a mediating variable, marginal structural Cox models (Cox MSM) can be used to consistently estimate the causal effect of the TD exposure. On the other hand, modeling of the effect of a TD exposure requires specifying the relationship between the hazard at time t and the entire past exposure history, up to time t. Flexible modeling of the weighted cumulative exposure (WCE) has been proposed to address this challenge. However, the existing WCE models do not permit accurate adjustment for TD confounding/mediating variables, while the existing MSMs do not incorporate flexible estimation of the cumulative effects of TD exposures. In this study, we propose a flexible marginal structural Cox model with weighted cumulative exposure modeling (WCE MSM), which combines the Cox MSM and WCE approaches, thereby simultaneously addressing the two aforementioned analytical challenges. Specifically, by controlling for confounders using the inverse-probability-of-treatment weights and estimating the WCE with cubic regression splines, the new WCE MSM can estimate the total causal treatment effect, which accounts for both direct cumulative effects of past treatments and their 'indirect effects', mediated by the TD mediators. Simulation results confirm that the proposed WCE MSM yields accurate estimates of the causal treatment effect under settings of complex exposure effects and time-varying confounding. The new method was applied to the Swiss HIV Cohort Study data to reassess the association between the antiretroviral treatment abacavir (ABC) and cardiovascular risk.

JESSICA YOUNG, Harvard School of Public Health
Simulation from a known Cox MSM using standard parametric models for the g-formula

Inverse probability (IP) weighted estimation of Cox Marginal Structural Models (MSMs) is now a popular approach for estimating the effects of time-varying antiretroviral therapy regimes on survival in HIV-infected patients. Unlike standard estimates, IP weighted estimates of the parameters of a correctly specified Cox MSM may remain unbiased for the causal effect of following one regime over another in the presence of a time-varying confounder affected by prior treatment (e.g. CD4 cell count). A standard estimate might be a likelihood-based estimate of the parameters on treatment in a time-dependent Cox model for the observed failure hazard at each time conditional on past measured treatment and confounders. Previously proposed methods for simulating data according to a known Cox MSM are useful for studying the performance of an IPW estimator as they involve explicit knowledge of quantities required for unbiased IPW estimation. These approaches are limited, however, for studying bias in a standard estimate due only to the presence of time-varying confounders affected by prior treatment, as they lack explicit knowledge of the observed conditional failure hazard. Here an alternative approach to Cox MSM data generation is considered that addresses this problem by generating data from a standard parametrization of the observed data distribution. In this case, the true Cox MSM parameters may be derived by the relation between a Cox MSM and the g-formula. This talk will review this relationship in general and work through an example. This approach has limitations including those implied by the g-null paradox theorem.
DANIEL ZENKLUSEN, Université de Montréal
Imaging Single Transcripts Resolves Kinetics of Gene Expression Processes

Many cellular processes involve a small number of molecules and undergo stochastic fluctuations in their levels of activity; consequently, biological processes are probably not executed with the precision often assumed. Hence, expression levels of proteins and mRNAs can vary considerably over time and between individual cells, considerably limiting the value of experimental datasets acquired through ensemble measurements that rely on pooling thousands of cells, as these datasets will only reflect an average behavior of a particular process. This underlines that cells should not be studied as ‘the average cell’ but as individual entities. Truly understanding ‘cellular biochemistry’ therefore requires the ability to study the behavior of individual cells and, ideally, individual molecules, as only this results in datasets that represent the whole range of possible scenarios that can occur within a cell. We will summarize recent advances in single molecule RNA imaging approaches that have facilitated single molecule studies in cells and illustrate how we apply these techniques to determine how cells execute different processes along the gene expression pathway, in particular transcription.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9103649854660034, "perplexity_flag": "middle"}
http://meta.math.stackexchange.com/questions/8424/questions-that-ask-to-verify-solution-to-problem
# Questions that Ask to Verify Solution to Problem

Is it okay to ask questions where you give the solution and ask people to review it to see if it is correct?

## 3 Answers

Why not? This question of mine, "How do I prove that $\det A= \det A^t$?", led to an interesting discussion. I could not only verify my solution but I could also grab a few interesting ideas.

- This is definitely OK for me. After all, we want the askers to show their own efforts they have put into the problem so far. If it turns out that their effort was good enough to be a complete solution, even better, as they show they did not simply lose hope right in the beginning. One might argue that then they should rather ask the original problem question and give their own complete efforts as a self-answer, but apparently they are not really sure that their attempt is a valid answer, so this cannot be recommended, just in case their own attempt is wrong or lousy. Another idea might be to transform

Question: "I am asked to solve Q. My answer attempt is A, but I am not sure. Is it correct? -- Noob"
Answer: "Yes, that's fine. --Expert"

ultimately to

Question: "How to solve Q? -- Noob"
Accepted Answer: "A. --Noob"

but I wouldn't prefer that (even though "Expert" might not be harmed too much by the reputation loss for a simple "Yes"). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9761263132095337, "perplexity_flag": "middle"}
http://en.m.wiktionary.org/wiki/covering_number
# covering number

## English

### Noun

covering number

1. (graph theory) The number of vertices in a minimum vertex cover of a graph, often denoted as $\tau = \tau(G)$.
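As an illustration of the definition (the code and the example graph below are additions for this page, not part of the entry), the covering number of a small graph can be computed by brute force, trying vertex subsets in increasing size:

```python
from itertools import combinations

def covering_number(vertices, edges):
    """Size of a minimum vertex cover: the smallest set of vertices
    touching every edge (brute force, fine for small graphs)."""
    for size in range(len(vertices) + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            if all(u in s or v in s for (u, v) in edges):
                return size

# A 4-cycle: tau(C4) = 2, e.g. two opposite vertices cover all edges
print(covering_number([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```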
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930986762046814, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/engineering?sort=votes&pagesize=15
# Tagged Questions

The engineering tag has no wiki summary.

1answer 2k views
### How does a digital radio tuner work?
I believe I understand how tuning a radio with an analog tuner works: turning the dial physically changes the length of the antenna, which determines which broadcast wavelength will resonate in the ...

4answers 773 views
### Is the EmDrive, or “Relativity Drive”, possible?
In 2006, New Scientist magazine published an article titled Relativity drive: The end of wings and wheels [1] about the EmDrive [Wikipedia] which stirred up a fair degree of controversy and some ...

2answers 109 views
### Would it be possible for geophysicists/geoengineers to develop an artificial way of trapping carbon in the ocean?
There's a mechanism by which the southern ocean sequesters carbon from the atmosphere. It happens when strong winds displace a large slab of surface water, accumulating in a specific region where the ...

1answer 222 views
### Water flushed down, water pumped up (in buildings)
I live in a tall building (20 floors) on a mountain. Because the water pressure from the water company is not enough, there is a water pump at the last floor which is activated each time someone is ...

2answers 6k views
### Hollow Tube Stronger than Solid bar of same Outside Diameter (O.D.)?
I was listening to some co-workers talking about problems meeting stiffness requirements. Someone said that even with a solid metal rod (instead of the existing tube) we could not meet stiffness ...

1answer 111 views
### Why do wind power plants have just 3 blades? [duplicate]
Why do wind power plants have just 3 blades? It seems that adding more blades would increase the area that interacts with the wind and gather more energy. (Image from Wikipedia.)

2answers 149 views
### What properties would the ideal material for spacecraft construction possess?
Assuming we develop the capability to send a robot to study Gliese 518, or any of the Earth-like planets discovered in the neighbourhood; the spacecraft would need to travel through the Solar System ...

4answers 962 views
### Why do aircraft inner wings lose lift when turning?
First question here, so please be gentle! I'm reading an entry-level engineering course book and am currently up to discussion of aircraft design. There's one particular statement that is unclear to ...

5answers 351 views
### Tsunami dampening mechanisms
Encouraged by the zeitgeist let me ask the following: Is it feasible (now or in the future) to build systems a certain distance off a vulnerable coastline which can serve to dampen a tsunami before it ...

2answers 1k views
### Shape of Fan Blades
How is the shape of the blades of an air fan determined? Trial and error, or is there a theory behind it? What are they trying to maximize, volume of air displaced per rotation?

3answers 243 views
### Can anyone estimate what proportion of water remains after I flush a toilet?
Imagine I have a clean toilet with some amount of water in the bowl. When I flush the toilet much of that water will be displaced by the tank's water. I want to work out (or model really) what amount ...

1answer 398 views
### A conceptual problem with Euler-Bernoulli beam theory and Euler buckling
Euler-Bernoulli beam theory states that in static conditions the deflection $w(x)$ of a beam relative to its axis $x$ satisfies $$EI\frac{\partial^4}{\partial x^4}w(x)=q(x)\ \ \ \ (1)$$ where $E$ is ...

3answers 2k views
### How do I calculate DC motor speed for a given load?
Suppose I have a robot of a given mass, and I'm choosing between 2 different wheels and 2 different motors to put on it. For each wheel I have the diameter, and for each motor I know the stall torque ...

0answers 226 views
### Why don't Turbojet engines use the De Laval Nozzle?
Why don't Turbojet engines use the De Laval Nozzle? In fact, it seems that in a typical turbojet, the output nozzle contracts, not expands. As the temperature of exhaust gases is high -- it should ...

3answers 179 views
### How to determine whether a large container is air-tight?
In constructing a kitchen-waste digester at home, I use a 50 Litre HDPE drum. The base of the drum is holed with a plug fitted to allow drainage when necessary. The top has two openings - one for ...

3answers 198 views
### How does the shape (form; not cross-section) of a spring impact performance?
Cylindrical compression springs are everywhere and then some applications choose other forms like rectangular or unique polygonal form. What impact does the form of a compression spring have and how ...

1answer 250 views
### Tear drop shape
I heard that a tear drop shape is the most aerodynamic shape possible, or the best; is this true? If this isn't true, what is, since I need to make a fast ROV? Also, since I need to have a propeller to ...

1answer 166 views
### How to estimate the Kolmogorov length scale
My understanding of Kolmogorov scales doesn't really go beyond this poem: Big whirls have little whirls that feed on their velocity, and little whirls have lesser whirls and so on to viscosity. ...

1answer 418 views
### Air Turbulence and DIY laminar flow hoods
So for years on the mycology, plant tissue culture, and DIY laboratory websites there has been this ongoing debate on how to achieve laminar flow in a home built laminar flow hood. Flow hood link! ...

1answer 59 views
### Amount of energy to separate Gases - relationship to concentration
I want to understand the efficiencies of separating mixed gases, and for that I want to understand the thermodynamic limit case. Looking at the wikipedia page for entropy of mixing, I find the ...

4answers 748 views
### Are quantum mechanics calculations useful for engineering?
I heard it is pretty tough to get results for more than a few quantum particles. Are quantum mechanical calculations useful at all for any technology that is being sold? Or do they use ...

2answers 204 views
### Piston movements in four stroke cycle?
I was reading about a four stroke cycle. Here's what I understood: In the first stroke, the piston starts at the top and moves down. In the second stroke, the piston moves upwards. In the third ...

3answers 142 views
### Do processes $P\propto\frac{1}{V^2}$, $P\propto\frac{1}{V^3}$, $P\propto\frac{1}{V^4}$, etc., exist in the real world?
Is there any real process in which $PV^n=C$ where $P,V$ stand for pressure and volume respectively, $C$ is a constant and $n$ is a positive integer? I am familiar with Boyle's law that states that ...

2answers 109 views
### How Safe Are Heat Ray Guns?
Could a little meddling with the frequencies of the Heat Ray Gun beam result in frying crowds rather than dispersing them?

1answer 349 views
### Equations instead of psychrometric charts
I want to create a program that will accurately simulate a condenser. I want to use the data in psychrometric charts. But I cannot, and hence want to use equations that show similar data. Any idea ...

1answer 314 views
### How does holographic radar work?
I'm trying to figure out how the mechanics of holographic radar work.
AFAIK there is a continuous 3d transmission signal (a dome-shaped antenna?) But because there isn't a direction or time-domain, ...

3answers 505 views
### Statics software for structural engineering
I'm attempting to expand my knowledge of engineering software. I've found comsol and ansys for acoustics and thermodynamics/fluid dynamics (not necessarily in that order), now I'd like to see if I can ...

2answers 842 views
### Angles on swing sets
I'm building a swing set for my children. All of the designs I've seen involve building two A-frames and connecting them at the top with a crossbar/beam from which hang the swings. The A-frames are ...

0answers 89 views
### Why did increasing the Ackermann geometry in my race car make it faster in corners?
Ackermann geometry is used to account for the different radius arcs that the front tires follow when the steering wheel is turned from center. It's often expressed as a percentage: e.g. 25% Ackermann, ...

0answers 89 views
### Internal moment in the hull of a pressure vessel
This question is related to the course structural analysis. As part of our exam grade every student has been given multiple different homework assignments which we have to solve. One of the problems ...

1answer 161 views
### Compressible fluid flow through a branched pipe junction
Backdrop: Designing a dust extraction system (LEV) with a branched junction. What principles allow for the calculation of volume flow rate and pressure at the inlets if the volume flow rate and ...

3answers 5k views
### What is the difference between a moment and a couple?
In mechanical engineering, the torque due to a couple is given by $\tau = P\times d$, where $\tau$ is the resulting couple, $P$ is one of the force vectors in the couple and $d$ is the arm of the ...

3answers 367 views
### Can I study Quantum Computing or Quantum Mechanics with an Engineering background?
Sir, I am studying Electrical & Electronics Engg. now. I wish to pursue Quantum Mechanics or Quantum Computing as my research subject. Is it possible for me to do my M.Tech. & then pursue ...

3answers 2k views
### Factors affecting torque and RPM of a motor
I'm not a physics guy; not even the basic concept of a DC motor is easy for me. My question is, how do these parts of a motor affect its RPM and torque? I had my research a while ago so I filled out some, ...

3answers 1k views
### Could a real-life X-Wing fly in Earth's atmosphere?
From an aerodynamic point of view, could a full-size aircraft of X-Wing design fly in Earth's atmosphere? Assuming you were free to add control surfaces here and there, could the wings in open ...

2answers 259 views
### History of the use of the concept of phase space in engineering
Engineering textbooks constantly use the concept of 'phase space' (see e.g. http://www.cs.cmu.edu/~baraff/sigcourse/notesc.pdf). That is, they think of the state of a mechanical system as a ...

1answer 90 views
### How to interpret a negative failure rate?
In statistical engineering the "hazard rate" of a distribution is defined as: $$r(x)=\frac{f(x)}{1-F(x)}$$ where $f(x)$ and $F(x)$ are the PDF and CDF. Basically $r(x)$ is the odds that, having ...

4answers 308 views
### What is a strain gauge and how do I use one?
As the title says, I have no idea what these things are or how to get or use one. Can I receive a simple explanation or links to one? I'm a computer engineer so I have very little physics/mechanical ...
2answers 558 views
### Calculate the weight a simple plank can support
I'd like to build a simple desk; just a single plank of wood (or a few side-by-side) with solid supports on each end of the desk. What I'm trying to figure out is how thick a plank I want to use for ...

2answers 161 views
### Can someone explain the Hall effect thruster to me?
I am in high school, and am doing a major research project on Russia. Part of that is a section on the space race, and ion engines/hall effect engines have come up several times. Unfortunately, Google ...

2answers 2k views
### The most efficient Fan Blade Design
What could be the most efficient fan blade design? There are three main factors for a good fan: one is the speed at which air is circulated; second, the volume of air it can circulate; and the third is ...

2answers 1k views
### How do I find the Gain of this Transfer Function
I found the transfer function for the spring-mass-damper system to be $$G(s)=\frac{1}{ms^{2}+bs+k},$$ and now I need to find the gain of this transfer function. I know that the gain is ...

1answer 83 views
### How to find the value of the parameter $a$ in this transfer function?
I am given a transfer function of a second-order system as: $$G(s)=\frac{a}{s^{2}+4s+a}$$ and I need to find the value of the parameter $a$ that will make the damping coefficient $\zeta=0.7$. I am not ...

0answers 18 views
### The stress in copper and steel parallel compound members [closed]
Came across this question and just needed a bit of help understanding how I should go about it. I have also attempted to solve it to the best of my ability but I'm stuck. A concrete column of ...

2answers 182 views
### Fluid flow through a diverging pipe
Relevant diagram loosely based on my real problem: Description of problem: A fan creates a pressure sink that drives fluid flow through a gently diverging pipe (please note that the diagram is not ...

2answers 140 views
### Total Mechanical Advantage
How do you find the net Mechanical Advantage (MA) of two joined machines? Do you add or multiply the individual MA? Suppose I have two sets of wheel and axle connected by a fixed pulley. Each of the ...

0answers 294 views
### propeller flying physics for the layman
I'm starting a (quad?)copter build, and I can find plenty of knowledge about stabilizing the craft and things related to gyroscopy. But there's absolutely zero information on things that help me ...

0answers 93 views
### Can the volume flow through a positive displacement pump be reliably measured by tracking revolutions of the drive shaft? [closed]
The question refers to non-compressible fluids. I'm mostly thinking about monopumps (eccentric screw pumps), but this also should apply to any type of gear pumps as well, possibly also to piston ...

0answers 154 views
### Centrifugal Compressor Flow Rate
For a centrifugal compressor, as found in most turbochargers on internal combustion engines, is there a noticeable change in flow rate versus a naturally aspirated flow rate? In other words, does the ...

1answer 209 views
### How to find the value of the parameter a in this transfer function? [duplicate]
Possible Duplicate: How to find the value of the parameter $a$ in this transfer function? I am given a transfer function of a second-order system as: $$G(s)=\frac{a}{s^{2}+4s+a}$$ and I ...
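Two of the control-theory excerpts above reduce to one-line computations, sketched here for concreteness (the numeric values of $m$, $b$, $k$ below are made up): matching $G(s)=\frac{a}{s^{2}+4s+a}$ against the standard form $\frac{\omega_n^{2}}{s^{2}+2\zeta\omega_n s+\omega_n^{2}}$ gives $\omega_n=\sqrt{a}$ and $2\zeta\omega_n=4$, hence $a=(2/\zeta)^{2}$; and the steady-state (DC) gain of $G(s)=\frac{1}{ms^{2}+bs+k}$ is $G(0)=1/k$.

```python
# Damping-ratio question: compare s^2 + 4s + a with s^2 + 2*zeta*wn*s + wn^2
# => wn = sqrt(a) and 2*zeta*wn = 4, hence a = (2/zeta)**2
zeta = 0.7
a = (2.0 / zeta) ** 2
print(a)               # ~8.16

# Spring-mass-damper DC gain: G(0) = 1/k for G(s) = 1/(m s^2 + b s + k)
m, b, k = 1.0, 0.5, 4.0    # illustrative values only
print(1.0 / k)             # 0.25
```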
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271997809410095, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/15707/largest-possible-order-of-a-nilpotent-permutation-group
## Largest possible order of a nilpotent permutation group?

I'm trying to obtain a bound for the order of some finite groups, and part of it comes down to the order of a permutation group of degree $n$ that is nilpotent. I imagine these have to be much smaller than the full symmetric group, and a bound that is sub-exponential in $n$ would seem reasonable (given that permutation $p$-groups fall a long way short of having exponential order), but I haven't seen this written down anywhere. I found one reference that looks promising:

P. Palfy, Estimations for the order of various permutation groups, Contributions to general algebra, 12 (Vienna, 1999), 37-49, Heyn, Klagenfurt, 2000.

However, I can't actually find the article anywhere online. Any suggestions?

- 2 From springerlink.com/content/d6n24hgu6x180r64 , it follows that any nilpotent subgroup of $S_n$ has order less than $\sqrt{2n!}$. I can't access the article, but from the introduction, it seems this bound may be greatly improved within the paper; it talks about maximal nilpotent subgroups of $A_n$ corresponding to Sylow subgroups. – Steve D Feb 18 2010 at 14:24

## 1 Answer

The paper of Vdovin mentioned by Steve shows that the nilpotent subgroups of the symmetric groups of maximal order are either the Sylow 2-subgroups P(n) of Sym(n), or P(n-3) x Alt(3) when n = 2(2k+1)+1.

Vdovin, E. P. "Large nilpotent subgroups of finite simple groups." Algebra Log. 39 (2000), no. 5, 526-546, 630; translation in Algebra and Logic 39 (2000), no. 5, 301-312. http://www.ams.org/mathscinet-getitem?mr=1805754 http://dx.doi.org/10.1007/BF02681614

It looks at the orbits of the center of the nilpotent group, and the action of the quotient on those orbits to give a reasonable bound. Then it shows that all nilpotent subgroups of maximal order are conjugate to the types mentioned. The paper also handles alternating groups, groups of Lie type, and sporadic groups. For groups of Lie type, the maximal order nilpotent is usually a Sylow p-subgroup for p the characteristic. Sporadics are only handled briefly: the nilpotent subgroups of maximal order are always Sylows and they satisfy the same general bound as the other simple groups.

- Thanks, that sounds like exactly what I needed. – Colin Reid Feb 18 2010 at 15:21

The 2-Sylow of $S_n$ has order $2^{\lfloor n/2\rfloor+\lfloor n/4\rfloor+\cdots}\leq 2^n$, where $\lfloor x\rfloor$ is the integer part of $x$. So your conjectured exponential bound is correct. – David Speyer Feb 18 2010 at 18:32
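David Speyer's closing comment is easy to verify numerically: by Legendre's formula the order of a Sylow 2-subgroup of $\mathrm{Sym}(n)$ is $2^{e}$ with $e=\sum_{i\geq 1}\lfloor n/2^{i}\rfloor = n - s_2(n) < n$, where $s_2(n)$ is the binary digit sum of $n$. A small Python check (an editorial sketch, not part of the thread):

```python
def sylow2_exponent(n):
    """Exponent of 2 in n!, i.e. log2 of the order of a Sylow
    2-subgroup of Sym(n), via Legendre's formula."""
    e, q = 0, 2
    while q <= n:
        e += n // q
        q *= 2
    return e

for n in range(1, 11):
    e = sylow2_exponent(n)
    assert e < n                 # so |P(n)| = 2^e < 2^n, the claimed bound
    print(n, e, 2**e)            # e.g. n=4 gives 2^3 = 8, the dihedral Sylow of S_4
```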
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8934449553489685, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/280207/surjectivity-of-projective-maps-from-an-inverse-limit-to-an-element-of-the-direc?answertab=votes
# Surjectivity of projective maps from an inverse limit to an element of the direct product I'm doing some self-study and I'm stuck on a problem involving inverse/projective limits. Although this is NOT a homework problem, I'd really appreciate some hints rather than a completely worked out solution. The problem is from Dummit & Foote: I'm having a rather difficult time figuring out how to prove part (b). I think I need to establish that for every $a_{i}\in A_{i}$, there exists some $$\alpha=(\,\alpha_{1},\alpha_{2},\ldots,\alpha_{j},\ldots)\in\prod_{i\in I}A_{i}$$ such that $\alpha\in P$ and $\mu_{i}(\alpha)=a_{i}$. I'm just not really sure how to do that. - ## 1 Answer Let $a_m \in A_m$. To get what you want, you need to extend this to a sequence $(a_1, a_2, a_3, \ldots)$ such that $a_i = \mu_{j,i} (a_j)$ for all $i \le j$. Given the hypotheses, there's really only one way to proceed: by induction. If you're worried about set-theoretic details, then I will also point out that you will need the axiom of dependent choice. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662254452705383, "perplexity_flag": "head"}
http://mathoverflow.net/questions/116687?sort=votes
## Number of matrices of a given rank satisfying this condition

Let $A_1$ and $A_2$ be two arbitrary $n\times n$ matrices with entries in $Z_p$. How many $n\times n$ matrices $B$ are there so that both $A_1-B$ and $A_2-B$ are of rank $n-1$ or less? What is the formula if the rank is $n-k$ (for both $A_1-B$ and $A_2-B$)? It is okay if I know a loose bound (but not trivial, of course). It looks like they are certainly less than $n^2\cdot p^{n^2-k}$. Anything better would be really helpful. Thanks!

## 2 Answers

$\def\rk{\mathop{\rm rank}}\def\ZZ{{\mathbb Z}}$ We need to estimate the number of expansions of the form $A:=A_1-A_2=C_1-C_2$ with $\rk C_i\leq n-k$ (then $B=A_1-C_1=A_2-C_2$). Since multiplication by a non-degenerate matrix does not change anything, this depends only on $\rk A$.

Assume, for instance, that $A$ is non-degenerate. Fix two $(n-k)$-dimensional subspaces $V_1,V_2$ of the space of rows $V=\ZZ_p^n$ with $\dim V_i=n-k$; counting the bases, we obtain that there are $$N=\displaystyle \frac{(p^n-1)(p^n-p)\dots (p^n-p^{n-1})} {(p^{n-2k}-1)(p^{n-2k}-p)\dots(p^{n-2k}-p^{n-2k-1})\cdot \bigl((p^{n-k}-p^{n-2k})(p^{n-k}-p^{n-2k+1})\dots(p^{n-k}-p^{n-k-1})\bigr)^2}$$ such pairs of subspaces.

Denote by $C_i^j$ the $j$th row of $C_i$. Now let us count all the pairs $(C_1,C_2)$ such that $\mathop{\rm span} (C_i^1,\dots,C_i^n)\subseteq V_i$. Let $V'=V_1\cap V_2$. Then $C_i^j\mod V'$ is determined uniquely, hence we have $p^{n-2k}$ variants for each $C_1^j$, and $C_2$ is reconstructed from $C_1$. Thus we have $p^{n(n-2k)}$ pairs. In total, we get $N\cdot p^{n(n-2k)}$ pairs. In fact, this is an upper bound, since the rank of $C_i$ may be less than $n-k$, and in this case one pair will correspond to several pairs $(V_1,V_2)$.

You may easily obtain a bound for $N$, but it would be better to know the relation between $p$ and $n$...

If $A$ is degenerate, then this bound should increase, since we just need $V_1+V_2$ to contain the space generated by the rows of $A$. On the other hand, we may restrict ourselves to the case when $(V_1+V_2)\mod V'=\mathop{\rm span}(A^1,\dots,A^n)\mod V'$.

- Thank you Ilya! This was very helpful! -
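For very small parameters the count can also be checked by brute force. The sketch below (a hypothetical instance; entries and ranks are computed mod a prime $p$) enumerates every $B$ over $\mathbb{Z}_p$ and tests the two rank conditions; with $n=2$, $p=3$, $k=1$ the loop has only $p^{n^2}=81$ iterations.

```python
import itertools

def rank_mod_p(M, p):
    """Rank of a square matrix over Z_p (p prime), by Gaussian elimination."""
    M = [row[:] for row in M]
    n = len(M)
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c] % p != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)       # inverse mod prime p
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c] % p != 0:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def count_B(A1, A2, p, k):
    """Brute-force count of B with rank(A1-B) <= n-k and rank(A2-B) <= n-k."""
    n = len(A1)
    count = 0
    for flat in itertools.product(range(p), repeat=n * n):
        B = [list(flat[i * n:(i + 1) * n]) for i in range(n)]
        D1 = [[(a - b) % p for a, b in zip(r1, r2)] for r1, r2 in zip(A1, B)]
        D2 = [[(a - b) % p for a, b in zip(r1, r2)] for r1, r2 in zip(A2, B)]
        if rank_mod_p(D1, p) <= n - k and rank_mod_p(D2, p) <= n - k:
            count += 1
    return count

# Tiny instance: n = 2, p = 3, k = 1; here A = A1 - A2 is non-degenerate mod 3
A1 = [[1, 0], [0, 1]]
A2 = [[0, 1], [2, 0]]
print(count_B(A1, A2, 3, 1))
```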
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524049758911133, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/02/16/generalized-eigenvectors/
# The Unapologetic Mathematician

## Generalized Eigenvectors

Sorry for the delay, but exam time is upon us, or at least my college algebra class. Anyhow, we’ve established that distinct eigenvalues allow us to diagonalize a matrix, but repeated eigenvalues cause us problems. We need to generalize the concept of eigenvectors somewhat.

First of all, since an eigenspace generalizes a kernel, let’s consider a situation where we repeat the eigenvalue ${0}$:

$\displaystyle\begin{pmatrix}{0}&1\\{0}&{0}\end{pmatrix}$

This kills off the vector $\begin{pmatrix}1\\{0}\end{pmatrix}$ right away. But the vector $\begin{pmatrix}{0}\\1\end{pmatrix}$ gets sent to $\begin{pmatrix}1\\{0}\end{pmatrix}$, where it can be killed by a second application of the matrix. So while there may not be two independent eigenvectors with eigenvalue ${0}$, there can be another vector that is eventually killed off by repeated applications of the matrix.

More generally, consider a strictly upper-triangular matrix, all of whose diagonal entries are zero as well:

$\displaystyle\begin{pmatrix}{0}&&*\\&\ddots&\\{0}&&{0}\end{pmatrix}$

That is, $t_i^j=0$ for all $i\geq j$. What happens as we compose this matrix with itself? I say that for $T^2$ we’ll find the $(i,k)$ entry to be zero for all $i\geq k-1$. Indeed, we can calculate it as a sum of terms like $t_i^j t_j^k$. For each of these factors to be nonzero we need $i\leq j-1$ and $j\leq k-1$. That is, $i\leq k-2$, or else the matrix entry must be zero. Similarly, every additional factor of $T$ pushes the nonzero matrix entries one step further from the diagonal, and eventually they must fall off the upper-right corner. That is, some power of $T$ must give the zero matrix. The vectors may not have been killed by the transformation $T$, so they may not all have been in the kernel, but they will all be in the kernel of some power of $T$.

Similarly, let’s take a linear transformation $T$ and a vector $v$. If $v\in\mathrm{Ker}(T-\lambda1_V)$ we said that $v$ is an eigenvector of $T$ with eigenvalue $\lambda$. Now we’ll extend this by saying that if $v\in\mathrm{Ker}(T-\lambda1_V)^n$ for some $n$, then $v$ is a generalized eigenvector of $T$ with eigenvalue $\lambda$.

Posted by John Armstrong | Algebra, Linear Algebra
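A quick numerical illustration of the "nonzero entries march off the corner" argument (a made-up $3\times 3$ example, not from the post): each extra factor of $T$ pushes the nonzero band one step further above the diagonal, so $T^3=0$ here, and every vector lies in the kernel of some power of $T$.

```python
import numpy as np

# A strictly upper-triangular matrix: t[i][j] = 0 for i >= j
T = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

print(T @ T)       # nonzero only in the far upper-right (0, 2) entry
print(T @ T @ T)   # the zero matrix: T^3 kills every vector
```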
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9061878323554993, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/33083/what-sort-of-mass-is-explained-by-the-higgs-mechanism?answertab=votes
# What sort of “mass” is explained by the Higgs mechanism?

When I asked this question (probably in a less neutral form) to physicists, their answer was something along the lines that it's not gravity (i.e. unrelated to gravitons) but inertial mass. (So I wondered whether this is an analogous mechanism to gravitons, only that it explains inertia.) Now after some weeks of thinking (and reading) about this, I think I finally figured out what they were trying to tell me. This is related to the following comment for a similar question:

Have you made up your mind on what "mass" of a particle means to you in that question? Maybe that will help.

For me, the obvious candidates for what "mass" might mean are

• gravitational mass
• inertial mass
• rest mass

My current guess is that the Higgs mechanism explains why "other particles" (only fermions, or also other bosons?) have a non-zero rest mass. (I imagine it's some form of explanation for potential energy related to the mere presence of the particle, even in the absence of "interactions" with other particles.) However, at least some of the (popular science) explanations really seem to try to explain something related to motion and inertia, and I got the answer "inertial mass" so often that I wonder whether it's actually really the "inertial mass" (of fermions) that is "directly explained" by the Higgs mechanism (this doesn't preclude that this explanation might be "translated" into something equivalent to rest mass).

- There isn't really a difference between inertial and rest mass. The term "rest mass" is used because, if you define inertial mass to be the ratio of force to rate of change of momentum, for large speeds relativity dictates that this "mass" increase without bound. The term rest mass is used to specify that this constant is taken in the particle's rest frame. – Emilio Pisanty Jul 29 '12 at 21:55

@EmilioPisanty When I think about rest mass, I'm mostly thinking about the potential energy related to the presence of the particle, including all interactions with other particles. So my question basically assumes that the Higgs mechanism only explains the part of the rest mass which isn't explained by other interactions. Even though rest mass and inertial mass are physically closely related, there is a difference between inertial and rest mass with respect to an explanation. (But I guess that explanations in terms of inertial mass will be translatable into explanations in terms of rest mass.) – Thomas Klimpel Jul 29 '12 at 22:42

@EmilioPisanty: You said it wrong--- the ratio of force to time rate of change of momentum is 1, the ratio of force to time rate of change of velocity is the longitudinal/transverse mass. The relativistic mass is the ratio of momentum to velocity. – Ron Maimon Jul 30 '12 at 15:24

## 2 Answers

Think of the Higgs mechanism as affecting rest mass. This is the mass that a particle has when it is sitting still (you can weigh it to figure it out). Think of gravity as affecting energy. More energy = more gravitational force. So an electron that is moving very quickly has a total energy of its rest mass ($E = mc^2$) + its kinetic energy.

Consider an electron and a positron. These both have rest mass. When they collide and turn into two photons, all rest mass is gone. Energy is conserved, so the system still weighs the same at all times. But the Higgs mechanism only affects the electron and positron, not the photons.

- General relativity doesn't care about the difference between mass and energy.
In the stress-energy tensor, $T_{00}$ is the energy density and mass is just treated as energy divided by $c^2$. GR doesn't care what the Higgs mechanism does, and will work just as well above the electroweak transition where the particles (well, the vector bosons at least) are all massless. What the Higgs mechanism does is explain why everything doesn't travel at the speed of light. It's the constant of proportionality between velocity and momentum. I guess in your terms it's the inertial mass/rest mass.

- The constant of proportionality between velocity and momentum is the inertial mass in my terms. So for me, what you are hinting at is an explanation in terms of the inertial mass, and not in terms of the rest mass. An explanation in terms of the rest mass would have to be more related to the energy involved in creating/destroying the particle. I know that special relativity tells us that inertial mass and rest mass are closely related; this is what I meant by "(this doesn't preclude that this explanation might be "translated" into something equivalent to rest mass)" in my question. – Thomas Klimpel Jul 29 '12 at 22:25

Rest mass is just the inertial mass in a frame where the velocity of the particle is zero. For example it predicts the recoil when your stationary particle is hit by another particle. Your reply to Emilio's comment says "potential energy related to the presence of the particle, including all interactions with other particles" but the only such mass related interaction I can think of is gravity, and this doesn't distinguish between mass and energy. I think you are attaching a special significance to "rest mass" that doesn't exist. – John Rennie Jul 30 '12 at 6:25

I was a bit surprised to see that general relativity is used in particle physics, because I heard that we still have no theory which satisfactorily unifies QM and GR. – Thomas Klimpel Aug 2 '12 at 22:03
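A small numerical sketch of the point being made on both sides (an electron at an arbitrary $0.8c$): relativistically $p=\gamma m v$, so the "constant of proportionality between velocity and momentum" is $\gamma m$, while the rest mass is the frame-independent invariant $m=\sqrt{E^{2}-(pc)^{2}}/c^{2}$.

```python
import math

c = 299_792_458.0        # m/s
m_e = 9.109e-31          # electron rest mass, kg

v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

p = gamma * m_e * v      # momentum: the velocity-to-momentum constant is gamma*m
E = gamma * m_e * c**2   # total energy

# Recover the invariant (rest) mass from E and p; same in every frame:
m = math.sqrt(E**2 - (p * c) ** 2) / c**2
print(m / m_e)           # 1.0 up to rounding
```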
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9579377174377441, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/22838-vector-dot-cross-products.html
# Thread:

1. ## Vector Dot and Cross Products

OK I'm pretty new to Vectors and I'm missing something very basic here. Any help would be appreciated. I can recite equations on both Dot and Cross products until I am blue in the face. I can even understand where the Dot product comes from in terms of proving it using triangles and the Cosine Rule. The question I'm struggling to answer is exactly what are these products showing me and why, in layman's terms? I've spent 2 days solidly reading everything I can on this and haven't found a satisfactory answer. My best guesses are as follows:

1) For the Dot product, you are essentially applying two forces and working out a combined effect of both of them. Clearly the maximum effect is if both of these forces are in the same direction. As you move one of them away you can use triangles to work out the effective "strength" of the second force, which will be between 0 and 1 times its force magnitude. This answer can't be right because at 90 degrees the Dot Product is zero, but if you had two forces at 90 degrees of the same magnitude clearly there would be movement at 45 degrees. What have I missed here?

2) I suspect the Cross Product is showing the resultant rotation from two forces acting on each other. Is this correct? If so, this may well help explain why we use sine theta, because that would be the force that force one exerts on force two, with a rotation about the point where the vectors' tails are. How does this work with displaced vectors though, if they are NOT tail to tail?

I'm sure I'm miles out with my thoughts but I don't want anyone thinking I haven't at least had a go at this myself. I particularly want to know this because my interest is in Electromagnetics and I need to understand this before I start to move on to Div, Curl and other frightening things

2. Originally Posted by oaksoft
OK I'm pretty new to Vectors and I'm missing something very basic here. Any help would be appreciated. I can recite equations on both Dot and Cross products until I am blue in the face. I can even understand where the Dot product comes from in terms of proving it using triangles and the Cosine Rule....

Hello, I'll try to give you a very basic geometric explanation about the 2 vector products.

1. The result of the dot product of two vectors is a real number. The result of the cross product of two vectors is a vector.

2. Dot product: If you have 2 vectors $\vec a, \vec b$ then the dot product is: $\vec a \cdot \vec b = |\vec a | \cdot |\vec b| \cdot \cos(\alpha)$ Since $|\vec a |$ is the length of $\vec a$ and $|\vec b |$ is the length of $\vec b$, the product $|\vec a| \cdot \cos(\alpha)$ is the length of $\vec a$ projected perpendicularly onto $\vec b$. And since the product of 2 numbers can be represented as the area of a rectangle, the shaded rectangle represents the value of the dot product.

3. Cross product: two vectors which are non-collinear (don't have the same direction) span a plane P. The cross product of $\vec a$ and $\vec b$ gives a new vector which is perpendicular to both vectors; that means the new vector is the normal vector of the plane: $\vec n = \vec a \times \vec b$ The length of $\vec n$ is calculated by: $|\vec n|=|\vec a| \cdot |\vec b| \cdot \sin(\alpha)$ which describes the area of a parallelogram with $|\vec a|$ and $|\vec b|$ as its sides.

3. Hi. Thanks for having a go at helping but this doesn't really answer my questions.
For example, I'm still none the wiser as to WHY the cross product creates a new perpendicular vector. Another example: WHY does the dot product need you to consider the portion of one vector imposed on the other? The problem here is that dot and cross product definitions seem to be presented as something which falls out of equations. What I'm looking for however is different. I think practical non-maths examples of use may help. Anyone know of any?

4. Originally Posted by oaksoft
Hi. Thanks for having a go at helping but this doesn't really answer my questions. For example, I'm still none the wiser as to WHY the cross product creates a new perpendicular vector. Another example: WHY does the dot product need you to consider the portion of one vector imposed on the other? The problem here is that dot and cross product definitions seem to be presented as something which falls out of equations. What I'm looking for however is different. I think practical non-maths examples of use may help. Anyone know of any?

I think the best reason for why the dot and cross products are what they are is because they are useful. There are probably an infinitude of ways to define a product between two vectors. In Physics the first basic use of the dot product comes in the form of the work equation: $W = \vec{F} \cdot \vec{s}$. Work in this sense is more or less the "effort" required to accomplish a task, and it makes sense (when you think about it for a while) that the work done should depend on the size of the angle between the applied force and displacement. The use of the cosine function is pretty much a convenience, since it has the properties that are desirable. Even the concept of no work being done by an applied force that is perpendicular to a given displacement makes a certain amount of sense: the force gets nothing done.

As for the cross product, the first place a Physics student sees it is the torque equation, but I think the best argument for its use is actually for angular momentum. I won't go into details (unless you really want them) but again there are sensible features to using the sine of the angle between the vectors and (if you use some imagination) a reason for the perpendicular vector nature of the result.

I will note that the dot product between two vectors is slightly more general than the cross product because there is a very easy way to generalize the dot product to higher dimensions, but as far as I know the cross product as defined can only be used in 3D. (I believe that the problem is not that it can't be generalized, but that it can't be uniquely generalized and that there is more than one useful generalization, but please don't quote me on that.)

-Dan

5. There is a Lagrange identity that says, $|\bold{a} \times \bold{b}|^2 = |\bold{a}|^2|\bold{b}|^2 - |\bold{a}\cdot \bold{b}|^2$ Thus, (using the geometric meaning of the dot product) $|\bold{a} \times \bold{b}|^2 = |\bold{a}|^2|\bold{b}|^2 - |\bold{a}|^2|\bold{b}|^2\cos^2 \theta$ Thus, $|\bold{a} \times \bold{b}|^2 = |\bold{a}|^2|\bold{b}|^2\sin^2\theta$ Take square roots, $|\bold{a} \times \bold{b}| = |\bold{a}||\bold{b}||\sin \theta|$ This means: The norm of the cross product of two (non-zero) vectors is the area of the parallelogram they form. (Because the area of a parallelogram is found by multiplying its sides by the sine of the included angle.) Now it can be easily shown that $\bold{a} \cdot (\bold{a}\times \bold{b}) = 0 \mbox{ and }\bold{b}\cdot (\bold{a}\times \bold{b}) = 0$.
This means: The cross product is a vector perpendicular to two (non-zero) vectors. Putting it together we have: The cross product of two (non-zero) vectors is a vector perpendicular to both vectors whose length is the area of the parallelogram they form. Of course the only problem is which way do we take the cross product, i.e. up or down. But that is basically what the cross product means in geometric terms.
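As a numerical footnote to this thread (added here for verification, using standard NumPy calls, not content from the posts), the Lagrange identity, the perpendicularity of the cross product, and the "perpendicular force does no work" point can all be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

cross = np.cross(a, b)
dot = np.dot(a, b)

# Lagrange identity: |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
lhs = np.dot(cross, cross)
rhs = np.dot(a, a) * np.dot(b, b) - dot ** 2
assert np.isclose(lhs, rhs)

# The cross product is perpendicular to both of its factors.
assert np.isclose(np.dot(a, cross), 0.0)
assert np.isclose(np.dot(b, cross), 0.0)

# Work example: a force perpendicular to the displacement does no work.
F = np.array([0.0, 3.0, 0.0])   # force along y
s = np.array([2.0, 0.0, 0.0])   # displacement along x
print(np.dot(F, s))             # 0.0, since W = F . s vanishes
```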
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9616307020187378, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/17801-please-help-solving-graphing-factoring.html
# Thread:

1. ## Please Help - Solving, Graphing, and Factoring

I have 3 algebra problems that I have no idea how to work. It's the beginning of the school year and our teacher gave us a diagnostic worksheet to see what we know, but she still wants the answers to be correct. So, if you could tell me how to do each of these problems in the simplest way possible, I'd really appreciate it.

1.) Solve 3y - 2 > 5y. Graph the solution.
2.) Graph 2x + 4y < 1
3.) Factor 2x^2 - 5x - 12

2. Originally Posted by wonderwall09
1.) Solve 3y - 2 > 5y. Graph the solution.

Don't ya just hate all that nasty stuff that comes back to haunt you?

$3y - 2 > 5y$
$-2y > 2$
$\frac{-2y}{-2} < \frac{2}{-2}$ <-- Note the switch from > to < because we are multiplying/dividing both sides by a negative number!
$y < -1$

Your instructor will want one of these two methods for graphing. Both are fairly standard, though I think the second method might be used a bit more.

Method 1: To sketch this, first sketch out a number line marking 0 and a few integers around it. Just above the line you want to draw a ) over the -1 to indicate that -1 is not part of the solution, then an arrow from the ) pointing toward the negative side of the number line.

Method 2: To sketch this, first sketch out a number line marking 0 and a few integers around it. On the line, at the point y = -1 draw an open circle to indicate that -1 is not part of the solution, then an arrow from the open circle pointing toward the negative side of the number line.

Perhaps someone will be good enough to show you what the sketches look like.

-Dan

3. Originally Posted by wonderwall09
I have 3 algebra problems that I have no idea how to work. ... 2.) Graph 2x + 4y < 1 3.) Factor 2x^2 - 5x - 12

Hello, I hope this doesn't come too late ...

To #2.): The inequality describes a half plane with a straight line as upper bound. The equation of the line is: $y=-\frac{1}{2}x+\frac{1}{4}$. The bound doesn't belong to the half plane. (see attachment)

To #3.):

$2x^2-5x-12 =2\left(x^2-\frac{5}{2}x-6\right) =2\left(\left(x^2-\frac{5}{2}x+\frac{25}{16}\right) - \frac{25}{16}-6\right)$ $= 2\left(\left(x-\frac{5}{4}\right)^2 - \frac{121}{16}\right) =2\left(\left(x-\frac{5}{4}\right)^2 - \left(\frac{11}{4}\right)^2\right)$ $=2\left(x-\frac{5}{4} + \frac{11}{4}\right) \left(x-\frac{5}{4}- \frac{11}{4}\right)~=~$ $~\boxed{(2x+3)(x-4)}$
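All three answers are easy to double-check with a computer algebra system; here is a short SymPy sketch (added for verification, not part of the thread):

```python
from sympy import symbols, solve, Eq, factor, solve_univariate_inequality

x, y = symbols('x y', real=True)

# 1) 3y - 2 > 5y reduces to y < -1
print(solve_univariate_inequality(3*y - 2 > 5*y, y))

# 2) the boundary line of 2x + 4y < 1, solved for y: y = 1/4 - x/2
print(solve(Eq(2*x + 4*y, 1), y))

# 3) 2x^2 - 5x - 12 factors as (x - 4)(2x + 3)
print(factor(2*x**2 - 5*x - 12))
```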
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336010217666626, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/35139/what-is-the-possibility-of-a-railgun-assisted-orbital-launch/35680
# What is the possibility of a railgun assisted orbital launch?

Basic facts: The world's deepest mine is 2.4 miles deep. Railguns can achieve a muzzle velocity of a projectile on the order of 7.5 km/s. The Earth's escape velocity is 11.2 km/s.

It seems to me that a railgun style launch device built into a deep shaft such as an abandoned mine could reasonably launch a vehicle into space. I have not run the calculations and I wouldn't doubt that there might be issues with high G's that limit the potential for astronauts on such a vehicle, but even still it seems like it would be cheaper to build such a launch device and place a powerplant nearby to run it than it is to build and fuel single-use rockets. So, what is the possibility of a railgun assisted orbital launch? What am I missing here? Why hasn't this concept received more attention?

- 2 Why suggest a mine? Up is not the big problem in launching stuff. Indeed to get to orbit you need more across than up. Heinlein's rather naive treatment in The Moon is a Harsh Mistress suggested using ridges on mountains that rise toward the east. – dmckee♦ Aug 29 '12 at 15:05

1 That's also a good idea. Either way, the basic concept of imparting the energy to a spacecraft while it is "on the ground" rather than having it carry fuel with it seems appealing. – AdamRedwine Aug 29 '12 at 15:10

3 Yes, but the energy required to launch an object from the ground does not depend on whether you impart that energy all at once as in this scenario, or over time as with a rocket. You are only changing from getting that energy through burning rocket fuel to getting it through a power plant. Given that in my scenario you wouldn't have to move the fuel source, it seems to me like it should be more efficient even if the initial velocity would have to be very high to compensate for energy loss with increased height and through atmospheric drag. – AdamRedwine Aug 29 '12 at 16:51

3 I have a friend who has looked into this sort of thing in some detail (and thinks it is feasible). I'll see if he has anything to offer on this question. – David Zaslavsky♦ Aug 29 '12 at 17:44

2 @Phill: That exponential formula only applies for rockets that have to hoist the fuel they will use further up. – Mike Dunlavey Aug 29 '12 at 20:04

show 2 more comments

## 2 Answers

Ok David asked me to bring the rain. Here we go. Indeed it is very feasible and very efficient to use an electromagnetic accelerator to launch something into orbit, but first a look at our alternatives:

• Space Elevator: we don't have the tech.

• Rockets: You spend most of the energy carrying the fuel, and the machinery is complicated, dangerous, and it cannot be reused (no orbital launch vehicle has been 100% reusable; SpaceShipOne is suborbital, more on the distinction in a moment). Look at the SLS that NASA is developing: the specs aren't much better than the Saturn V, and that was 50 years ago. The reason is that rocket fuel is the exact same - there is only so much energy you can squeeze out of these reactions. If there is a breakthrough in rocket fuel that is one thing, but as there has been none and there is none on the horizon, rockets as an orbital launch vehicle are a dead-end tech which we have hit the pinnacle of.

• Cannons: Acceleration by a pressure wave is limited to the speed of sound in the medium, so you cannot use any explosive, as you will be limited by this (gunpowder is around $2\text{ km/s}$; this is why battleship cannons have not increased in range over the last 100 years).
Using a different medium you can achieve up to 11 km/s using hydrogen. This is the regime of 'light gas guns' and a company wants to use this to launch things into orbit. This requires high accelerations (something ridiculous, thousands of g) which restricts you to very hardened electronics and material supply such as fuel and water.

• Maglev: Another company is planning on this (http://www.startram.com/) but if you look at their proposal it requires superconducting loops running something like 200 MA, generating a magnetic field that will destroy all communications in several states; I find this unlikely to be constructed.

• Electromagnetic accelerator (railgun): This is going to be awesome! There is no requirement of high accelerations (a railgun can operate at lower accelerations) and no limit on upper speed. The tech for this exists and there have been papers out on it; here are two of them.

Some quick distinctions: there is suborbital and orbital launch. Suborbital can achieve quite large altitudes which are well into space; sounding rockets can go up to 400 miles and space starts at 60 miles. The difference is whether you have enough tangential velocity to achieve orbit. For $1\text{ kg}$ at $200\text{ km}$ from Earth the energy to lift it to that height is $m g h \approx 2\text{ MJ}$, but the tangential velocity required to stay in orbit satisfies $m v^2 / r = G m M / r^2$, yielding $KE = 0.5 m v^2 = 0.5 G m M / r = 30\text{ MJ}$, so you need a lot more kinetic energy tangentially. To do anything useful you need to be orbital, so you don't want to aim your gun up; you want it at some gentle angle going up a mountain or something.

The papers I cited all have the railgun going up a mountain and about a mile long and launching water and cargo. That is because to achieve the $6\text{ km/s}+$ you need for orbital velocity you need to accelerate the object from a standstill over the length of your track. The shorter the track, the higher the acceleration. You will need about 100 miles of track to drop the acceleration to within the survival tolerances NASA has.

Why would you want to do this? You just need to maintain the power systems and the rails, which are on the ground, so you can have crews on it the whole time. The entire thing is reusable, and can be reused many times a day. You can also just have a standard size of object it launches, and it opens a massive market of spacecraft producers; small companies that can't pay 20 million for a launch can now afford the 500,000 for a launch. The electric cost of a railgun launch drops to about \$3/kg, which means all the money from the launch goes to maintenance and capital costs, and once the gun is paid down prices can drop dramatically. It is the only way that humanity has the tech for that can launch large quantities of objects, and in the end it is all about mass launched.

No one has considered having a long railgun that is miles long because it sounds crazy right off the bat, so most proposals are for small high-acceleration railguns as in the papers above. The issue is that this limits what they can launch, and as soon as you do that no one is very much interested. Why is a long railgun crazy? In reality it isn't: the raw materials (aluminum rails, concrete tube, flywheels, and vacuum pumps) are all known and cheap. If they could make a railroad of iron 2,000 miles long in the 1800s, why can't we do 150 miles of aluminum in the 2000s?
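The round numbers in this answer are easy to reproduce; the following sketch (added for illustration, using textbook constants, not content from the answer) checks the lift-versus-orbit energy gap and the track length a given peak acceleration implies via $v^2 = 2aL$:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth mass, kg
R = 6.371e6        # Earth radius, m
m = 1.0            # payload, kg
h = 200e3          # target altitude, m

r = R + h
pe = G * M * m * (1/R - 1/r)   # lift energy, close to m*g*h: ~1.9 MJ
ke = 0.5 * G * M * m / r       # circular-orbit kinetic energy: ~30 MJ
print(f"lift: {pe/1e6:.1f} MJ, orbital KE: {ke/1e6:.1f} MJ")

# Track length to reach speed v under constant acceleration: L = v^2 / (2a)
v = 6.0e3                      # m/s, rough orbital-injection speed
for a_g in (3, 10, 1000):      # peak acceleration in g's
    L = v**2 / (2 * a_g * 9.81)
    print(f"{a_g:>5} g -> {L/1e3:8.1f} km of track")
# Around 10 g the track comes out near 180 km, roughly the "about
# 100 miles" figure quoted in the answer; 1000 g needs only ~2 km.
```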
The question is one of money and willpower: someone needs to show that this will work, and not just write papers about it but get out there and do it, if we ever have a hope of getting off this rock as a species and not just as the 600 or so that have gone already. Also the large companies and space agencies now are not going to risk billions on a new project while there is technology which has been perfected and proven for the last 80 years that they could use. There are a lot of engineering challenges, some of which I and others have been working on in our spare time and have solved, some of which are still open problems. I and several other scientists who are finishing/have recently finished their PhDs plan on pursuing this course (jeff ross and josh at solcorporation.com; the website isn't up yet because I finished my PhD 5 days ago but it is coming).

# CONCLUSIONS

Yes it is possible: the tech is here, and it is economical and feasible to launch anything from cargo to people. It has not gotten a lot of attention because all the big boys use rockets already, and no one has proposed a railgun that can launch more than cargo. But it has caught the attention of some young scientists who are going to gun for this, so sit back and check the news in a few years.

- 1 This is an excellent answer. – Nathaniel Aug 29 '12 at 19:37

1 I have to wonder, what is the effect of air friction? It's being shot in the thickest air at what, Mach 25? Seems you need to add a nose cone and some extra speed, just to have enough speed left when you get out of the lower atmosphere. – Mike Dunlavey Aug 29 '12 at 19:56

this is way more feasible than space elevators and gets us almost the same economic benefits, approved! – lurscher Aug 29 '12 at 20:02

3 Sweet! I knew I wasn't the only one thinking about this. If you have any open jobs, let me know. ;) Heck, I'll even work for just stock options. – AdamRedwine Aug 29 '12 at 20:23

4 I'll be sure to let the internet know when this starts up - publicity is going to be quite important. As we'll be doing scale models and slowly testing things, the smaller railguns will make for great viral videos :-) Air friction is actually an incredibly difficult topic because it is like hitting a wall if the nosecone is not designed properly. A lot of drag reduction will need to be done in order to survive the launch - and you are right, you need to overshoot the target velocity, but by how much depends on just how much you can get your air drag down. – Dr. Jeffrey Berger Aug 29 '12 at 22:20

show 3 more comments

There is good research on railguns at the University of Texas at Austin, led by Ian McNab. See, e.g., I.R. McNab, "Progress on Hypervelocity Railgun Research for Launch to Space," IEEE Trans. Mag. 45: 381–388, 2009. There is a list of his publications describing his team's work. The funding comes from the US Army, as there are applications in long-range artillery. There are still research problems, such as a tendency for the rails to vaporize and the problem of the payload overheating in the air at such colossal speeds.

- 1 Nice. Maybe if I decide to go back to school, I can go to UT Austin... not sure how my fellow aggies would feel about that. :) – AdamRedwine Sep 5 '12 at 10:59

Hello John: Please add links for enabling more consistency of answers... – Ϛѓăʑɏ βµԂԃϔ Sep 22 '12 at 13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650987386703491, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/03/08/partitions-of-unity-subordinate-to-a-cover/
# The Unapologetic Mathematician

## Partitions of Unity Subordinate to a Cover

We know what a partition of unity is, but not all partitions of unity are very useful. For instance, the single function defined by $\phi(p)=1$ for all points $p\in M$ is a partition of unity all on its own — its support is $M$ itself, which is clearly a locally finite cover of $M$, and it adds up to the constant unit function. But we can’t really do anything with it.

What we need is a partition of unity subordinate to an open cover. That is, given a collection $\{U_\alpha\}$ of open sets that cover $M$, we want a partition of unity $\{\phi_\beta\}$ such that for every $\beta$ there is some $\alpha$ so that $\phi_\beta$ is supported in $U_\alpha$. In particular, we can let $\{U_\alpha\}$ be the collection of coordinate patches in a smooth atlas, so each of the functions $\phi_\beta$ “lives in” a single local coordinate system.

But do any such things exist? Remember, except for the trivial example above I haven’t actually given any examples of a smooth partition of unity at all. The example last time was differentiable, and even twice-differentiable, but not smooth. So this is a nice concept, but it might well be vacuous. Still, all is not lost: I say that given any open cover of a smooth manifold, there is a countable smooth partition of unity subordinate to that cover. In particular, given any smooth structure on a manifold we can always find a partition of unity with each function supported completely within a single coordinate patch.

The proof of this fact, however, is one of the few really annoying, fiddly, technical bits in differential geometry. It will take a few days of doing, and I fully understand if you’d rather just skip it. All you really need to know is: whenever we need a partition of unity to break global things defined over our entire manifold up into nice chunks that fit into coordinate patches, we can do it.

However, I should point this out: analytic manifolds are not nearly so forgiving. The basic (but sketchy) idea is that in order to construct our partitions of unity we’ll need to create “bump” functions sort of like the one we did last time, but ones that are smooth instead of just twice-differentiable. This means using a piecewise definition, just like last time, and at the edge of a piece we’ll have points such that in any neighborhood of that point we need two different definitions of the function. But if the function is supposed to be analytic, then the definition that works on one side should keep working on the other side, and so we can’t make the bump functions we need. This is a big reason why people stop at smooth manifolds rather than working with analytic ones, despite the fact that analytic functions are arguably “nicer”. Unfortunately, this also means that not everything we do carries over quite so easily to complex manifolds — based on complex vector spaces — which must always be analytic.

Posted by John Armstrong | Differential Topology, Topology

## 5 Comments »

1. [...] first step in finding partitions of unity subordinate to a given cover is actually to set up a nice [...] Pingback by | March 10, 2011 | Reply

2. [...] we can prove what we’ve asserted: given any open cover of a smooth manifold we can find a countable smooth partition of unity [...] Pingback by | March 14, 2011 | Reply

3. [...] an immediate application of our partitions of unity, let’s show that we can always get whatever bump functions we [...] Pingback by | March 16, 2011 | Reply

4. [...]
native orientations to cover the whole manifold. And as usual for this sort of thing, we pick a partition of unity subordinate to our atlas. That is, we have a countable, locally finite collection of functions so [...] Pingback by | August 31, 2011 | Reply

5. [...] can then find a partition of unity subordinate to this cover of . We can decompose into a (finite) [...] Pingback by | September 7, 2011 | Reply
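For reference, the standard smooth-but-not-analytic bump construction the post alludes to (this is the classical example, not spelled out in the post itself) starts from

$$f(x) = \begin{cases} e^{-1/x}, & x > 0 \\ 0, & x \le 0 \end{cases} \qquad\text{and}\qquad \psi(x) = \frac{f(x)}{f(x) + f(1-x)}.$$

Here $f$ is smooth everywhere, with all derivatives vanishing at $0$, yet it is not analytic there: its Taylor series at $0$ is identically zero while $f$ is not. The quotient $\psi$ is then a smooth function that is identically $0$ for $x \le 0$ and identically $1$ for $x \ge 1$, exactly the sort of two-piece definition the post describes, and the reason the whole construction fails in the analytic category.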
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 13, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345787167549133, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/34509/how-to-show-the-oblique-parameters-s-t-and-u-are-coefficients-of-d-6-operators
# How to show the oblique parameters S, T, and U are coefficients of d=6 operators

In Morii, Lim, Mukherjee, The Physics of the Standard Model and Beyond, 2004, ch. 8, they claim that the Peskin–Takeuchi oblique parameters S, T and U are in fact Wilson coefficients of certain dimension-6 operators. On page 212, they claim that the T parameter is described by $$O_T=(\phi^\dagger D_\mu \phi)(\phi^\dagger D^\mu \phi)-\frac{1}{3}(\phi^\dagger D_\mu D^\mu\phi)(\phi^\dagger\phi)\,,$$ and the S parameter by $$O_S=[\phi^\dagger(F_{\mu\nu}^i\sigma^i)\phi]B^{\mu\nu}\,,$$ where $\phi$ is the Higgs doublet, $F_{\mu\nu}^i$ and $\sigma^i$ are the SU(2) weak isospin field strength and sigma matrices respectively, and $B^{\mu\nu}$ is the U(1) weak hypercharge field strength.

On the next page (p. 213), problem 8.6 asks us to show that these are the operators. How do I show precisely that these higher-dimension operators give the Peskin–Takeuchi parameters?

- 1 – Mistake Ink Aug 19 '12 at 12:13

Interested to know if you were able to work it out from there? – Mistake Ink Aug 19 '12 at 22:53

Thanks for the reference; I haven't looked at it carefully enough to see if I could work it out. In any case, the paper seems to be important enough to read through. – QuantumDot Aug 20 '12 at 0:32
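A first sanity check, appended here (standard power counting in four dimensions, not part of the thread): with $[\phi] = [D_\mu] = 1$ and $[F_{\mu\nu}^i] = [B_{\mu\nu}] = 2$ in mass units, both operators indeed have dimension 6,

$$[O_S] = 2[\phi] + [F_{\mu\nu}^i] + [B_{\mu\nu}] = 1 + 1 + 2 + 2 = 6, \qquad [O_T] = 4[\phi] + 2[D_\mu] = 4 + 2 = 6,$$

where for $O_T$ each of the two terms carries four Higgs fields and two covariant derivatives. Matching their Wilson coefficients to T and S then requires expanding $\phi$ about its vacuum expectation value and reading off the corrections to the gauge-boson two-point functions, which is the substance of problem 8.6.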
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915779709815979, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/big-bang?sort=active&pagesize=15
# Tagged Questions

According to the current cosmological theories, it's the model that explains the early life of the universe, starting from a rapid expansion of hot and dense matter.

1 answer, 58 views
### Assuming space is infinite can our observable universe be an island amongst an archipelago?
According to recent measurements our observable universe is roughly 93 billion light years in diameter; also it appears (according to WMAP measurements) that spacetime is flat. Supposing space is ...

2 answers, 161 views
### Excluding big bang itself, does spacetime have a boundary?
My understanding of big bang cosmology and General Relativity is that both matter and spacetime emerged together (I'm not considering time zero where there was a singularity). Does this mean that ...

1 answer, 66 views
### Explanation for the notion that physical laws break down at the Big Bang
I've often heard the phrase "physical laws break down at the big bang". Why is this? Divide by zero? Please provide the mathematics.

3 answers, 142 views
### Can space expand with unlimited speed?
At the beginning, right after the Big Bang, the universe was the size of a coin. One millionth of a second after, the universe was the size of the Solar System (according to ...

4 answers, 1k views
### Did time exist before the creation of matter in the universe?
Does time stretch all the way back for infinity or was there a point when time appears to start in the universe? I remember reading long ago somewhere that according to one theory time began shortly ...

2 answers, 63 views
### Proportion of dark matter/energy to other matter/energy at the beginning of the universe
What will the proportion of dark matter/energy to other matter/energy be like at the moments after the beginning of the universe (standard Big Bang model)?

2 answers, 167 views
### Simplifying the explanation of a low-entropy Big Bang
The claim that the young universe was in a low-entropy state seems at odds with maximal entropy being thermal equilibrium, and the young universe being in thermal equilibrium. I've looked at some ...

1 answer, 131 views
### Early time in the Big Bang
I am not a physicist, so I would really appreciate using simple language for the explanation of my question. From what I understood, at the early Big Bang the four fundamental forces were unified to ...

2 answers, 110 views
### Is this a great flaw in big bang theory? [duplicate]
Einstein said that Time & Space cannot exist without one another. Big bang says time didn't exist before the big bang. So the primordial ball referred to in the Big Bang theory didn't have any ...

4 answers, 280 views
### Before the Big Bang
I've heard this saying before, I don't know about anyone else. It says, "Whatever was before the big bang is something physics can't explain"..! Is this saying true (accurate)?

1 answer, 103 views
### Did space and time exist before the Big Bang? [duplicate]
I accept the Big Bang theory. What I can't understand is how there can be a where or when to the Big Bang if space time did not exist prior to it. Did space and time exist prior to the Big Bang?

1 answer, 64 views
### How fast did hydrogen atoms travel when they were first formed in the early universe?
I can't seem to find any data on this, is it a known value?

1 answer, 68 views
### Which was first, energy or matter in the creation of our universe?
Was it the Big Bang or was it something else that gives us our universe in its present condition? Did it all begin with just pure energy that eventually evolved into simple atoms of matter, that ...

3 answers, 418 views
### How can a quasar be 29 billion light-years away from Earth if Big Bang happened only 13.8 billion years ago?
I was reading through the Wikipedia article on Quasars and came across the fact that the most distant Quasar is 29 Billion Light years. This is what the article exactly says: The highest redshift ...

5 answers, 3k views
### How can something happen when time does not exist?
I saw this documentary hosted by Stephen Hawking: www.youtube.com/watch?v=WQhd05ZVYWg And if I didn't get it wrong, it says that there was no time before the big bang, time was created there. So how ...

9 answers, 740 views
### Can a universe emerge from nothing?
If the Universe is flat and the total energy of the universe can be zero (we don't know if it is, but many theorists support the idea, i.e. at BB initial conditions: t = 0, V = 0, E = 0) then is it ...

1 answer, 63 views
### How do we know space is expanding when we are part of space? [duplicate]
From what I understand space itself is expanding, and the Big Bang attempts to describe this expansion at the very early stages of the universe. This is usually described in a visual way as 2 dots on ...

1 answer, 102 views
### Why does gravity exist? [closed]
I always have a doubt about gravity: why does gravity exist? Is there any scientific explanation of gravity? I know that gravity exists, I can feel it. But who is creating all these rules? ...

1 answer, 63 views
### What is the age of the universe? [closed]
As we know, at the time of the big bang, as mentioned by scientists, the universe expanded faster than the speed of light. So does it mean that at that time all the particles present travelled in the time ...

1 answer, 82 views
### Is it possible that a black hole sucks energy that is the origin of another universe's big bang?
The universe expanded from nothing, right? And black holes may be a "gate" to parallel universes; is it possible that stars that are being sucked in by black holes in our universe may be the origin of ...

1 answer, 77 views
### In the big crunch theory, when the big crunch singularity forms, can the resulting black hole decay through Hawking radiation?
I've been pondering about this and I couldn't really find the answer for this. The big crunch theory postulates that the universe will eventually stop expanding and reverse back in on itself into a ...

0 answers, 25 views
### Where is the universe expanding? [duplicate]
According to many physicists, space, time and the universe came into existence after the big bang. The universe has been expanding continuously since the big bang. So my question is where our universe is expanding ...

4 answers, 354 views
### Origins of the universe questions
If the universe is expanding, what is it expanding into? Similarly when the big bang happened, where and how did it occur? Where did the energy come from? Energy cannot be created or destroyed, does ...

1 answer, 95 views
### Particles entangled after the big bang
Is it true that the big bang caused the quantum entanglement of all the particles of the universe, so every particle is entangled to each other particle of the universe?

4 answers, 283 views
### How was the universe created?
I do not know much beyond high school Physics. Thus, I am asking this question from an almost layman's perspective: What, as per the best of our existing knowledge and widely accepted among the ...

0 answers, 43 views
### Is the speed of light the ultimate speed limit? [duplicate]
As we all know nothing can go faster than the speed of light, as mentioned by most of our pioneers in physics. But as I was listening to one of the statements of Sir Stephen Hawking, he stated that at ...

1 answer, 227 views
### Superluminal expansion of the early universe: how is this possible? Is this a postulate?
I get the expansion of the universe, the addition of discrete bits of space time between me and a distant galaxy, until very distant parts of the universe are moving relative to ...

3 answers, 1k views
### What has been proved about the big bang, and what has not?
Ok so the universe is in constant expansion, that has been proven, right? And that means that it was smaller in the past. But what's the smallest size we can be sure the universe has ever had? I ...

1 answer, 80 views
### What was ticking just after the Big Bang?
When reading about the Big Bang, I see phrases like 3 trillionths of a second after... So, what was ticking to give a time scale like this? We define time now in terms of atomic oscillations, but ...

3 answers, 1k views
### Where does matter come from?
I admit, it's been a few years since I've studied physics, but the following question came to me when I was listening to a talk by Lawrence Krauss. Is there any knowledge of from where matter that ...

0 answers, 105 views
### Did the force of gravity cause macroevolution? Did the big bang create gravity?
What role is gravity assumed to have played in the formation (starting from the big bang) of large structures of our universe, and what other important physical mechanisms ...

2 answers, 86 views
### How exactly, or what's the process, rather, of energy changing into matter?
$E=mc^2$: this is the equation by Einstein claiming energy can change to mass. This would have happened at the big bang, I assume, when electrons and protons were made to create hydrogen and some ...

1 answer, 299 views
### LHC Big Bang Temperatures
It's been claimed that the LHC's 14 TeV energy produces temperatures comparable to that which occurred very soon after the Big Bang. The well-known $E=1.5kT$ formula from classical statistical ...

0 answers, 55 views
### How can I read density fluctuation from microwaves?
The Cosmic Microwave Background Radiation shows temperature differences. The red and yellow areas are warmer. The green and blue areas are cooler. For example consider this picture of CMBR ...

3 answers, 138 views
### Energy and Matter
I was watching a show about the big bang theory. They were saying that in the beginning all that existed was energy. After the big bang that energy transformed into matter which then started forming ...

2 answers, 100 views
### Was the singularity at Big Bang perfectly uniform and if so, why did the universe lose its uniformity?
Am I right in understanding that current theory states that the Big Bang originated from a single point of singularity? If so, would this mean that this was a uniform point? If so, as the universe ...

4 answers, 1k views
### What is our location relative to the Big Bang?
Given what we know about space, time and the movement of galaxies, have we or can we determine what our position is in relation to the projected location of the Big Bang? I've read some introductory ...

1 answer, 69 views
### How to understand movement in an expanding universe?
I know that the universe is expanding equally between every pair of points, but it was a single point in its very past... so I was wondering if we could locate this center point of the universe. Now I know ...

0 answers, 183 views
### Curiosity episode with Stephen Hawking: The Big Bang
In an episode of Discovery's Curiosity with host Stephen Hawking, he claims the Big Bang event can be explained from physics alone, and does not require the intervention of a creator. 1) His ...

0 answers, 74 views
### Conservation of Energy in the Universe [duplicate]
Possible Duplicate: Is energy really conserved? Why can't energy be created or destroyed? One of the laws of the universe that dazzles me the most is the law of conservation of energy. I ...

1 answer, 356 views
### Do all the forces become one?
Were the forces of nature combined in one unifying force at the time of the Big Bang? By which symmetry is this unification governed? Is there any evidence for such unification of forces? Has ever ...

0 answers, 47 views
### What evidence is there for the Big Bang Theory? [duplicate]
Possible Duplicate: What has been proved about the big bang, and what has not? I know the expansion of the universe is the most compelling, but what else makes it so well liked and believed. ...

4 answers, 323 views
### Does the universe have a center?
If the big bang was the birth of everything, and the big bang was an event in the sense that it had a location and a time (time 0), wouldn't that mean that our universe has a center? Where was the ...

1 answer, 51 views
### Could there be a sort of “Molecular Destiny”?
Let's say we start with the Big Bang. Every bit of matter started from this event. Therefore, given EVERY variable (every particle, movements of particles, weights, times, etc) and an INFINITE amount ...

3 answers, 154 views
### Where does the light of the Big Bang come from?
I'm wondering whether the residual light of the Big Bang comes from one particular direction and what possibilities do we have to detect its position?

1 answer, 199 views
### Is it possible that the Big Bang was caused by virtual particle creation?
As far as I understand, it is understood that throughout the universe there exists, what is known as, a quantum field from which, due to its fluctuations, temporary (pairs of) virtual particles ...

0 answers, 35 views
### What's a compact scientific answer to the question “(Why there is) / (what is before) the Big Bang?” [duplicate]
Possible Duplicate: Did spacetime start with the Big bang? on causality and The Big Bang Theory Before the Big Bang What's a compact scientific answer to the question "(Why there is) / ...

3 answers, 220 views
### Do new universes form on the other side of black holes?
I have four questions about black holes and universe formations. Do new universes form on the other side of black holes? Was our own universe formed by this process? Was our big bang a black hole ...

0 answers, 401 views
### Was the Big Bang a result of a decayed white hole singularity? [closed]
From my understanding, the Big Bang is theorized to have been a result of matter ejecting from a decayed white hole space/time singularity. ...

4 answers, 518 views
### Atoms pop out of nothing/vacuum/pre-big-bang?
I saw a great documentary last night about 'nothing'. It's about vacuums, and how if you have a total vacuum atoms will pop out of nowhere! Pretty crazy stuff. Atoms literally coming out of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493211507797241, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/102025/grouping-vectors-together
## Grouping vectors together

Given $n$ unit vectors in $\mathbb{R}^n$ s.t. $0 \leq u\cdot v<1$ for all pairs of distinct vectors $u,v$. These vectors span a $d$-dimensional subspace s.t. $d< n$. We conjecture that it is possible to partition the $n$ vectors into $d$ groups such that all the vectors within the same group are pairwise non-orthogonal. It trivially holds when $d\in\{1,2,n-1,n-2\}$. However, we have not been able to show it for general $d$. Does the conjecture hold for any $d< n$? If yes, how to prove it? Any thoughts/hints would be appreciated. Thanks in advance.

- By "mutually non-orthogonal" do you mean "pairwise non-orthogonal" for the vectors in the $d$ sets, or do you mean that no vector in the set is orthogonal to ALL of the others? – Geoff Robinson Jul 12 at 10:04

I mean pairwise non-orthogonal. Sorry for the confusion. – Pawan Aurora Jul 12 at 10:08

1 Your question is very related to Borsuk's conjecture, en.wikipedia.org/wiki/Borsuk%27s_conjecture (I am sure you know it, but want to say just in case not.) – Anton Petrunin Jul 12 at 12:44

Thanks for pointing that out. I must admit that I was not aware of it. – Pawan Aurora Jul 12 at 14:19

The statement is of course true for $d=n$. It seems more elegant to include that case. – Will Sawin Jul 12 at 15:03

show 1 more comment

## 1 Answer

The Kahn–Kalai counterexample to Borsuk's conjecture is a collection of vertices of the unit cube. Think of this cube as sitting in an affine hyperplane of $\mathbb R^{n+1}$, so that the origin projects to the center of the cube. Project this cube centrally to the unit sphere. For the right choice of parameters you get a counterexample to your statement.
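One way to restate the question (an illustration appended here, not part of the thread): the groups are cliques of the non-orthogonality relation, so the conjecture asks whether the "orthogonality graph" of the vectors always admits a clique cover of size $d$ in its complement. A greedy grouping is easy to code, though greedy may of course use more groups than the optimum:

```python
import numpy as np

def greedy_nonorthogonal_groups(vectors, tol=1e-9):
    """Place each vector into the first existing group all of whose
    members it is non-orthogonal to (|u.v| > tol), opening a new group
    otherwise. A heuristic illustration only; it proves nothing about
    whether d groups always suffice."""
    groups = []
    for v in vectors:
        for g in groups:
            if all(abs(np.dot(v, u)) > tol for u in g):
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

# Example: n = 3 unit vectors spanning a d = 2 dimensional subspace,
# with pairwise dot products in [0, 1) as the question requires.
vs = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([1.0, 1.0, 0.0]) / np.sqrt(2)]
print(len(greedy_nonorthogonal_groups(vs)))  # 2 groups, matching d = 2
```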
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9007404446601868, "perplexity_flag": "middle"}
http://dictionary.sensagent.com/Tetration/en-en/
# Tetration

Not to be confused with titration.

[Figure: complex plot of the holomorphic tetration ${}^{z}e$.]
[Figure: ${}^{n}x$ for $n = 1, 2, 3, \ldots$, showing convergence to the infinite power tower $\lim_{n\rightarrow \infty} {}^{n}x$ between the two dots.]
[Figure: the infinite power tower for bases $e^{-e} \le x \le e^{1/e}$.]

In mathematics, tetration (or hyper-4) is the next hyper operator after exponentiation, and is defined as iterated exponentiation. The word was coined by Reuben Louis Goodstein, from tetra- (four) and iteration. Tetration is used for the notation of very large numbers. Shown here are examples of the first four hyper operators, with tetration as the fourth (and succession, the unary operation denoted $a' = a + 1$ taking $a$ and yielding the number after $a$, as the 0th):

1. Addition: $a + n = a\!\underbrace{''{}^{\cdots}{}'}_n$, i.e. $a$ succeeded $n$ times.
2. Multiplication: $a \times n = \underbrace{a + a + \cdots + a}_n$, i.e. $a$ added to itself, $n$ times.
3. Exponentiation: $a^n = \underbrace{a \times a \times \cdots \times a}_n$, i.e. $a$ multiplied by itself, $n$ times.
4. Tetration: ${^{n}a} = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_n$, i.e. $a$ exponentiated by itself, $n$ times.

Each operation is defined by iterating the previous one (the next operation in the sequence is pentation).
The peculiarity of tetration among these operations is that the first three (addition, multiplication and exponentiation) are generalized for complex values of $n$, while for tetration no such regular generalization is yet established, and tetration is not considered an elementary function.

Addition ($a + n$) is the most basic operation; multiplication ($an$) is also a primary operation, though for natural numbers it can be thought of as a chained addition involving $n$ numbers $a$; and exponentiation ($a^n$) can be thought of as a chained multiplication involving $n$ numbers $a$. Analogously, tetration ($^{n}a$) can be thought of as a chained power involving $n$ numbers $a$. The parameter $a$ may be called the base-parameter in the following, while the parameter $n$ may be called the height-parameter (which is integral in the first approach but may be generalized to fractional, real and complex heights, see below).

## Definition

For any positive real $a > 0$ and non-negative integer $n \ge 0$, we define $\,\! {^{n}a}$ by:

${^{n}a} := \begin{cases} 1 &\text{if }n=0 \\ a^{\left[^{(n-1)}a\right]} &\text{if }n>0 \end{cases}$

## Iterated powers

As we can see from the definition, when evaluating tetration expressed as an "exponentiation tower", the exponentiation is done at the deepest level first (in the notation, at the highest level). In other words:

$\,\!\ ^{4}2 = 2^{2^{2^2}} = 2^{\left[2^{\left(2^2\right)}\right]} = 2^{\left(2^4\right)} = 2^{16} = 65,\!536$

Note that exponentiation is not associative, so evaluating the expression in the other order will lead to a different answer:

$\,\! 2^{2^{2^2}} \ne \left[{\left(2^2\right)}^2\right]^2 = 2^{2 \cdot 2 \cdot2} = 256$

Thus, the exponential towers must be evaluated from top to bottom (or right to left). Computer programmers refer to this choice as right-associative.

When $a$ and 10 are coprime, we can compute the last $m$ decimal digits of $\,\!\ ^{n}a$ using Euler's theorem.

## Terminology

There are many terms for tetration, each of which has some logic behind it, but some have not become commonly used for one reason or another. Here is a comparison of each term with its rationale and counter-rationale.

• The term tetration, introduced by Goodstein in his 1947 paper Transfinite Ordinals in Recursive Number Theory[1] (generalizing the recursive base-representation used in Goodstein's theorem to use higher operations), has gained dominance. It was also popularized in Rudy Rucker's Infinity and the Mind.
• The term superexponentiation was published by Bromer in his paper Superexponentiation in 1987.[2] It was used earlier by Ed Nelson in his book Predicative Arithmetic, Princeton University Press, 1986.
• The term hyperpower[3] is a natural combination of hyper and power, which aptly describes tetration. The problem lies in the meaning of hyper with respect to the hyper operator hierarchy. When considering hyper operators, the term hyper refers to all ranks, and the term super refers to rank 4, or tetration. So under these considerations hyperpower is misleading, since it is only referring to tetration.
• The term power tower[4] is occasionally used, in the form "the power tower of order n" for $\underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_n$
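Before turning to related expressions, both computations above are easy to sketch in code (an illustration added here; the modular routine assumes the base stays coprime to every modulus in the totient chain, which holds for $a$ coprime to 10 and moduli that are powers of 10):

```python
from sympy import totient

def tetrate(a, n):
    """^n a by the recursive definition: ^0 a = 1, ^n a = a^(^(n-1) a).
    Right-associative: the top of the tower is evaluated first."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

def tetrate_mod(a, n, m):
    """^n a mod m via Euler's theorem: when gcd(a, m) = 1 the exponent
    can be reduced modulo totient(m), recursively down the tower."""
    if m == 1:
        return 0
    if n == 0:
        return 1
    return pow(a, tetrate_mod(a, n - 1, int(totient(m))), m)

print(tetrate(2, 4))              # 65536, as in the worked example
print(tetrate_mod(3, 3, 10**4))   # 4987, the last digits of 7,625,597,484,987
```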
Here are a few related terms:

| Form | Terminology |
|------|-------------|
| $a^{a^{\cdot^{\cdot^{a^a}}}}$ | Tetration |
| $a^{a^{\cdot^{\cdot^{a^x}}}}$ | Iterated exponentials |
| $a_1^{a_2^{\cdot^{\cdot^{a_n}}}}$ | Nested exponentials (also towers) |
| $a_1^{a_2^{a_3^{\cdot^{\cdot^\cdot}}}}$ | Infinite exponentials (also towers) |

In the first two expressions a is the base, and the number of times a appears is the height (add one for x). In the third expression, n is the height, but each of the bases is different.

Care must be taken when referring to iterated exponentials, as it is common to call expressions of this form iterated exponentiation, which is ambiguous, as this can either mean iterated powers or iterated exponentials.

## Notation

There are many different notation styles that can be used to express tetration. Some of these styles can be used for higher iterations as well (hyper-5, hyper-6, and so on).

| Name | Form | Description |
|------|------|-------------|
| Standard notation | $\,{}^{n}a$ | Used by Maurer [1901] and Goodstein [1947]; Rudy Rucker's book Infinity and the Mind popularized the notation. |
| Knuth's up-arrow notation | $a {\uparrow\uparrow} n$ | Allows extension by putting more arrows, or, even more powerfully, an indexed arrow. |
| Conway chained arrow notation | $a \rightarrow n \rightarrow 2$ | Allows extension by increasing the number 2 (equivalent with the extensions above), but also, even more powerfully, by extending the chain. |
| Ackermann function | ${}^{n}2 = \operatorname{A}(4, n - 3) + 3$ | Allows the special case $a=2$ to be written in terms of the Ackermann function. |
| Iterated exponential notation | ${}^{n}a = \exp_a^n(1)$ | Allows simple extension to iterated exponentials from initial values other than 1. |
| Hooshmand notation[5] | $\operatorname{uxp}_a n, \, a^{\frac{n}{}}$ | |
| Hyper operator notation | $a^{(4)}n, \, \operatorname{hyper}_4(a,n)$ | Allows extension by increasing the number 4; this gives the family of hyper operations. |
| ASCII notation | `a^^n` | Since the up-arrow is used identically to the caret (`^`), the tetration operator may be written as (`^^`). |

One notation above uses iterated exponential notation; in general this is defined as follows:

$\exp_a^n(x) = a^{a^{\cdot^{\cdot^{a^x}}}}$ with n "a"s.

There are not as many notations for iterated exponentials, but here are a few:

| Name | Form | Description |
|------|------|-------------|
| Standard notation | $\exp_a^n(x)$ | Euler coined the notation $\exp_a(x) = a^x$, and iteration notation $f^n(x)$ has been around about as long. |
| Knuth's up-arrow notation | $(a{\uparrow})^n(x)$ | Allows for super-powers and super-exponential function by increasing the number of arrows; used in the article on large numbers. |
| Ioannis Galidakis' notation | $\,{}^{n}(a, x)$ | Allows for large expressions in the base.[6] |
| ASCII (auxiliary) | `a^^n@x` | Based on the view that an iterated exponential is auxiliary tetration. |
| ASCII (standard) | `exp_a^n(x)` | Based on standard notation. |
| J notation | `x^^:(n-1)x` | Repeats the exponentiation. See J (programming language).[7] |

## Examples

In the following table, most values are too large to write in scientific notation, so iterated exponential notation is employed to express them in base 10. The values containing a decimal point are approximate.
| $x$ | ${}^{2}x$ | ${}^{3}x$ | ${}^{4}x$ |
|-----|-----------|-----------|-----------|
| 1 | 1 | 1 | 1 |
| 2 | 4 | 16 | 65,536 |
| 3 | 27 | 7,625,597,484,987 | $\exp_{10}^3(1.09902)$ |
| 4 | 256 | $\exp_{10}^2(2.18788)$ | $\exp_{10}^3(2.18726)$ |
| 5 | 3,125 | $\exp_{10}^2(3.33931)$ | $\exp_{10}^3(3.33928)$ |
| 6 | 46,656 | $\exp_{10}^2(4.55997)$ | $\exp_{10}^3(4.55997)$ |
| 7 | 823,543 | $\exp_{10}^2(5.84259)$ | $\exp_{10}^3(5.84259)$ |
| 8 | 16,777,216 | $\exp_{10}^2(7.18045)$ | $\exp_{10}^3(7.18045)$ |
| 9 | 387,420,489 | $\exp_{10}^2(8.56784)$ | $\exp_{10}^3(8.56784)$ |
| 10 | 10,000,000,000 | $\exp_{10}^3(1)$ | $\exp_{10}^4(1)$ |

## Extensions

Tetration can be extended to define ${^n 0}$ and other domains as well.

### Extension of domain for bases

#### Extension to base zero

The exponential $0^0$ is not consistently defined. Thus, the tetrations $\,{^{n}0}$ are not clearly defined by the formula given earlier. However, $\lim_{x\rightarrow0} {}^{n}x$ is well defined, and exists:

$\lim_{x\rightarrow0} {}^{n}x = \begin{cases} 1, & n \mbox{ even} \\ 0, & n \mbox{ odd} \end{cases}$

Thus we could consistently define ${}^{n}0 = \lim_{x\rightarrow0} {}^{n}x$. This is equivalent to defining $0^0 = 1$. Under this extension, ${}^{0}0 = 1$, so the rule ${^{0}a} = 1$ from the original definition still holds.

#### Extension to complex bases

[Figures: tetration by period; tetration by escape.]

Since complex numbers can be raised to powers, tetration can be applied to bases of the form $z = a + bi$, where $i$ is the square root of −1. For example, in ${}^{n}z$ with $z = i$, tetration is achieved by using the principal branch of the natural logarithm; using Euler's formula we get the relation:

$i^{a+bi} = e^{\frac{1}{2}{\pi i} (a+bi)} = e^{-\frac{1}{2}{\pi b}} \left(\cos{\frac{\pi a}{2}} + i \sin{\frac{\pi a}{2}}\right)$

This suggests a recursive definition for ${}^{(n+1)}i = a'+b'i$ given any ${}^{n}i = a+bi$:

$\begin{align} a' &= e^{-\frac{1}{2}{\pi b}} \cos{\frac{\pi a}{2}} \\ b' &= e^{-\frac{1}{2}{\pi b}} \sin{\frac{\pi a}{2}} \end{align}$

The following approximate values can be derived:

| ${}^{n}i$ | Approximate value |
|-----------|-------------------|
| ${}^{1}i = i$ | $i$ |
| ${}^{2}i = i^{\left({}^{1}i\right)}$ | $0.2079$ |
| ${}^{3}i = i^{\left({}^{2}i\right)}$ | $0.9472 + 0.3208i$ |
| ${}^{4}i = i^{\left({}^{3}i\right)}$ | $0.0501 + 0.6021i$ |
| ${}^{5}i = i^{\left({}^{4}i\right)}$ | $0.3872 + 0.0305i$ |
| ${}^{6}i = i^{\left({}^{5}i\right)}$ | $0.7823 + 0.5446i$ |
| ${}^{7}i = i^{\left({}^{6}i\right)}$ | $0.1426 + 0.4005i$ |
| ${}^{8}i = i^{\left({}^{7}i\right)}$ | $0.5198 + 0.1184i$ |
| ${}^{9}i = i^{\left({}^{8}i\right)}$ | $0.5686 + 0.6051i$ |

Solving the inverse relation as in the previous section yields the expected ${}^{0}i = 1$ and ${}^{(-1)}i = 0$, with negative values of n giving infinite results on the imaginary axis. Plotted in the complex plane, the entire sequence spirals to the limit $0.4383 + 0.3606i$, which could be interpreted as the value where n is infinite.

Such tetration sequences have been studied since the time of Euler but are poorly understood due to their chaotic behavior. Most published research historically has focused on the convergence of the power tower function. Current research has greatly benefited from the advent of powerful computers with fractal and symbolic mathematics software. Much of what is known about tetration comes from general knowledge of complex dynamics and specific research of the exponential map.
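The recursion for $a' + b'i$ above is easy to check numerically. Here is a short sketch (my own illustration, not part of the article) that iterates ${}^{(n+1)}i = i^{({}^{n}i)}$ using the principal branch of the logarithm and reproduces the table and the spiral limit near $0.4383 + 0.3606i$:

```python
import cmath

log_i = cmath.log(1j)              # principal branch: i*pi/2
z = 1j                             # ^1 i = i
for n in range(2, 200):
    z = cmath.exp(z * log_i)       # ^n i = i ** (^(n-1) i), principal branch
    if n <= 9:
        print(f"^{n} i  ~  {z.real:.4f} + {z.imag:.4f}i")

print(f"limit ~ {z.real:.4f} + {z.imag:.4f}i")   # ~ 0.4383 + 0.3606i
```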
### Extensions of the domain for (iteration) "heights"

#### Extension to infinite heights

[Figure: the function $\left | \frac{\mathrm{W}(-\ln{z})}{-\ln{z}} \right |$ on the complex plane, showing the infinite real power towers (black curve).]

Tetration can be extended to infinite heights (n in ${}^{n}a$). This is because for bases within a certain interval, tetration converges to a finite value as the height tends to infinity. For example, $\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdot^{\cdot^{\cdot}}}}}$ converges to 2, and can therefore be said to be equal to 2. The trend towards 2 can be seen by evaluating a small finite tower:

$\begin{align} \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{1.414}}}}} &= \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{1.63}}}} \\ &= \sqrt{2}^{\sqrt{2}^{\sqrt{2}^{1.76}}} \\ &= \sqrt{2}^{\sqrt{2}^{1.84}} \\ &= \sqrt{2}^{1.89} \\ &= 1.93 \end{align}$

In general, the infinite power tower $x^{x^{\cdot^{\cdot^{\cdot}}}}$, defined as the limit of ${}^{n}x$ as n goes to infinity, converges for $e^{-e} \le x \le e^{1/e}$, roughly the interval from 0.066 to 1.44, a result shown by Leonhard Euler. The limit, should it exist, is a positive real solution of the equation $y = x^y$. Thus, $x = y^{1/y}$. The limit defining the infinite tetration of x fails to converge for $x > e^{1/e}$ because the maximum of $y^{1/y}$ is $e^{1/e}$.

This may be extended to complex numbers z with the definition:

${}^{\infty}z = z^{z^{\cdot^{\cdot^{\cdot}}}} = \frac{\mathrm{W}(-\ln{z})}{-\ln{z}}$

where W(z) represents Lambert's W function. As the limit $y = {}^{\infty}x$ (if it exists, i.e. for $e^{-e} < x < e^{1/e}$) must satisfy $x^y = y$, we see that $x \mapsto y = {}^{\infty}x$ is (the lower branch of) the inverse function of $y \mapsto x = y^{1/y}$.

#### (Limited) extension to negative heights

In order to preserve the original rule: ${^{(k+1)}a} = a^{({^{k}a})}$ for negative values of $k$ we must use the recursive relation: ${^{k}a} = \log_a \left( {^{(k+1)}a} \right)$

Thus: ${}^{(-1)}a = \log_{a} \left( {}^{0}a \right) = \log_{a} 1 = 0$

However, smaller negative values cannot be well defined in this way because ${}^{(-2)}a = \log_{a} \left( {}^{-1}a \right) = \log_{a} 0$ which is not well defined.

Note further that for $a = 1$ any definition of $\,\! {^{(-1)}1}$ is consistent with the rule because ${^{0}1} = 1 = 1^n$ for any $\,\! n = {^{(-1)}1}$.

#### Extension to real heights

At this time there is no commonly accepted solution to the general problem of extending tetration to the real or complex values of $n$. Various approaches are mentioned below.

In general the problem is finding, for any real a > 0, a super-exponential function $\,f(x) = {}^{x}a$ over real x > -2 that satisfies

• $\,{}^{(-1)}a = 0$
• $\,{}^{0}a = 1$
• $\,{}^{x}a = a^{\left({}^{(x-1)}a\right)}$ for all real x > -1.
• A fourth requirement that is usually one of:
  • A continuity requirement (usually just that ${}^{x}a$ is continuous in both variables for $x > 0$).
  • A differentiability requirement (can be once, twice, k times, or infinitely differentiable in x).
  • A regularity requirement (implying twice differentiable in x) that: $\left( \frac{d^2}{dx^2}f(x) > 0\right)$ for all $x > 0$

The fourth requirement differs from author to author, and between approaches. There are two main approaches to extending tetration to real heights; one is based on the regularity requirement, and one is based on the differentiability requirement. These two approaches seem to be so different that they may not be reconciled, as they produce results inconsistent with each other. Fortunately, any solution that satisfies one of these in an interval of length one can be extended to a solution for all positive real numbers. When $\,{}^{x}a$ is defined for an interval of length one, the whole function easily follows for all x > -2.
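Before turning to the concrete approximations below, here is a quick numerical check of the infinite-height case above (my own sketch; it assumes SciPy, whose `lambertw` computes the principal branch by default). It compares a tall finite tower against the closed form $\mathrm{W}(-\ln z)/(-\ln z)$:

```python
import math
from scipy.special import lambertw  # principal branch by default

def tower(x, height=200):
    """Evaluate the finite power tower ^height x from the top down."""
    y = x
    for _ in range(height - 1):
        y = x ** y
    return y

x = math.sqrt(2)                     # inside the range e^-e <= x <= e^(1/e)
limit = lambertw(-math.log(x)) / -math.log(x)
print(tower(x), limit.real)          # both ~ 2.0, since 2 solves y = x^y
```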
##### Linear approximation for the extension to real heights

[Figure: $\,{}^{x}e$ using the linear approximation.]

A linear approximation (solution to the continuity requirement, approximation to the differentiability requirement) is given by:

${}^{x}a \approx \begin{cases} \log_a(^{x+1}a) & x \le -1 \\ 1 + x & -1 < x \le 0 \\ a^{\left(^{x-1}a\right)} & 0 < x \end{cases}$

hence:

| Approximation | Domain |
|---------------|--------|
| $\,{}^{x}a \approx x+1$ | $-1<x<0$ |
| $\,{}^{x}a \approx a^x$ | $0<x<1$ |
| $\,{}^{x}a \approx a^{a^{(x-1)}}$ | $1<x<2$ |

and so on. However, it is only piecewise differentiable; at integer values of x the derivative is multiplied by $\ln{a}$.

###### Examples

$\begin{align} {}^{\frac{1}{2}\pi}e &\approx 5.868...,\\ {}^{-4.3}0.5 &\approx 4.03335... \end{align}$

A main theorem in Hooshmand's paper[5] states: Let $0 <a \neq 1$. If $f:(-2,+\infty)\rightarrow \mathbb{R}$ is continuous and satisfies the conditions:

• $f(x)=a^{f(x-1)} \; \; \mbox{for all} \; \; x>-1, \; f(0)=1$,
• $f$ is differentiable on $(-1, 0)$,
• $f^\prime$ is a nondecreasing or nonincreasing function on $(-1,0),$
• $f^\prime (0^+) = (\ln a) f^\prime (0^-) \mbox{ or } f^\prime (-1^+) = f^\prime (0^-).$

then $f$ is uniquely determined through the equation

$f(x)=\exp^{[x]}_a \left(a^{(x)}\right)=\exp^{[x+1]}_a\left((x)\right) \quad \mbox{for all} \; \; x > -2$,

where $(x)=x-[x]$ denotes the fractional part of x and $\exp^{[x]}_a$ is the $[x]$-iterated function of the function $\exp_a$.

The proof is that the second through fourth conditions trivially imply that f is a linear function on [-1, 0].

The linear approximation to the natural tetration function ${}^xe$ is continuously differentiable, but its second derivative does not exist at integer values of its argument. Hooshmand derived another uniqueness theorem for it which states: If $f: (-2, +\infty)\rightarrow \mathbb{R}$ is a continuous function that satisfies:

• $f(x)=e^{f(x-1)} \; \; \mbox{for all} \; \; x>-1, \; f(0)=1$,
• $f$ is convex on $(-1,0)$,
• $f^\prime (0^-)\leq f^\prime (0^+).$

then $f=\mbox{uxp}$. [Here $f=\mbox{uxp}$ is Hooshmand's name for the linear approximation to the natural tetration function.]

The proof is much the same as before; the recursion equation ensures that $f^\prime (-1^+) = f^\prime (0^+),$ and then the convexity condition implies that $f$ is linear on (-1, 0).

Therefore the linear approximation to natural tetration is the only solution of the equation $f(x)=e^{f(x-1)} \; \; (x>-1)$ and $f(0)=1$ which is convex on $(-1,+\infty)$. All other sufficiently-differentiable solutions must have an inflection point on the interval (-1, 0).

##### Higher order approximations for the extension to real heights

A quadratic approximation (to the differentiability requirement) is given by:

${}^{x}a \approx \begin{cases} \log_a({}^{x+1}a) & x \le -1 \\ 1 + \frac{2\log(a)}{1 \;+\; \log(a)}x - \frac{1 \;-\; \log(a)}{1 \;+\; \log(a)}x^2 & -1 < x \le 0 \\ a^{\left({}^{x-1}a\right)} & 0 < x \end{cases}$

which is differentiable for all $x > 0$, but not twice differentiable. If $a = e$ this is the same as the linear approximation. A cubic approximation and a method for generalizing to approximations of degree n are given in [8].
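To make the piecewise formulas above concrete, here is a small sketch of my own (not from the article) of the linear approximation to ${}^{x}a$; it reproduces the two example values quoted in the Examples subsection.

```python
import math

def tet_linear(a, x):
    """Linear approximation to ^x a for real heights (piecewise definition).

    ^x a = x + 1 on -1 < x <= 0, extended upward by ^x a = a ** (^(x-1) a)
    and downward by ^x a = log_a(^(x+1) a).  For a > 1 the downward
    recursion breaks down by x = -2 (log of a non-positive number); for
    0 < a < 1 it can continue further, as the second example shows.
    """
    if x <= -1:
        return math.log(tet_linear(a, x + 1), a)
    if x <= 0:
        return 1.0 + x
    return a ** tet_linear(a, x - 1)

print(tet_linear(math.e, math.pi / 2))   # ~ 5.868...
print(tet_linear(0.5, -4.3))             # ~ 4.03335...
```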
#### Extension to complex heights

[Figure: drawing of the analytic extension $f=F(x+{\rm i}y)$ of tetration to the complex plane; levels $|f|=1,e^{\pm 1},e^{\pm 2},\ldots$ and levels $\arg(f)=0,\pm 1,\pm 2,\ldots$ are shown with thick curves.]

There is a conjecture[9] that there exists a unique function F which is a solution of the equation F(z+1)=exp(F(z)) and satisfies the additional conditions that F(0)=1 and F(z) approaches the fixed points of the logarithm (roughly 0.31813150520476413531 ± 1.33723570143068940890i) as z approaches ±i∞ and that F is holomorphic in the whole complex z-plane, except the part of the real axis at z≤−2. This function is shown in the figure above. The complex double precision approximation of this function is available online.[10]

The requirement of holomorphism of tetration is important for the uniqueness. Many functions $S$ can be constructed as

$S(z)=F\!\left(~z~ +\sum_{n=1}^{\infty} \sin(2\pi n z)~ \alpha_n +\sum_{n=1}^{\infty} \Big(1-\cos(2\pi n z) \Big) ~\beta_n \right)$

where $\alpha$ and $\beta$ are real sequences which decay fast enough to provide the convergence of the series, at least at moderate values of $\Im(z)$.

The function S satisfies the tetration equations S(z+1)=exp(S(z)), S(0)=1, and if αn and βn approach 0 fast enough it will be analytic on a neighborhood of the positive real axis. However, if some elements of {α} or {β} are not zero, then function S has multitudes of additional singularities and cutlines in the complex plane, due to the exponential growth of sin and cos along the imaginary axis; the smaller the coefficients {α} and {β} are, the further away these singularities are from the real axis.

The extension of tetration into the complex plane is thus essential for the uniqueness; the real-analytic tetration is not unique.

## Open questions

It is not known if ${}^{n}\pi$ or ${}^{n}e$ is an integer for any positive integer n.

## Inverse relations

Exponentiation has two inverse relations: roots and logarithms. Analogously, the inverse relations of tetration are often called the super-root, and the super-logarithm.

### Super-root

The super-root is the inverse relation of tetration with respect to the base: if $^n y = x$, then y is an nth super-root of x. For example, $^4 2 = 2^{2^{2^{2}}} = 65,536$ so 2 is the 4th super-root of 65,536 and $^3 3 = 3^{3^{3}} = 7,625,597,484,987$ so 3 is the 3rd super-root (or super cube root) of 7,625,597,484,987.

#### Square super-root

[Figure: the graph $y = \sqrt{x}_s$.]

The 2nd-order super-root, square super-root, or super square root has two equivalent notations, $\mathrm{ssrt}(x)$ and $\sqrt{x}_s$. It is the inverse of $^2 x = x^x$ and can be represented with the Lambert W function:[11]

$\mathrm{ssrt}(x)=e^{W(\ln(x))}=\frac{\ln(x)}{W(\ln(x))}$

The function also illustrates the reflective nature of the root and logarithm functions, as the equation below only holds true when $y = \mathrm{ssrt}(x)$:

$\sqrt[y]{x} = \log_y {x}$

Like square roots, the square super-root of x may not have a single solution. Unlike square roots, determining the number of square super-roots of x may be difficult. In general, if $e^{-1/e}<x<1$, then x has two positive square super-roots between 0 and 1; and if $x > 1$, then x has one positive square super-root greater than 1. If x is positive and less than $e^{-1/e}$ it doesn't have any real square super-roots, but the formula given above yields countably infinitely many complex ones for any finite x not equal to 1.[11] The function has been used to determine the size of data clusters.[12]
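A quick numerical sketch (mine, not the article's) of the square super-root via the Lambert W form above, again assuming SciPy's `lambertw`:

```python
import math
from scipy.special import lambertw

def ssrt(x):
    """Square super-root: the y with y**y == x, via ln(x)/W(ln(x)).

    Real-valued for x > 1; for x below e^(-1/e) the result is complex,
    matching the discussion above.
    """
    w = lambertw(math.log(x))        # principal branch
    return (math.log(x) / w).real

y = ssrt(256.0)
print(y, y ** y)                     # y = 4, since 4**4 = 256
```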
#### Other super-roots

For each integer n > 2, the function ${}^{n}x$ is defined and increasing for $x \ge 1$, and ${}^{n}1 = 1$, so that the nth super-root of x, $\sqrt[n]{x}_s$, exists for $x \ge 1$. However, if the linear approximation above is used, then $^y x = y + 1$ if $-1 < y \le 0$, so $\sqrt[y]{y + 1}_s$ cannot exist.

Other super-roots are expressible under the same basis used with normal roots: super cube roots, the function that produces y when $x = y^{y^y}$, can be expressed as $\sqrt[3]{x}_s$; the 4th super-root can be expressed as $\sqrt[4]{x}_s$; and it can therefore be said that the nth super-root is $\sqrt[n]{x}_s$. Note that $\sqrt[n]{x}_s$ may not be uniquely defined, because there may be more than one nth root. For example, x has a single (real) super-root if n is odd, and up to two if n is even.

The super-root can be extended to $n = \infty$, and this shows a link to the mathematical constant e as it is only well-defined if $1/e \le x \le e$ (see extension of tetration to infinite heights). Note that $x = {^\infty y}$ implies that $x = y^x$ and thus that $y = x^{1/x}$. Therefore, when it is well defined, $\sqrt[\infty]{x}_s = x^{1/x}$ and thus it is an elementary function. For example, $\sqrt[\infty]{2}_s = 2^{1/2} = \sqrt{2}$.

### Super-logarithm

Main article: Super-logarithm

Once a continuous increasing (in x) definition of tetration, ${}^{x}a$, is selected, the corresponding super-logarithm $\operatorname{slog}_a x$ is defined for all real numbers x, and $a > 1$. The function $\mathrm{slog}_a$ satisfies:

$\mathrm{slog}_a {^x a} = x$
$\mathrm{slog}_a a^x = 1 + \mathrm{slog}_a x$
$\mathrm{slog}_a x = 1 + \mathrm{slog}_a \log_a x$
$\mathrm{slog}_a x > -2$

## References

1. R. L. Goodstein (1947). "Transfinite ordinals in recursive number theory". Journal of Symbolic Logic 12 (4): 123–129. DOI:10.2307/2266486. JSTOR 2266486.
2. N. Bromer (1987). "Superexponentiation". Mathematics Magazine 60 (3): 169–174. JSTOR 2689566.
3. J. F. MacDonnell (1989). "Some critical points of the hyperpower function $x^{x^{\dots}}$". International Journal of Mathematical Education 20 (2): 297–305. MR 994348.
4. Weisstein, Eric W., "Power Tower" from MathWorld.
5. M. H. Hooshmand (2006). "Ultra power and ultra exponential functions". Integral Transforms and Special Functions 17 (8): 549–558. DOI:10.1080/10652460500422247.
7. "Power Verb". J Vocabulary. J Software. Retrieved 28 October 2011.
11. Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J.; Knuth, D. E. (1996). "On the Lambert W function" (PostScript). Advances in Computational Mathematics 5: 333. DOI:10.1007/BF02124750.

• Daniel Geisler, tetration.org
• Ioannis Galidakis, On extending hyper4 to nonintegers (undated, 2006 or earlier) (A simpler, easier to read review of the next reference)
• Ioannis Galidakis, On Extending hyper4 and Knuth's Up-arrow Notation to the Reals (undated, 2006 or earlier).
• Robert Munafo, Extension of the hyper4 function to reals (An informal discussion about extending tetration to the real numbers.)
• Lode Vandevenne, Tetration of the Square Root of Two, (2004). (Attempt to extend tetration to real numbers.)
• Ioannis Galidakis, Mathematics, (Definitive list of references to tetration research.
Lots of information on the Lambert W function, Riemann surfaces, and analytic continuation.)
• Galidakis, Ioannis and Weisstein, Eric W. Power Tower
• Joseph MacDonnell, Some Critical Points of the Hyperpower Function.
• Dave L. Renfro, Web pages for infinitely iterated exponentials (Compilation of entries from questions about tetration on sci.math.)
• R. Knobel. "Exponentials Reiterated." American Mathematical Monthly 88 (1981), pp. 235–252.
• Hans Maurer. "Über die Funktion $y=x^{[x^{[x(\cdots)]}]}$ für ganzzahliges Argument (Abundanzen)." Mittheilungen der Mathematischen Gesellschaft in Hamburg 4 (1901), pp. 33–50. (Reference to usage of $\ {^{n} a}$ from Knobel's paper.)
• Ripà, Marco (2011). La strana coda della serie n^n^...^n, Trento, UNI Service. ISBN 978-88-6178-789-6.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 209, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.879471480846405, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/106264/list
Are you sure that $G$ isn't required to be connected? I think this is needed in order to construct the "valued root datum" structure which underlies Bruhat-Tits structure theory. Anyway, the key point is that there is the concept of "valuation" on the root datum, which is really a collection of "valuations" on the possibly non-commutative groups $U_a(K)$ subject to axioms defined in the first big Bruhat-Tits paper in IHES, which I'll call BTI. The existence of this kind of structure on $G(K)$ requires the full power of the theory of the 2nd Bruhat-Tits IHES paper (developed in more "modern" terms in later work of others, such as J-K. Yu), and it requires connectedness of $G$. On the set of such "valuations" there is a natural free action of $V$ and elements of the same equivalence class are called "parallel". The equivalence classes are naturally affine spaces for $V$, and the group $N(K)$ acts naturally on the entire set preserving each equivalence class through an action by affine transformations (with $Z(K)$ acting through the translation formulas as you have written down). This is all pure group theory formalism (but far from trivial to set up), the definitions of which have nothing to do with any topological structure on $K$. The specification of a valuation on $K$ selects out a preferred equivalence class, and that is the one used to define $\phi$. (For example, if $G = {\rm{SL}}_2$ and $S$ is the diagonal split maximal torus of $G$ over a field $F$ then the parallelism classes of "valuations" on $G(F)$ in accordance with the root datum for $(G,S)$ correspond exactly to choices of nontrivial non-archimedean $\mathbf{R}$-valued valuations on the abstract field $F$.)

So what you're missing is the (highly non-trivial to develop!) definition of the principal homogeneous space for $V$ which supports the action of $N(K)$. In other words, although one can say in concrete terms that the target of $\phi$ is the group of affine transformations of $V$, this is conceptually misleading: it is really the group of automorphisms of a more intrinsic affine space for $V$ in which there is absolutely no canonical base point (intrinsic to $(G,S,\Phi^+,\omega)$). I suppose there could be a way to make the definition of $\phi$ by bare hands (or at least give formulas, without proving things are well-defined), but my understanding (which could be incomplete) is that using a specific parallelism class of "valuations" as indicated above provides the only natural way to make the definition.

Take a look at section 6 of BTI to learn what a valued root datum is, and the many nontrivial properties of this kind of structure. I think that BTI is more illuminating in certain conceptual respects than the Corvallis paper (though of course it doesn't have the rich supply of interesting examples as in the Corvallis paper, and is a rather challenging paper to read).
http://mathoverflow.net/questions/32324?sort=oldest
## What is known about the transcendence of zeroes of Riemann zeta?

I was wondering if there are any well-known results or hunches about whether the non-trivial zeroes of Riemann zeta (or zeta/L-functions in general) are algebraic or not.

- 1 Seeing John's answer, I would agree that it is a nice question. – Wadim Zudilin Jul 18 2010 at 2:57

## 2 Answers

Every non-trivial zero of every L-function, besides possible zeros at $s=1/2$, is conjectured to be of the form $s=1/2+i\gamma$ with $\gamma$ real (GRH) and transcendental. I learned this from (for example) the Rubinstein-Sarnak paper on Chebyshev biases, but they were not the first to enunciate it.

- David, thanks for your self-criticism. I would add that it is expected that the ordinates of Riemann's zeroes are algebraically independent. But this is definitely out of reach because there are no constructions known even for rational approximations to a single zero. – Wadim Zudilin Jul 18 2010 at 23:43
- 2 Indeed, I don't think it has even been proved that there is a single zero of the Riemann zeta function where the imaginary part is irrational. – Greg Martin Jul 27 2010 at 7:41

There is a paper by A. E. Ingham, "On two conjectures in the theory of numbers", Amer. J. Math. 64 (1942), 313-319, where he shows that if the ordinates of the non-trivial zeros of the Riemann zeta-function are linearly independent over $\mathbb{Q}$ then the Mertens conjecture is false. This is, of course, weaker than the Rubinstein-Sarnak conjecture, but related and much earlier.

- 1 John, this is a very surprising (to me) result and the paper is quite influential (for example, the methods were used in [C.B. Haselgrove, Mathematika 5 (1958) 141--145; dx.doi.org/10.1112/S0025579300001480] for a disproof of a conjecture of Pólya). Here is the (jstor) link to Ingham's paper for those who are interested: dx.doi.org/10.2307/2371685. I learn from this answer more than from David's (which represents the standard observations). – Wadim Zudilin Jul 18 2010 at 2:55
- 1 After some digging I've learnt that the Mertens conjecture (that $|\sum_{k=1}^n\mu(k)| \le \sqrt n$ holds for all $n$) was only recently shown to fail for some $n$ no smaller than $10^{12}$ and probably larger than $10^{30}$ [J. Havil, Gamma: Exploring Euler's Constant, Princeton University Press, Princeton, 2003, pp. 207--210]. – Wadim Zudilin Jul 18 2010 at 4:03
- 1 @Wadim: I agree, this answer is much more useful. :) – David Hansen Jul 18 2010 at 20:21
- 1 It has been a while since I have read the paper, but I think Odlyzko & te Riele's disproof of the Mertens conjecture (Crelle 357 (1985), 138-160) relies heavily on Ingham's ideas. – Micah Milinovich Jul 19 2010 at 0:30
- Here is their paper: dtc.umn.edu/~odlyzko/doc/arch/… – Micah Milinovich Jul 19 2010 at 0:35
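As a small illustration (my own, not from the thread) of the quantity in the Mertens conjecture: the sketch below computes the Mertens function $M(n)=\sum_{k\le n}\mu(k)$ with a simple Möbius sieve and checks $|M(n)|\le\sqrt n$ over an arbitrary small range, which of course proves nothing, since the first counterexample is known to be astronomically large.

```python
import math

def mobius_sieve(limit):
    """Return mu(0..limit): for each prime p, flip sign on multiples of p
    and zero out multiples of p^2 (non-squarefree numbers)."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

N = 100_000                     # arbitrary demo range
mu = mobius_sieve(N)
M, worst = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]
    worst = max(worst, abs(M) / math.sqrt(n))
print(worst)   # stays below 1 here, consistent with (the false) conjecture
```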
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113209843635559, "perplexity_flag": "middle"}
http://math.stackexchange.com/users/4726/tpofofn?tab=activity&sort=comments
# Tpofofn

| | |
|---|---|
| reputation | 1,527 |
| badges | 514 |
| member for | 2 years, 5 months |
| seen | 23 hours ago |
| profile views | 139 |

# 125 Comments

| Date | Type | Question | Comment |
|------|------|----------|---------|
| 1d | comment | Can you find the resultant force between these two vectors? | @iostream007, I think your equation should be $|150+ 300\times \cos 110^\circ|$ because the negative will come from $\cos$. |
| 1d | comment | Circular motion trig | No, because the argument to the $\sin$ and $\cos$ functions are $\theta = \dfrac{\pi}{3} - k \cdot 2\pi$ and $\theta = -\dfrac{\pi}{3} - k \cdot 2\pi$ which each only have one value in the interval $[0,2\pi]$. |
| May9 | comment | This is regarding Vector spaces | So the space only has one element $a$? |
| Apr26 | comment | 4 dimensional numbers | Is your multiplication table right? By it, both the products $ij$ and $jk$ are anti-symmetric, however the product $ik$ is not. For example, $ik = i$, but $ki=-j$. Is this correct? |
| Apr26 | comment | Max of two vectors - how is this evaluated? | It depends entirely on how you define $\max$. It is perfectly reasonable to define $\max \{\mathbb{v},0\}$ as being vector valued. In that case there is no ambiguity. For example, if $v = [-1, 1]^T$ then $\max \{\mathbb{v},0\} = [0, 1]^T$ |
| Apr23 | comment | What is this expression called? | @Nexcius, You could think of $A_{1} * B_{2} - A_{2} * B_{1}$ as the determinant of a matrix or as a wedge product. |
| Apr22 | comment | Intuition why the volume and surface area of the unit sphere eventually decrease | |
| Apr22 | comment | Determine all vector subspaces of the real vector space $\mathbb{R}^2$ | The 2 dimensional subspace and 0 dimension subspace are trivial. How would you describe the set of 1 dimension subspaces? What would a basis of such a subspace look like? |
| Apr22 | comment | Is there a nice way to interpret this matrix equation that comes up in the context of least squares | @crf, Once you reduce the problem to $A\mathbf{x}=\mathbf{y}$, you are breaking with the knowledge that the elements of $A$ come from powers of $x$. The expression of the normal equations basically projects the entire problem onto the column space of $A$. As long as the columns of $A$ are linearly independent, it does not matter where they came from. You are translating the problem from one domain (i.e. curve fitting) to another (linear algebra) to simplify the solution. You should not insist that notions from one domain maintain their meaning in the other. |
| Apr19 | comment | how to i answer this calculus hw problem? | How would you approach the problem with only one sub-interval (i.e. [1,5])? |
| Apr3 | comment | Cross Product Intuition | +1: This is a great question! |
| Mar23 | comment | An eigenvector is a non-zero vector such that… | Yes, but they are defined that way precisely because they represent a non-trivial subspace (i.e. they have to span that space). In other words... the feature that makes them interesting and worthy of definition is that they are non-zero and still satisfy $\mathbf{Ax}=\lambda\mathbf{x}$. |
| Mar23 | comment | An eigenvector is a non-zero vector such that… | @ScottH. I think that you are confusing the notions of a space and a basis. A basis is a non-zero vector which is linearly independent (i.e. cannot contain the zero vector) and spans the space. An eigenvector cannot be zero because it is a basis. The space spanned by the eigenvector must contain the zero vector. |
| Mar22 | comment | An eigenvector is a non-zero vector such that… | An eigenspace is spanned by a non-zero eigenvector associated with a particular eigenvalue. The eigenspace must be at least one dimensional and therefore excludes using the zero vector as an eigenvector. |
| Mar6 | comment | New vector position | Is the axis aligned with one of your coordinate axes? If so, then you can omit that axis in your calculations. Don't include it in your distance calculations and don't include it in your scaling calculations. If it is not aligned with a coordinate axis, then you need something more sophisticated. |
| Mar6 | comment | New vector position | It is essentially the same equation just written a different way. The steps you need to go through are as follows: (1) compute the average distance, (2) compute the relative vector to the center $v_i - c$, (3) normalize it to unit length by dividing it by its own length, (4) scaling it to the average length and then finally (5) adding the center back onto the result. The referenced post does essentially the same thing, except that it distributes the scaling to each component. |
| Mar5 | comment | New vector position | Almost. You should use: npX = (vts[0] - cs[0]) / (distanceFromCenter[for this point]) * averageDistance + cs[0] |
| Mar5 | comment | New vector position | Just the magnitude of the vector. I believe that you compute it in your first loop and call it distanceFromCenter. |
| Mar5 | comment | New vector position | So you have 8 points (for example) and a center point. You want to update the 8 points based on their distance to the center point. It is not clear what you want the new points to be based on their distance to the center point. Please clarify |
| Mar5 | comment | New vector position | observing your code, it appears that you are summing the coordinates in oldCoordArray. If you divide by the number of points, you should have the center of the points. |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213837385177612, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-math-topics/205674-colored-points-proof-help.html
1. ## Colored points proof

Hey there I need some help with another problem

Every point in the plane is colored either red, green, blue or yellow. Prove that there exists a rectangle in the plane such that all 4 of its vertices are the same color.

So far I know that a color repeats among any 5 points, but I'm not sure how to use this to progress the problem further. Hope someone can guide me through this problem so I understand how to do it.

2. ## Re: Colored points proof

Interesting problem. I haven't proven it (*edit* - I have now - at the bottom), but I think this is a way to work toward a solution. Here's what comes to mind:

$\text{Consider the strips of integer verticies }S_n = \mathbb{Z} \times \{1, 2, ..., n \}.$ $\text{Define } f : \mathbb{Z}_{\ge 2} \rightarrow \mathbb{Z}_{\ge 2} \text{ by } f(n) = \text{ the minimum number of colors for which }S_n$ $\text{can be colored without making a "bad" rectangle (a rectangle whose vertices are all the same color).}$ $\text{An m-coloring is just a map } \phi : S_n \rightarrow \{ 1, 2, ..., m \}, \text{ where we think of the range as a set of m colors.}$ $\text{Have } f(2) = 2, \text{ as can color }S_2 \text{ by } \phi(x, 1) = \text{red}, \phi(x, 2) = \text{blue} \forall x \in \mathbb{Z},$ $\text{and obviously any coloring of } S_2 \text{ in just one color will have many (all!) rectangles being "bad"}.$ $\text{First observation: For all }n \ge 2, f(n) \le f(n+1) \le f(n) + 1.$ $\text{Proof: If can color } S_{n+1} \text{ in } f(n+1) \text{ colors w/o a bad rectangle, then that colors}$ $S_n \text{ in } f(n+1) \text{ colors w/o a bad rectangle, so }f(n) \le f(n+1).$ $\text{If } \phi \text{ colors } S_{n} \text{ in } f(n) \text{ colors w/o a bad rectangle, then } S_{n+1} \text{ can be colored}$ $\text{in } f(n)+1 \text{ colors w/o a bad rectangle, by keeping the coloring } \phi \text{ on } S_{n} \text{ while coloring the}$ $\text{top horizontal line of } S_{n+1} (=\mathbb{Z} \times \{ n+1 \})\text{ with a single new color.}$ $\text{Therefore } f(n+1) \le f(n)+1.$ $\text{Claim: } f(3) = 3.$ $\text{Proof: Assume f(3) = 2. Let } \phi \text{ be the coloring in 2 colors of } S_3 \text{ without a bad rectangle.}$ $\text{Obviously, 3 points of the same color can only happen above an x at most once for each color.}$ $\text{Thus for infinitely many x, there are two of one color, one of the other above x.}$ $\text{WLOG (Without Loss Of Generality) say red appears exactly once above infinitely many x.}$ $\text{Then for infinitely many x, there's some } y_0 \in \{1, 2, 3 \} \text{ such that}$ $\phi(x, y_0) = \text{ "red", and } \phi(x, y) = \text{ "blue" for } y \ne y_0.$ $\text{That's because there's exactly one red above infinitely many x,}$ $\text{so that red couldn't occur for only finitely many x in all 3 possible rows.}$ $\text{But that means that for infinitely many x, }\phi \text{ will color the other two rows blue,}$ $\text{and that's for an infinite number of x's. Thus there's a bad "blue" rectangle.}$ $\text{So the assumption } f(3) = 2 \text{ leads to a contradiction. 
Thus } f(3) \ne 2.$ $\text{But by the first observation, } 2 = f(2) \le f(3) \le f(2)+1 = 3.$ $\text{Therefore } f(3) = 3.$ $\text{What do you want to bet that } f(n) = n?$ $\text{In particular, if } f(5) = 5, \text{ then you'll know that on } S_5,$ $\text{any coloring with 4 colors produces a "bad" rectangle.}$ $\text{Since } S_5 \subset \mathbb{Z} \times \mathbb{Z} \subset \mathbb{R} \times \mathbb{R}, f(5) = 5, \text{ would give the desired proof.}$ $\text{However, merely proving that } f(k) = 5 \text{ for some } k \text{ would suffice.}$ $\text{It would be ideal to prove that } f(n) = n \text{ by induction.}$ $\text{Based on the initial observation, one need only show that } f(n+1) \ne f(n).$ ---------------------------------- $\text{Aside: I see a proof that it's true by induction, } f(n) = n \text{ for all }n \ge 2.$ Spoiler: $\text{The induction step is: First, assume the inductive hypothesis: } f(n) = n.$ $\text{Next ASSUME, gearing for a contradiction, that } f(n+1) = f(n).$ $\text{Consider a no-bad-rectangle coloring of } S_{n+1} \text{ in } f(n+1) = f(n) = n \text{ colors.}$ $\text{On } S_{n+1} \text{ consider the } y=n+1 \text{ top horizontal row.}$ $\text{That row must be colored using an infinite number of some color, say green.}$ $\text{We'll hereafter restrict our attention only to those countable number of x such that }$ $\phi(x, n+1) = \text{ green.}$ $\text{If, on } S_n, \phi \text{ used green only a finite number of times, then discard those x's, and will have}$ $\text{an infinite number of x such that } \phi \text{ colors } S_n \text{ without using any green.}$ $\text{But now that's a coloring in } n-1 \text{ colors over an infinite subset of } \mathbb{Z}, \text{ and so could've been}$ $\text{used to construct an } n-1 \text{ coloring of } S_n, \text{ contrary to } f(n) = n.$ $\text{Therefore, on } S_n, \phi \text{ uses green for an infinite number of x.}$ $\text{(Note that that infinity isn't even counting the top horizontal row of } S_{n+1} \text{ that's colored infinitely often in green.)}$ $\text{Thus one of the n rows of } S_{n} \text{ must have an infinite number of green colorings.}$ $\text{In particular one such row has at least 2 colorings in green.}$ $\text{Since that reasoning is done after restricting ourselves to only those x's where the n+1 row is colored green,}$ $\text{we get that those two green colorings form a bad green rectangle, which is a contradiction.}$ $\text{Thus the assumption }f(n+1) = f(n) \text{ has produced a contradiction.}$ $\text{By the initial observation, it follows that } f(n+1) = f(n)+1 = n+1.$ $\text{This proves that } f(n) = n \text{ implies that } f(n+1) = n+1.$ $\text{And, with } f(2)=2, f(3) = 3 \text{ established, that completes the proof by induction.}$ 3. ## Re: Colored points proof help Hey there thanks a lot for helping me out! I understand it a lot better now
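As a computational footnote (my own sketch, not part of the thread): restricting to a 5-row integer grid, 4 colors give only 4·C(5,2) = 40 possible (color, row-pair) patterns, so among any 41 columns two must repeat a same-colored pair in the same two rows, forcing a monochromatic rectangle. The brute-force check below confirms this on random 4-colorings.

```python
import random
from itertools import combinations

def mono_rectangle(grid):
    """Find rows r1<r2 and columns c1<c2 with all four cells equal, or None.

    grid[r][c] is the color of integer point (c, r).
    """
    seen = {}                       # (r1, r2, color) -> first column seen
    rows, cols = len(grid), len(grid[0])
    for c in range(cols):
        for r1, r2 in combinations(range(rows), 2):
            if grid[r1][c] == grid[r2][c]:
                key = (r1, r2, grid[r1][c])
                if key in seen:
                    return (r1, r2, seen[key], c)
                seen[key] = c
    return None

# 4 colors, 5 rows, 41 columns: a monochromatic rectangle is unavoidable.
for trial in range(1000):
    grid = [[random.randrange(4) for _ in range(41)] for _ in range(5)]
    assert mono_rectangle(grid) is not None
print("every trial contained a monochromatic rectangle")
```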
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 56, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321242570877075, "perplexity_flag": "head"}
http://mathoverflow.net/questions/67372?sort=oldest
## Distributing points with respect to a concave function

Suppose I have a concave function defined on the unit interval such that $f(0) = f(1) = 0$ and $\int_0^1 f(t) dt = \alpha$, where $\alpha$ is "small" (say $0.01$ or thereabouts). Say I distribute $n$ points $x_1,\dots,x_n$ on the unit interval and consider the function $F(x_1,\dots,x_n) = \int_0^1 f(t) \cdot \min_i{|x_i - t|} dt$. Is there a lower bound on $F$ as a function of $\alpha$ and $n$? If $n=1$, I can show that a lower bound is $\alpha/6$, so I'm curious if something like $\alpha/(6n)$ holds in general.

- Did you try to minimize $F$, e.g. by calculating its (sub-)derivatives with respect to all coordinates $x_i$ and solving the resulting equations? – Dirk Jun 10 2011 at 9:11
- 1 Note that for a fixed $f$, the function $F$ is $C^1$ in the variables $(x_1,\dots,x_n)$, even if $f$ is only integrable. There is a minimizer on the closed simplex {$0\le x_1 \le \dots\le x_n\le 1$} by compactness, and it verifies $0 < x_1 < \dots < x_n < 1$ provided $f$ is a.e. positive. – Pietro Majer Jun 10 2011 at 17:26
- The size of $\alpha = \int_0^1 f(t) \, dt$ is irrelevant: multiplying $f$ by a scalar $c$ preserves concavity and multiplies $F(x_1,\ldots,x_n)$ by the same $c$, so if a bound like $\alpha/(6n)$ holds for "small" $\alpha$ then it also holds without any such hypothesis. – Noam D. Elkies Sep 3 2011 at 20:31
- Is it easy to outline the proof of $F(x_1) \geq \alpha/6$? That might give a start towards the more general problem you ask. – Noam D. Elkies Sep 3 2011 at 20:33

## 2 Answers

You can define the function $f(t)\cdot \min_i|x_i-t|$ piecewise so that $f(t)\cdot \min_i|x_i-t|=f(t)\cdot|x_i-t|$ when $t$ is on the interval $\left[\frac{x_{i-1}+x_i}{2},\frac{x_i+x_{i+1}}{2}\right]$. Here we have to define $x_0=-x_1$ (so $\frac{x_{0}+x_1}{2}=0$) and $x_{n+1}=2-x_n$ (so $\frac{x_n+x_{n+1}}{2}=1$). This makes the integral

\begin{equation} F(x_1,...,x_n)=\sum_{i=1}^n\int_\frac{x_{i-1}+x_i}{2}^\frac{x_i+x_{i+1}}{2}f(t)\cdot|x_i-t|dt \end{equation}

Now, $f$ isn't $0$ at the endpoints of these intervals and the intervals don't have length $1$, but $f$ is concave on each interval and the integrals of $f$ over the intervals average to $\alpha/n$. You can consider $f$ on each interval as the limit of a sequence of concave functions defined on that interval which are each $0$ at the endpoints, so you should be able to put a lower bound on each integral in the same way that you did it for $\int_0^1f(t)\cdot|x_1-t|dt$. I can't say what that lower bound might be, because I don't know how (or if) you used the length of the interval $[0,1]$ when you found the lower bound for $\int_0^1f(t)\cdot|x_1-t|dt$. If you didn't use the length of the interval at all, the integrals will have to be on average greater than or equal to $\alpha/(6n)$, implying that $F(x_1,...,x_n)$ is bounded below by $\alpha/6$, which would be kind of interesting.

One more thing to note is that unlike your example when $n=1$, for the integrals in the sum above you have the additional condition that $f(x_i)$ is bounded away from $0$ by the value of $f$ at at least one of the endpoints of the intervals. Maybe this will make it possible to put stricter lower bounds on each of these integrals.

I hope all that was clear (and correct).
It seems that for large $n$ there will be an upper bound $c_n \alpha$ with $c_n$ asymptotic to $\frac{2}{9n}$ (and thus a bit better than the $\frac{1}{6n}$ suggested by the value of $c_1$). The asymptotic equality condition should be: $f$ is a triangular function $\tau_z$ for some $z \in (0,1)$, defined by

$$\tau_z(x) = \begin{cases} x/z, & \text{if } x \leq z, \\ (1-x)/(1-z), & \text{if } x \geq z \end{cases}$$

(i.e. $\tau_z$ is piecewise linear with a corner only at $x=z$, where $\tau_z$ attains its maximal value of $1$), and $x_1,\ldots,x_n$ are distributed on $(0,1)$ with density proportional to $\sqrt{f}$ (which is probably not what most of us would have guessed at first).

The condition that $\alpha = \int_0^1 f(x) \, dx$ be "small" is a red herring because the problem is linear in $f$, so that what works for small $\alpha$ works equally for all $\alpha$. It is also enough to work with $f = \tau_z$ because the convex hull of functions $c \tau_z$ is precisely the concave functions $f$ on $[0,1]$ vanishing at the endpoints; e.g. if such $f$ has a continuous second derivative then $f(x) = \int_{z=0}^1 (z-z^2) \, (-f''(z)) \, \tau_z(x) \, dz$, and $-f''(z) \geq 0$ for $f$ concave.

Now the idea is that if $x_1,\ldots,x_n$ are regularly distributed with density $\delta(\cdot)$ on $[0,1]$ then $\min_i |x_i - t|$ behaves like $1/(4\delta(t))$, because it oscillates between $0$ and $1/(2\delta(t))$ like a nonnegative triangle wave. The only restriction on $\delta$ is that it be a positive function with $\int_0^1 \delta(t) \, dt = n$. By Cauchy-Schwarz,

$$\int_0^1 \frac{\tau_z(t)}{\delta(t)}dt \cdot \int_0^1 \delta(t) \, dt \geq \left( \int_0^1 \sqrt{\tau_z(t)} \, dt \right)^2,$$

with equality iff $\delta^2$ is proportional to $\tau_z$. Now we know $\int_0^1 \delta(t) \, dt = n$, and compute $\int_0^1 \sqrt{\tau_z(t)} \, dt = 2/3$ for all $z$. Hence $\int_0^1 (\tau_z(t) / \delta(t)) \, dt \geq \frac4{9n}$. Since $\int_0^1 \tau_z(t) \, dt = 1/2$, we deduce that

$$\int_0^1 \frac{\tau_z(t)}{4\delta(t)}dt \geq \frac2{9n} \int_0^1 \tau_z(t) \, dt,$$

from which the claimed asymptotic should follow after some epsilon-chasing.
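A rough numerical check of the $\frac{2}{9n}$ asymptotic (my own sketch; the choice of $f=\tau_{1/2}$, the grid size, and the Nelder-Mead optimizer are all arbitrary): minimize a discretized $F$ over the point positions and compare the minimal $F/\alpha$ with $\frac{2}{9n}$.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 2001)
f = np.where(t <= 0.5, t / 0.5, (1 - t) / 0.5)     # tent function tau_{1/2}
alpha = np.trapz(f, t)                              # = 1/2

def F(points):
    # min_i |x_i - t| on the grid, then integrate f(t) * min-distance
    dist = np.min(np.abs(t[None, :] - np.asarray(points)[:, None]), axis=0)
    return np.trapz(f * dist, t)

for n in (2, 4, 8):
    x0 = (np.arange(n) + 0.5) / n                  # evenly spaced start
    res = minimize(F, x0, method="Nelder-Mead")
    print(n, res.fun / alpha, 2 / (9 * n))         # ratios should be comparable
```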
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 98, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432228207588196, "perplexity_flag": "head"}
http://mathoverflow.net/questions/23295?sort=newest
## Relation between the Hilbert class polynomial of $\mathcal{O}_K$ and an order

Hi all, I have been looking at complex multiplication of elliptic curves for a course project in cryptography and the following question came up:

Let $\mathcal{O}_K$ be the maximal order in $K$ ($K$ is an imaginary quadratic field), let $h_K (X)$ be the Hilbert class polynomial of $K$. Suppose that $\mathcal{O}$ is another order (say $\mathcal{O}_K =\mathbb{Z}[ \frac{1 + \sqrt{D}}{2}]$ and $\mathcal{O} = \mathbb{Z}[\sqrt{D}]$ for concreteness). Let $h_\mathcal{O} (X)$ be the Hilbert class polynomial of the order $\mathcal{O}$.

Is there any relation between $h_K(X)$ and $h_\mathcal{O} (X)$? For example, can one obtain $h_\mathcal{O}(X)$ from $h_K(X)$ and vice versa?

## 1 Answer

As far as I know, the best relation between the two is the following: the field generated by the Hilbert class polynomial $h_\mathcal{O} (X)$ contains the field generated by $h_K(X)$. This is implied by Proposition 25 of http://www.math.uga.edu/~pete/torspaper.pdf

This implies among other things that $\deg(h_K(X)) \mid \deg(h_\mathcal{O}(X))$ (although this could be determined by simpler means).

Now as to your question about whether one can be generated from the other: no, unless you're in a very limited set of circumstances like $\deg(h_\mathcal{O}(X)) =1$ or such a thing. In fact it's a celebrated theorem of Heilbronn that $\deg(h_\mathcal{O}(X)) \to \infty$ as $|D| \to \infty$ where $D$ is the discriminant of $\mathcal{O}$.
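For intuition about the degrees involved (a sketch of my own, not from the thread): $\deg h_{\mathcal{O}}(X)$ equals the class number of the order, which for small discriminants can be computed by counting reduced primitive positive-definite binary quadratic forms.

```python
from math import gcd

def class_number(D):
    """Class number h(D) for a discriminant D < 0 of an imaginary quadratic
    order, counting reduced primitive forms ax^2 + bxy + cy^2 with
    b^2 - 4ac = D, -a < b <= a <= c, and b >= 0 whenever a == c."""
    assert D < 0 and D % 4 in (0, 1)
    count, a = 0, 1
    while 3 * a * a <= -D:          # reduced forms satisfy 3a^2 <= |D|
        for b in range(-a + 1, a + 1):
            q = b * b - D
            if q % (4 * a) == 0:
                c = q // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1:
                    if b >= 0 or a != c:
                        count += 1
        a += 1
    return count

# K = Q(sqrt(-15)): the maximal order has discriminant -15,
# while the order Z[sqrt(-15)] has discriminant -60.
print(class_number(-15), class_number(-60))   # 2, 2  (and 2 | 2)
print(class_number(-3), class_number(-48))    # 1, 2  (and 1 | 2)
```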
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9459534287452698, "perplexity_flag": "head"}
http://physicspages.com/2011/10/12/electrostatic-boundary-conditions/
Notes on topics in science ## Electrostatic boundary conditions Required math: calculus, algebra Required physics: electrostatics Reference: Griffiths, David J. (2007) Introduction to Electrodynamics, 3rd Edition; Prentice Hall – Sec 2.3.5. Problems in electrostatics frequently make use of surface charge distributions, in which charge is imagined as smeared out over a mathematical surface, such as a spherical shell. Crossing such a surface results in a discontinuity in the electric field. Qualitatively, for a plane this is fairly obvious, since if the surface charge distribution consists of, say, positive charge, then the electric field has to point away from the surface on both sides. We can use Gauss’s law to work out by how much the electric field is discontinuous. Suppose we have some surface with a surface charge density of ${\sigma}$. This density may depend on the location, and the surface may be curved, but if we consider a small piece of area of size ${A}$, and build a little ‘pillbox’ that encloses this piece of the surface and extends a tiny distance above and below the surface, then Gauss’s law says $\displaystyle \oint\mathbf{E}\cdot d\mathbf{a}=\frac{q}{\epsilon_{0}}$ where the integral on the left is over the surface of the pillbox and ${q}$ is the charge enclosed by the pillbox. For a small enough area, to first order ${\sigma}$ is a constant across the area, so the total charge enclosed by the pillbox is ${\sigma A}$. Similarly, if the area is small enough, ${\mathbf{E}}$ is constant across the area (although a different constant on each side of the surface), so that ${\oint\mathbf{E}\cdot d\mathbf{a}=(E_{\perp}^{above}-E_{\perp}^{below})A}$. (The minus sign arises since ${d\mathbf{a}}$ points in opposite directions on the two faces of the pillbox.) We therefore get $\displaystyle E_{\perp}^{above}-E_{\perp}^{below}=\frac{\sigma}{\epsilon_{0}}$ Thus the perpendicular component of the field has a discontinuity of ${\sigma/\epsilon_{0}}$ as we cross the surface. (The contributions to the surface integral from the sides of the pillbox can be made as small as we like by decreasing the thickness of the pillbox, so that its two faces lie essentially right in the surface itself.) What about the component of ${\mathbf{E}}$ that is parallel to the surface? From Stokes’s theorem, since ${\nabla\times\mathbf{E}=0}$ in electrostatics, we know that the line integral of ${\mathbf{E}\cdot d\mathbf{l}}$ is always zero: $\displaystyle \oint\mathbf{E}\cdot d\mathbf{l}=0$ Now if we choose a path that is a little rectangle whose plane is perpendicular to the surface, where one side of the rectangle lies above the surface and the opposite side lies below it, then ${\oint\mathbf{E}\cdot d\mathbf{l}=E_{\parallel}^{above}l-E_{\parallel}^{below}l}$ where ${l}$ is the length of the side. The minus sign arises from the fact that when we integrate around a rectangle ${d\mathbf{l}}$ points in opposite directions on opposite sides of the rectangle. (Again, we can make the other two sides of the rectangle (the sides perpendicular to the surface) as small as we like, so there is no contribution from them.) Since the integral is zero, we conclude $\displaystyle E_{\parallel}^{above}=E_{\parallel}^{below}$ That is, the component of ${\mathbf{E}}$ parallel to the surface is continuous across the surface. 
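(A numerical aside, not in the original post: for a uniformly charged disk of radius ${R}$, the standard closed-form on-axis field is ${E_z(z)=\pm\frac{\sigma}{2\epsilon_{0}}\left(1-|z|/\sqrt{z^{2}+R^{2}}\right)}$, so evaluating it just above and just below the surface exhibits the ${\sigma/\epsilon_{0}}$ jump derived above. A short Python sketch with arbitrary test values:)

```python
import numpy as np

eps0 = 8.854187817e-12  # vacuum permittivity, F/m
sigma = 1.0e-6          # surface charge density, C/m^2 (arbitrary test value)
R = 0.5                 # disk radius, m

def E_on_axis(z):
    """On-axis field of a uniformly charged disk centred at the origin.
    Points away from the disk on both sides when sigma > 0."""
    return np.sign(z) * sigma / (2 * eps0) * (1 - abs(z) / np.hypot(z, R))

for h in (1e-2, 1e-4, 1e-6):
    jump = E_on_axis(h) - E_on_axis(-h)
    print(h, jump, sigma / eps0)  # jump -> sigma/eps0 as h -> 0
```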
Since the potential difference between two points can be calculated by $\displaystyle V(\mathbf{b})-V(\mathbf{a})=-\int_{\mathbf{a}}^{\mathbf{b}}\mathbf{E}\cdot d\mathbf{l}$ if we choose the two points to be on opposite sides of the surface, then as the distance between the two points is reduced, the integral also reduces to zero, since the electric field, although discontinuous at the surface, remains finite there (the jump is only ${\sigma/\epsilon_{0}}$), so the integrand is bounded while the path length shrinks. Thus the potential is always continuous across a surface.
http://mathhelpforum.com/algebra/213170-need-guidance-rearranging-formula-two-brackets-letter-appears.html
# Thread:

1. ## Need guidance with rearranging a formula with two brackets and a letter that appears twice.

Hi guys. Basically, as the title says, can someone guide me through the problem of...

Make x the subject of 5(x-3) = y(4-3x)

Thanks; any help and guidance would be greatly appreciated.

2. ## Re: Need guidance with rearranging a formula with two brackets and a letter that appears twice

Hi MattA147. The real question here is: 'How do you solve this equation explicitly for x?' Changing the subject of a formula means that if you have a formula in the form $y=x+3$ then the 'subject' of the formula is 'y'. Changing the subject to 'x' would yield $x=y-3$.

So in your problem we have $5(x-3)=y(4-3x)$ and we want to make 'x' the subject of the formula. Let's start by getting rid of those parentheses:

$5(x-3)=y(4-3x)\\ \Rightarrow\\5x-15=4y-3yx$

Now let's solve for 'x' by collecting the x-terms on one side and factoring:

$5x-15=4y-3yx\\ \Rightarrow\\5x+3yx=15+4y\\ \Rightarrow\\x(5+3y)=15+4y\\ \Rightarrow\\x=\frac{4y+15}{3y+5}$
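(Side note, not from the original thread: a computer algebra system can confirm the rearrangement, e.g. with sympy:)

```python
from sympy import symbols, Eq, solve

x, y = symbols('x y')
solution = solve(Eq(5*(x - 3), y*(4 - 3*x)), x)
print(solution)  # [(4*y + 15)/(3*y + 5)]
```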
http://www.mashpedia.com/Normal_distribution
# Normal distribution

| | |
| --- | --- |
| Notation | $\mathcal{N}(\mu,\,\sigma^2)$ |
| Parameters | μ ∈ R — mean (location); σ2 > 0 — variance (squared scale) |
| Support | x ∈ R |
| PDF | $\frac{1}{\sqrt{2\pi\sigma^2}}\operatorname{exp}\left\{-\frac{\left(x-\mu\right)^2}{2\sigma^2}\right\}$ |
| CDF | $\frac12\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sqrt{2\sigma^2}}\right)\right]$ |
| Mean | μ |
| Median | μ |
| Mode | μ |
| Variance | $\sigma^2\,$ |
| Skewness | 0 |
| Ex. kurtosis | 0 |
| Entropy | $\frac12 \ln(2 \pi e \, \sigma^2)$ |
| MGF | $\exp\{ \mu t + \frac{1}{2}\sigma^2t^2 \}$ |
| CF | $\exp \{ i\mu t - \frac{1}{2}\sigma^2 t^2 \}$ |
| Fisher information | $\begin{pmatrix}1/\sigma^2&0\\0&1/(2\sigma^4)\end{pmatrix}$ |

[Plots: the probability density function, where the red curve is the standard normal distribution, and the cumulative distribution function.]

In probability theory, the normal (or Gaussian) distribution is a continuous probability distribution, defined by the formula

$f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }.$

The parameter μ in this formula is the mean or expectation of the distribution (and also its median and mode). The parameter σ is its standard deviation; its variance is therefore σ2. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. If μ = 0 and σ = 1, the distribution is called the standard normal distribution or the unit normal distribution, and a random variable with that distribution is a standard normal deviate.

Normal distributions are extremely important in statistics, and are often used in the natural and social sciences for real-valued random variables whose distributions are not known.[1][2] One reason for their popularity is the central limit theorem, which states that, under mild conditions, the mean of a large number of random variables independently drawn from the same distribution is distributed approximately normally, irrespective of the form of the original distribution. Thus, physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have a distribution very close to normal. Another reason is that a large number of results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically, in explicit form, when the relevant variables are normally distributed. The normal distribution is also the only absolutely continuous distribution all of whose cumulants beyond the first two (i.e. other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a given mean and variance.[3][4]

The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution. The normal distribution is also practically zero once the value x lies more than a few standard deviations away from the mean. Therefore, it may not be appropriate when one expects a significant fraction of outliers, values that lie many standard deviations away from the mean; least-squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, one assumes a more heavy-tailed distribution and applies the appropriate robust statistical inference methods. The normal distributions are a subclass of the elliptical distributions. The Gaussian distribution is sometimes informally called the bell curve.
However, there are many other distributions that are bell-shaped (such as Cauchy's, Student's, and logistic). The terms Gaussian function and Gaussian bell curve are also ambiguous since they sometimes refer to multiples of the normal distribution whose integral is not 1; that is, $a \exp(-b(x - c)^2)$ for arbitrary positive constants a, b and c.

## Definition

### Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution, described by this probability density function:

$\phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{- \frac{\scriptscriptstyle 1}{\scriptscriptstyle 2} x^2}.$

The factor $\scriptstyle\ 1/\sqrt{2\pi}$ in this expression ensures that the total area under the curve ϕ(x) is equal to one[proof]. The 1/2 in the exponent ensures that the distribution has unit variance (and therefore also unit standard deviation). This function is symmetric around x=0, where it attains its maximum value $1/\sqrt{2\pi}$, and has inflection points at +1 and −1.

### General normal distribution

Any normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor σ (the standard deviation) and then translated by μ (the mean value), that is

$f(x) = \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right).$

The probability density must be scaled by $1/\sigma$ so that the integral is still 1. If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with expected value μ and standard deviation σ. Conversely, if X is a general normal deviate, then Z = (X − μ)/σ will have a standard normal distribution.

Every normal distribution is the exponential of a quadratic function:

$f(x) = e^{a x^2 + b x + c}$

where a is negative and $c = \frac{b^2}{4a} + \frac12\ln\left(\frac{-a}{\pi}\right)$, so that the total integral is 1. In this form, the mean value μ is −b/(2a), and the variance σ2 is −1/(2a). For the standard normal distribution, a is −1/2, b is zero, and c is $-\ln(2\pi)/2$.

### Notation

The standard Gaussian distribution (with zero mean and unit variance) is often denoted with the Greek letter ϕ (phi).[5] The alternative form of the Greek phi letter, φ, is also used quite often.

The normal distribution is also often denoted by N(μ, σ2).[6] Thus when a random variable X is distributed normally with mean μ and variance σ2, we write

$X\ \sim\ \mathcal{N}(\mu,\,\sigma^2).$

### Alternative parametrizations

Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the deviation σ or the variance σ2. The precision is normally defined as the reciprocal of the variance, 1/σ2.[7] The formula for the distribution then becomes

$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{\frac{-\tau(x-\mu)^2}{2}}.$

This choice is claimed to have advantages in numerical computations when σ is very close to zero, and to simplify formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Occasionally, the precision τ is defined as 1/σ, the reciprocal of the standard deviation, so that

$f(x) = \frac{\tau}{\sqrt{2\pi}}\, e^{\frac{-\tau^2(x-\mu)^2}{2}}.$

### Alternative definitions

Authors may differ also on which normal distribution should be called the "standard" one.
Gauss himself defined the standard normal as having variance σ2 = 1/2, that is

$f(x) = \frac{1}{\sqrt\pi}\,e^{-x^2}$

Stephen Stigler[8] goes even further, defining the standard normal with variance σ2 = 1/2π:

$f(x) = e^{-\pi x^2}$

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, the fact that the pdf has unit height at zero, and simple approximate formulas for the quantiles of the distribution.

## Properties

### Symmetries and derivatives

The normal distribution f(x), with any mean μ and any positive deviation σ, has the following properties:

• It is symmetric around the point x = μ, which is at the same time the mode, the median and the mean of the distribution.[9]
• It is unimodal: its first derivative is positive for x < μ, negative for x > μ, and zero only at x = μ.
• It has two inflection points (where the second derivative of f is zero and changes sign), located one standard deviation away from the mean, namely at x = μ − σ and x = μ + σ.[9]
• It is log-concave.[9]
• It is infinitely differentiable, indeed supersmooth of order 2.[10]

Furthermore, the standard normal distribution ϕ (with μ = 0 and σ = 1) also has the following properties:

• Its first derivative ϕ′(x) is −xϕ(x).
• Its second derivative ϕ′′(x) is (x2 − 1)ϕ(x).
• More generally, its n-th derivative ϕ(n)(x) is $(-1)^n H_n(x)\phi(x)$, where Hn is the Hermite polynomial of order n.[11]

### Moments

The plain and absolute moments of a variable X are the expected values of Xp and |X|p, respectively. If the expected value μ of X is zero, these parameters are called central moments. Usually we are interested only in moments with integer order p.

If X has a normal distribution, these moments exist and are finite for any p whose real part is greater than −1. For any non-negative integer p, the plain central moments are

$\mathrm{E}\left[X^p\right] = \begin{cases} 0 & \text{if }p\text{ is odd,} \\ \sigma^p\,(p-1)!! & \text{if }p\text{ is even.} \end{cases}$

Here n!! denotes the double factorial, that is, the product of every odd number from n to 1.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer p,

$\operatorname{E}\left[|X|^p\right] = \sigma^p\,(p-1)!! \cdot \left.\begin{cases} \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\ 1 & \text{if }p\text{ is even} \end{cases}\right\} = \sigma^p \cdot \frac{2^{\frac{p}{2}}\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}$

The last formula is valid also for any non-integer p > −1. When the mean μ is not zero, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions 1F1 and U.[citation needed]

$\begin{align} \operatorname{E} \left[ X^p \right] &=\sigma^p \cdot (-i\sqrt{2}\sgn\mu)^p \; U\left( {-\frac{1}{2}p},\, \frac{1}{2},\, -\frac{1}{2}(\mu/\sigma)^2 \right), \\ \operatorname{E} \left[ |X|^p \right] &=\sigma^p \cdot 2^{\frac p 2} \frac {\Gamma\left(\frac{1+p}{2}\right)}{\sqrt\pi}\; _1F_1\left( {-\frac{1}{2}p},\, \frac{1}{2},\, -\frac{1}{2}(\mu/\sigma)^2 \right). \end{align}$

These expressions remain valid even if p is not an integer. See also generalized Hermite polynomials. The first eight plain (non-central) and central moments are tabulated below.
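(A quick numerical check, not part of the original article: the even-order central moments σp (p−1)!! can be verified by direct quadrature against the density; the results agree with the central-moment column of the table below. The parameter values are arbitrary.)

```python
import numpy as np

mu, sigma = 0.7, 1.3
x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 400001)
pdf = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
dx = x[1] - x[0]

def double_factorial(n):
    # product of every odd number from n down to 1; empty product is 1
    return 1 if n <= 0 else n * double_factorial(n - 2)

for p in range(1, 9):
    numeric = np.sum((x - mu) ** p * pdf) * dx          # p-th central moment
    exact = 0.0 if p % 2 else sigma ** p * double_factorial(p - 1)
    print(p, numeric, exact)  # numeric matches exact (0 for odd p)
```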
| Order | Non-central moment | Central moment |
| --- | --- | --- |
| 1 | μ | 0 |
| 2 | μ2 + σ2 | σ2 |
| 3 | μ3 + 3μσ2 | 0 |
| 4 | μ4 + 6μ2σ2 + 3σ4 | 3σ4 |
| 5 | μ5 + 10μ3σ2 + 15μσ4 | 0 |
| 6 | μ6 + 15μ4σ2 + 45μ2σ4 + 15σ6 | 15σ6 |
| 7 | μ7 + 21μ5σ2 + 105μ3σ4 + 105μσ6 | 0 |
| 8 | μ8 + 28μ6σ2 + 210μ4σ4 + 420μ2σ6 + 105σ8 | 105σ8 |

### Fourier transform and characteristic function

The Fourier transform of a normal distribution f with mean μ and deviation σ is[12]

$\hat\phi(t) = \int_{-\infty}^\infty\! f(x)e^{itx} dx = e^{\mathbf{i}\mu t} e^{- \frac12 (\sigma t)^2}$

where i is the imaginary unit. If the mean μ is zero, the first factor is 1, and the Fourier transform is also a normal distribution on the frequency domain, with mean 0 and standard deviation 1/σ. In particular, the standard normal distribution ϕ (with μ=0 and σ=1) is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable X is called the characteristic function of that variable, and can be defined as the expected value of eitX, as a function of the real variable t (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-value parameter t.[13]

### Moment and cumulant generating functions

The moment generating function of a real random variable X is defined as the expected value of etX, as a function of the real parameter t. For a normal distribution with mean μ and deviation σ, the moment generating function exists and is equal to

$M(t) = \hat \phi(-\mathbf{i}t) = e^{ \mu t} e^{\frac12 \sigma^2 t^2 }$

The cumulant generating function is the logarithm of the moment generating function, namely

$g(t) = \ln M(t) = \mu t + \frac{1}{2} \sigma^2 t^2$

Since this is a quadratic polynomial in t, only the first two cumulants are nonzero, namely the mean μ and the variance σ2.

## Cumulative distribution

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$ (phi), is the integral

$\Phi(x)\; = \;\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt$

In statistics one often uses the related error function, or erf(x), defined as the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range $[-x, x]$; that is

$\operatorname{erf}(x)\; =\; \frac{1}{\sqrt{\pi}} \int_{-x}^x e^{-t^2} \, dt$

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. They are closely related, namely

$\Phi(x)\; =\; \frac12\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]$

For a generic normal distribution f with mean μ and deviation σ, the cumulative distribution function is

$F(x)\;=\;\Phi\left(\frac{x-\mu}{\sigma}\right)\;=\; \frac12\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$

The complement of the standard normal CDF, $Q(x) = 1 - \Phi(x)$, is often called the Q-function, especially in engineering texts.[14][15] It gives the probability that the value of a standard normal random variable X will exceed x. Other definitions of the Q-function, all of which are simple transformations of $\Phi$, are also used occasionally.[16]

The graph of the standard normal CDF $\Phi$ has 2-fold rotational symmetry around the point (0,1/2); that is, $\Phi(-x) = 1 - \Phi(x)$. Its antiderivative (indefinite integral) is $\int \Phi(x)\, dx = x\Phi(x) + \phi(x)$.
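(A short computational aside, not part of the original article: since most standard libraries ship erf, the relation above gives a one-line normal CDF; the loop also reproduces the erf(n/√2) values tabulated in the next section.)

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# For N(mu, sigma^2), F(x) = Phi((x - mu) / sigma).
# Mass within n standard deviations of the mean is erf(n / sqrt(2)):
for n in range(1, 7):
    print(n, Phi(n) - Phi(-n))  # 0.6827, 0.9545, 0.9973, ... (cf. next section)
```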
### Standard deviation and tolerance intervals

Main article: Tolerance interval

[Figure: dark blue is less than one standard deviation away from the mean. For the normal distribution, this accounts for about 68% of the set, while two standard deviations from the mean (medium and dark blue) account for about 95%, and three standard deviations (light, medium, and dark blue) account for about 99.7%.]

About 68% of values drawn from a normal distribution are within one standard deviation σ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 (empirical) rule, or the 3-sigma rule.

More precisely, the probability that a normal deviate lies between μ − nσ and μ + nσ is given by

$F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) = \mathrm{erf}\left(\frac{n}{\sqrt{2}}\right).$

To 12 decimal places, the values for n = 1, 2, ..., 6 are:[17]

| n | F(μ+nσ) − F(μ−nσ) | i.e. 1 minus ... | or 1 in ... |
| --- | --- | --- | --- |
| 1 | 0.682689492137 | 0.317310507863 | 3.15148718753 |
| 2 | 0.954499736104 | 0.045500263896 | 21.9778945080 |
| 3 | 0.997300203937 | 0.002699796063 | 370.398347345 |
| 4 | 0.999936657516 | 0.000063342484 | 15,787.1927673 |
| 5 | 0.999999426697 | 0.000000573303 | 1,744,277.89362 |
| 6 | 0.999999998027 | 0.000000001973 | 506,797,345.897 |

### Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

$\Phi^{-1}(p)\; =\; \sqrt2\;\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).$

For a normal random variable with mean μ and variance σ2, the quantile function is

$F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt2\,\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).$

The quantile $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as zp. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable X will exceed μ + σzp with probability 1−p, and will lie outside the interval μ ± σzp with probability 2(1−p). In particular, the quantile z0.975 is 1.96; therefore a normal random variable will lie outside the interval μ ± 1.96σ in only 5% of cases.

The following table gives the multiple n of σ such that X will lie in the range μ ± nσ with a specified probability p. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions:[18]

| F(μ+nσ) − F(μ−nσ) | n |
| --- | --- |
| 0.80 | 1.281551565545 |
| 0.90 | 1.644853626951 |
| 0.95 | 1.959963984540 |
| 0.98 | 2.326347874041 |
| 0.99 | 2.575829303549 |
| 0.995 | 2.807033768344 |
| 0.998 | 3.090232306168 |
| 0.999 | 3.290526731492 |
| 0.9999 | 3.890591886413 |
| 0.99999 | 4.417173413469 |
| 0.999999 | 4.891638475699 |
| 0.9999999 | 5.326723886384 |
| 0.99999999 | 5.730728868236 |
| 0.999999999 | 6.109410204869 |

## Zero-variance limit

In the limit when σ tends to zero, the probability density f(x) eventually tends to zero at any x ≠ μ, but grows without limit if x = μ, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function when σ = 0. However, one can define the normal distribution with zero variance as a generalized function; specifically, as Dirac's "delta function" δ translated by the mean μ, that is f(x) = δ(x−μ).
Its CDF is then the Heaviside step function translated by the mean μ, namely

$F(x) = \begin{cases} 0 & \text{if }x < \mu \\ 1 & \text{if }x \geq \mu \end{cases}$

## The central limit theorem

[Figures: as the number of discrete events increases, the distribution begins to resemble a normal distribution. Comparison of probability density functions p(k) for the sum of n fair 6-sided dice, showing their convergence to a normal distribution with increasing n, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]

Main article: Central limit theorem

The central limit theorem states that under certain (fairly common) conditions, the sum of a large number of random variables will have an approximately normal distribution. More specifically, suppose that X1, …, Xn are independent and identically distributed random variables, all with the same arbitrary distribution, with zero mean and variance σ2, and that Z is their mean scaled by $\sqrt{n}$, that is,

$Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)$

Then, as n increases, the probability distribution of Z will tend to the normal distribution with zero mean and variance σ2. The theorem can be extended to variables Xi that are not independent and/or not identically distributed, provided that certain constraints are placed on the degree of dependence and the moments of the distributions.

A great number of test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them; even more estimators can be represented as sums of random variables through the use of influence functions. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

• The binomial distribution B(n, p) is approximately normal with mean np and variance np(1−p) for large n and for p not too close to zero or one.
• The Poisson distribution with parameter λ is approximately normal with mean λ and variance λ, for large values of λ.[citation needed]
• The chi-squared distribution χ2(k) is approximately normal with mean k and variance 2k, for large k.
• The Student's t-distribution t(ν) is approximately normal with mean 0 and variance 1 when ν is large.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution. A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.

## Operations on normal deviates

The family of normal distributions is closed under linear transformations. That is, if X is normally distributed with mean μ and deviation σ, then the variable Y = aX + b, for any real numbers a ≠ 0 and b, is also normally distributed, with mean aμ + b and deviation |a|σ.

Also if X1 and X2 are two independent normal random variables, with means μ1, μ2 and standard deviations σ1, σ2, then their sum X1 + X2 will also be normally distributed,[proof] with mean μ1 + μ2 and variance $\sigma_1^2 + \sigma_2^2$.
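(A Monte Carlo sanity check of the closure-under-addition property just stated; not part of the original article, and the parameter values are arbitrary:)

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, s1, mu2, s2 = 1.0, 2.0, -3.0, 0.5
x1 = rng.normal(mu1, s1, size=1_000_000)
x2 = rng.normal(mu2, s2, size=1_000_000)
total = x1 + x2
# Expected: mean mu1 + mu2, variance s1^2 + s2^2
print(total.mean(), mu1 + mu2)     # ~ -2.0
print(total.var(), s1**2 + s2**2)  # ~ 4.25
```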
In particular, if X and Y are independent normal deviates with zero mean and variance σ2, then X + Y and X − Y are also independent and normally distributed, with zero mean and variance 2σ2. This is a special case of the polarization identity.[19]

Also, if X1, X2 are two independent normal deviates with mean μ and deviation σ, and a, b are arbitrary real numbers, then the variable

$X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2+b^2}} + \mu$

is also normally distributed with mean μ and deviation σ. It follows that the normal distribution is stable (with exponent α = 2). More generally, any linear combination of independent normal deviates is a normal deviate.

### Infinite divisibility and Cramér's theorem

For any positive integer n, any normal distribution with mean μ and variance σ2 is the distribution of the sum of n independent normal deviates, each with mean μ/n and variance σ2/n. This property is called infinite divisibility.[20]

Conversely, if X1 and X2 are independent random variables and their sum X1 + X2 has a normal distribution, then both X1 and X2 must be normal deviates.[21] This result is known as Cramér's decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.[22]

### Bernstein's theorem

Bernstein's theorem states that if X and Y are independent and X + Y and X − Y are also independent, then both X and Y must necessarily have normal distributions.[23][24]

More generally, if X1, ..., Xn are independent random variables, then two distinct linear combinations $\sum a_k X_k$ and $\sum b_k X_k$ will be independent if and only if all Xk's are normal and $\sum a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of Xk.[23]

## Other properties

1. If the characteristic function φX of some random variable X is of the form φX(t) = eQ(t), where Q(t) is a polynomial, then the Marcinkiewicz theorem (named after Józef Marcinkiewicz) asserts that Q can be at most a quadratic polynomial, and therefore X is a normal random variable.[22] The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants.

2. If X and Y are jointly normal and uncorrelated, then they are independent. The requirement that X and Y should be jointly normal is essential; without it the property does not hold.[citation needed][proof] For non-normal random variables uncorrelatedness does not imply independence.

3. The Kullback–Leibler divergence of one normal distribution $X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$ from another $X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$ is given by:[25]

$D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} \,+\, \frac12\left(\, \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \,\right)\ .$

The Hellinger distance between the same distributions is equal to

$H^2(X_1,X_2) = 1 \,-\, \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \; e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}}\ .$

4. The Fisher information matrix for a normal distribution is diagonal and takes the form

$\mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}$

5. The normal distribution belongs to an exponential family with natural parameters $\scriptstyle\theta_1=\frac{\mu}{\sigma^2}$ and $\scriptstyle\theta_2=\frac{-1}{2\sigma^2}$, and natural statistics x and x2.
The dual, expectation parameters for the normal distribution are η1 = μ and η2 = μ2 + σ2.

6. The conjugate prior of the mean of a normal distribution is another normal distribution.[26] Specifically, if x1, …, xn are iid N(μ, σ2) and the prior is $\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)$, then the posterior distribution for the estimator of μ will be

$\mu | x_1,\ldots,x_n\ \sim\ \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\ \left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{\!-1} \right)$

7. Of all probability distributions over the reals with mean μ and variance σ2, the normal distribution N(μ, σ2) is the one with the maximum entropy.[27]

8. The family of normal distributions forms a manifold with constant curvature −1. The same family is flat with respect to the (±1)-connections ∇(e) and ∇(m).[28]

## Related distributions

### Operations on a single random variable

If X is distributed normally with mean μ and variance σ2, then

• The exponential of X is distributed log-normally: eX ~ ln(N (μ, σ2)).
• The absolute value of X has a folded normal distribution: |X| ~ Nf (μ, σ2). If μ = 0 this is known as the half-normal distribution.
• The square of X/σ has the noncentral chi-squared distribution with one degree of freedom: $X^2/\sigma^2 \sim \chi_1^2(\mu^2/\sigma^2)$. If μ = 0, the distribution is called simply chi-squared.
• The distribution of the variable X restricted to an interval [a, b] is called the truncated normal distribution.
• $(X - \mu)^{-2}$ has a Lévy distribution with location 0 and scale $\sigma^{-2}$.

### Combination of two independent random variables

If X1 and X2 are two independent standard normal random variables with mean 0 and variance 1, then

• Their sum and difference are distributed normally with mean zero and variance two: X1 ± X2 ∼ N(0, 2).
• Their product Z = X1·X2 follows the "product-normal" distribution[29] with density function $f_Z(z) = \pi^{-1} K_0(|z|)$, where K0 is the modified Bessel function of the second kind. This distribution is symmetric around zero, unbounded at z = 0, and has the characteristic function $\varphi_Z(t) = (1 + t^2)^{-1/2}$.
• Their ratio follows the standard Cauchy distribution: X1 ÷ X2 ∼ Cauchy(0, 1).
• Their Euclidean norm $\scriptstyle\sqrt{X_1^2\,+\,X_2^2}$ has the Rayleigh distribution, also known as the chi distribution with 2 degrees of freedom.

### Combination of two or more independent random variables

• If X1, X2, …, Xn are independent standard normal random variables, then the sum of their squares has the chi-squared distribution with n degrees of freedom:

$X_1^2 + \cdots + X_n^2\ \sim\ \chi_n^2.$
• If X1, X2, …, Xn are independent normally distributed random variables with means μ and variances σ2, then their sample mean is independent of the sample standard deviation,[30] which can be demonstrated using Basu's theorem or Cochran's theorem.[31] The ratio of these two quantities will have the Student's t-distribution with n − 1 degrees of freedom:

$t = \frac{\overline X - \mu}{S/\sqrt{n}} = \frac{\frac{1}{n}(X_1+\cdots+X_n) - \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline X)^2+\cdots+(X_n-\overline X)^2\right]}} \ \sim\ t_{n-1}.$

• If X1, …, Xn, Y1, …, Ym are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution with (n, m) degrees of freedom:[32]

$F = \frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+Y_m^2\right)/m}\ \sim\ F_{n,\,m}.$

### Operations on the density function

The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function.

### Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case. All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.

• The multivariate normal distribution describes the Gaussian law in the k-dimensional Euclidean space. A vector X ∈ Rk is multivariate-normally distributed if any linear combination of its components $\sum_{j=1}^k a_j X_j$ has a (univariate) normal distribution. The variance of X is a k×k symmetric positive-definite matrix V. The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the k = 2 case are ellipses and in the case of arbitrary k are ellipsoids.
• Rectified Gaussian distribution: a rectified version of the normal distribution with all the negative elements reset to 0.
• Complex normal distribution deals with complex normal vectors. A complex vector X ∈ Ck is said to be normal if both its real and imaginary components jointly possess a 2k-dimensional multivariate normal distribution. The variance-covariance structure of X is described by two matrices: the variance matrix Γ, and the relation matrix C.
• Matrix normal distribution describes the case of normally distributed matrices.
• Gaussian processes are the normally distributed stochastic processes. These can be viewed as elements of some infinite-dimensional Hilbert space H, and thus are the analogues of multivariate normal vectors for the case k = ∞. A random element h ∈ H is said to be normal if for any constant a ∈ H the scalar product (a, h) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear covariance operator K: H → H. Several Gaussian processes became popular enough to have their own names, such as the Brownian motion, the Brownian bridge, and the Ornstein–Uhlenbeck process.
• Gaussian q-distribution is an abstract mathematical construction that represents a "q-analogue" of the normal distribution.
• The q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above.
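(A computational aside, not part of the original article: numpy can sample the multivariate normal mentioned above directly, and the defining property, that every linear combination of components is univariate normal, can be checked on the first two moments. The numbers are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([0.0, 2.0])
cov = np.array([[2.0, 0.6],
                [0.6, 1.0]])  # symmetric positive-definite covariance matrix
draws = rng.multivariate_normal(mean, cov, size=500_000)

# Check the first two moments of the linear combination 3*X1 - X2:
y = 3 * draws[:, 0] - draws[:, 1]
print(y.mean())  # ~ 3*0 - 2 = -2
print(y.var())   # ~ 9*2 + 1 - 2*3*0.6 = 15.4
```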
One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

• Pearson distribution: a four-parametric family of probability distributions that extend the normal law to include different skewness and kurtosis values.

## Normality tests

Main article: Normality tests

Normality tests assess the likelihood that the given data set {x1, …, xn} comes from a normal distribution. Typically the null hypothesis H0 is that the observations are distributed normally with unspecified mean μ and variance σ2, versus the alternative Ha that the distribution is arbitrary. A great number of tests (over 40) have been devised for this problem; the more prominent of them are outlined below:

• "Visual" tests are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
• Q-Q plot: a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it's a plot of points of the form (Φ−1(pk), x(k)), where the plotting points pk are equal to pk = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.
• P-P plot: similar to the Q-Q plot, but used much less frequently. This method consists of plotting the points (Φ(z(k)), pk), where $\scriptstyle z_{(k)} = (x_{(k)}-\hat\mu)/\hat\sigma$. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
• Shapiro–Wilk test employs the fact that the line in the Q-Q plot has the slope of σ. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.
• Normal probability plot (rankit plot)
• Moment tests: Jarque–Bera test, D'Agostino's K-squared test
• Empirical distribution function tests:
  • Lilliefors test (an adaptation of the Kolmogorov–Smirnov test)
  • Anderson–Darling test

## Estimation of parameters

See also: Standard error of the mean
See also: Standard deviation#Estimation
See also: Variance#Estimation

It is often the case that we don't know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample (x1, …, xn) from a normal N(μ, σ2) population we would like to learn the approximate values of the parameters μ and σ2. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

$\ln\mathcal{L}(\mu,\sigma^2) = \sum_{i=1}^n \ln f(x_i;\,\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2.$

Taking derivatives with respect to μ and σ2 and solving the resulting system of first order conditions yields the maximum likelihood estimates:

$\hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2.$

The estimator $\scriptstyle\hat\mu$ is called the sample mean, since it is the arithmetic mean of all observations.
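(A small sketch, not part of the original article, computing the maximum likelihood estimates just derived, together with the Bessel-corrected variant discussed below; the data are simulated:)

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=10_000)   # sample from N(5, 4)

mu_hat = x.mean()                        # MLE of mu: the sample mean
sigma2_hat = ((x - mu_hat) ** 2).mean()  # MLE of sigma^2 (divides by n)
s2 = sigma2_hat * len(x) / (len(x) - 1)  # Bessel-corrected sample variance

print(mu_hat, sigma2_hat, s2)  # ~5.0, ~4.0, ~4.0
```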
The statistic $\scriptstyle\overline{x}$ is complete and sufficient for μ, and therefore by the Lehmann–Scheffé theorem, $\scriptstyle\hat\mu$ is the uniformly minimum variance unbiased (UMVU) estimator.[33] In finite samples it is distributed normally:

$\hat\mu \ \sim\ \mathcal{N}(\mu,\,\,\sigma^2\!\!\;/n).$

The variance of this estimator is equal to the μμ-element of the inverse Fisher information matrix $\scriptstyle\mathcal{I}^{-1}$. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\scriptstyle\hat\mu$ is proportional to $\scriptstyle1/\sqrt{n}$; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, $\scriptstyle\hat\mu$ is consistent, that is, it converges in probability to μ as n → ∞. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

$\sqrt{n}(\hat\mu-\mu) \ \xrightarrow{d}\ \mathcal{N}(0,\,\sigma^2).$

The estimator $\scriptstyle\hat\sigma^2$ is called the sample variance, since it is the variance of the sample (x1, …, xn). In practice, another estimator is often used instead of $\scriptstyle\hat\sigma^2$. This other estimator is denoted s2, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root s is called the sample standard deviation. The estimator s2 differs from $\scriptstyle\hat\sigma^2$ by having (n − 1) instead of n in the denominator (the so-called Bessel's correction):

$s^2 = \frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2.$

The difference between s2 and $\scriptstyle\hat\sigma^2$ becomes negligibly small for large n's. In finite samples however, the motivation behind the use of s2 is that it is an unbiased estimator of the underlying parameter σ2, whereas $\scriptstyle\hat\sigma^2$ is biased. Also, by the Lehmann–Scheffé theorem the estimator s2 is uniformly minimum variance unbiased (UMVU),[33] which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator $\scriptstyle\hat\sigma^2$ is "better" than s2 in terms of the mean squared error (MSE) criterion. In finite samples both s2 and $\scriptstyle\hat\sigma^2$ have a scaled chi-squared distribution with (n − 1) degrees of freedom:

$s^2 \ \sim\ \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad \hat\sigma^2 \ \sim\ \frac{\sigma^2}{n} \cdot \chi^2_{n-1}\ .$

The first of these expressions shows that the variance of s2 is equal to 2σ4/(n−1), which is slightly greater than the σσ-element of the inverse Fisher information matrix $\scriptstyle\mathcal{I}^{-1}$. Thus, s2 is not an efficient estimator for σ2, and moreover, since s2 is UMVU, we can conclude that a finite-sample efficient estimator for σ2 does not exist.

Applying the asymptotic theory, both estimators s2 and $\scriptstyle\hat\sigma^2$ are consistent, that is, they converge in probability to σ2 as the sample size n → ∞. The two estimators are also both asymptotically normal:

$\sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2)\ \xrightarrow{d}\ \mathcal{N}(0,\,2\sigma^4).$

In particular, both estimators are asymptotically efficient for σ2.
By Cochran's theorem, for normal distributions the sample mean $\scriptstyle\hat\mu$ and the sample variance s2 are independent, which means there can be no gain in considering their joint distribution. There is also a reverse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between $\scriptstyle\hat\mu$ and s can be employed to construct the so-called t-statistic:

$t = \frac{\hat\mu-\mu}{s/\sqrt{n}} = \frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}}\ \sim\ t_{n-1}$

This quantity t has the Student's t-distribution with (n − 1) degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for μ;[34] similarly, inverting the χ2 distribution of the statistic s2 will give us the confidence interval for σ2:[35]

$\begin{align} & \mu \in \left[\, \hat\mu + t_{n-1,\alpha/2}\, \frac{1}{\sqrt{n}}s,\ \ \hat\mu + t_{n-1,1-\alpha/2}\,\frac{1}{\sqrt{n}}s \,\right] \approx \left[\, \hat\mu - |z_{\alpha/2}|\frac{1}{\sqrt n}s,\ \ \hat\mu + |z_{\alpha/2}|\frac{1}{\sqrt n}s \,\right], \\ & \sigma^2 \in \left[\, \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\ \ \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \,\right] \approx \left[\, s^2 - |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2,\ \ s^2 + |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2 \,\right], \end{align}$

where $t_{k,p}$ and $\chi^2_{k,p}$ are the pth quantiles of the t- and χ2-distributions respectively. These confidence intervals are of the level 1 − α, meaning that the true values μ and σ2 fall outside of these intervals with probability α. In practice people usually take α = 5%, resulting in the 95% confidence intervals. The approximate formulas in the display above were derived from the asymptotic distributions of $\scriptstyle\hat\mu$ and s2. The approximate formulas become valid for large values of n, and are more convenient for manual calculation since the standard normal quantiles zα/2 do not depend on n. In particular, the most popular value α = 5% results in |z0.025| = 1.96.

## Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

• Either the mean, or the variance, or neither, may be considered a fixed quantity.
• When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
• Both univariate and multivariate cases need to be considered.
• Either conjugate or improper prior distributions may be placed on the unknown variables.
• An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. The resulting analysis is similar to the basic cases of independent identically distributed data, but more complex.

The formulas for the non-linear-regression cases are summarized in the conjugate prior article.

### The sum of two quadratics

#### Scalar form

The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious.
$a(y-x)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2$ This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the complex constant factors attached to some of the terms: 1. The factor $\frac{ay+bz}{a+b}$ has the form of a weighted average of y and z. 2. $\frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} + b^{-1})^{-1}.$ This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that $\frac{ab}{a+b}$ is one-half the harmonic mean of a and b. #### Vector form A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k, and A and B are symmetric, invertible matrices of size $k\times k$, then $(\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) = (\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z})$ where $\mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} + \mathbf{B}\mathbf{z})$ Note that the form x′ A x is called a quadratic form and is a scalar: $\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j}a_{ij} x_i x_j$ In other words, it sums up all possible combinations of products of pairs of elements from x, with a separate coefficient for each. In addition, since $x_i x_j = x_j x_i$, only the sum $a_{ij} + a_{ji}$ matters for any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric. Furthermore, if A is symmetric, then the form $\mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}$ . ### The sum of differences from the mean Another useful formula is as follows: $\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2$ where $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.$ ### With known variance For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known variance σ2, the conjugate prior distribution is also normally distributed. This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ2. Then if $x \sim \mathcal{N}(\mu, \tau)$ and $\mu \sim \mathcal{N}(\mu_0, \tau_0),$ we proceed as follows. First, the likelihood function is (using the formula above for the sum of differences from the mean): $\begin{align} p(\mathbf{X}|\mu,\tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right]. 
\end{align}$

Then, we proceed as follows:

$\begin{align} p(\mu|\mathbf{X}) &\propto p(\mathbf{X}|\mu) p(\mu) \\ & = \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\ &\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2 \right)\right) \\ &= \exp\left(-\frac{1}{2}\left[(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 + \frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right]\right) \\ &\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right) \end{align}$

In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving μ. The result is the kernel of a normal distribution, with mean $\frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}$ and precision $n\tau + \tau_0$, i.e.

$p(\mu|\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, n\tau + \tau_0\right)$

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

$\begin{align} \tau_0' &= \tau_0 + n\tau \\ \mu_0' &= \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align}$

That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ2/n) and mean of values $\bar{x}$, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas

$\begin{align} {\sigma^2_0}' &= \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align}$
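(The update equations above translate directly into code; a minimal sketch, not part of the original article, with the function name mine:)

```python
def update_known_variance(tau0, mu0, xs, tau):
    """Posterior (precision, mean) for the mean of a normal with known
    precision tau, given prior mu ~ N(mu0, 1/tau0) and data xs."""
    n = len(xs)
    xbar = sum(xs) / n
    tau_post = tau0 + n * tau                             # precisions add
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post    # precision-weighted mean
    return tau_post, mu_post

# A vague prior (small tau0) is pulled toward the data mean:
print(update_known_variance(tau0=0.01, mu0=0.0, xs=[2.1, 1.9, 2.3, 2.0], tau=4.0))
```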
### With known mean

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known mean μ, the conjugate prior of the variance is an inverse gamma distribution or, equivalently, a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. The use of the inverse gamma is more common, but the scaled inverse chi-squared is more convenient, so we use it in the following derivation. The prior for σ2 is as follows:

$p(\sigma^2|\nu_0,\sigma_0^2) = \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \propto \frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}$

The likelihood function from above, written in terms of the variance, is:

$\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{S}{2\sigma^2}\right] \end{align}$

where $S = \sum_{i=1}^n (x_i-\mu)^2.$

Then:

$\begin{align} p(\sigma^2|\mathbf{X}) &\propto p(\mathbf{X}|\sigma^2) p(\sigma^2) \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{S}{2\sigma^2}\right] \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \\ &\propto \left(\frac{1}{\sigma^2}\right)^{\frac{n}{2}} \frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \\ &= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}} \exp\left[-\frac{\nu_0 \sigma_0^2 + S}{2\sigma^2}\right] \end{align}$

This is also a scaled inverse chi-squared distribution, where

$\begin{align} \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2 \end{align}$

or equivalently

$\begin{align} \nu_0' &= \nu_0 + n \\ {\sigma_0^2}' &= \frac{\nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0+n} \end{align}$

Reparameterizing in terms of an inverse gamma distribution, the result is:

$\begin{align} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta + \frac{\sum_{i=1}^n (x_i-\mu)^2}{2} \end{align}$
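(Similarly, a minimal sketch of the known-mean update just derived; not part of the original article, with the function name mine:)

```python
def update_known_mean(nu0, sigma0_sq, xs, mu):
    """Posterior hyperparameters of the scaled inverse chi-squared prior
    on sigma^2 when the mean mu is known."""
    n = len(xs)
    S = sum((x - mu) ** 2 for x in xs)          # sum of squared deviations
    nu_post = nu0 + n                           # degrees of freedom add
    sigma_sq_post = (nu0 * sigma0_sq + S) / nu_post
    return nu_post, sigma_sq_post

print(update_known_mean(nu0=1.0, sigma0_sq=1.0, xs=[2.1, 1.9, 2.3, 2.0], mu=2.0))
```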
### With unknown mean and variance

For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with unknown mean μ and variance $\sigma^2$, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:

1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and sum of squared deviations.
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: as the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a conditional prior of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
6. This leads immediately to the normal-inverse-gamma distribution, which is defined as the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, conditional on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

$\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \end{align}$

The update equations can be derived, and look as follows:

$\begin{align} \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \\ \mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\ n_0' &= n_0 + n \\ \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2 \end{align}$

The respective numbers of pseudo-observations are simply incremented by the number of actual observations. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for $\nu_0'{\sigma_0^2}'$ is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean. A short numerical sketch of these updates follows; the proof comes after it.
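A minimal Python transcription of these four update equations (our own illustration, with hypothetical names):

```python
import numpy as np

def posterior_normal_inverse_gamma(x, mu0, n0, nu0, sigma2_0):
    """Conjugate update for unknown mean and variance.

    Prior: mu | sigma^2 ~ N(mu0, sigma^2 / n0) and
           sigma^2 ~ Scaled-Inv-Chi^2(nu0, sigma2_0).
    Returns the posterior hyperparameters (mu0', n0', nu0', sigma2_0').
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = float(x.mean())
    S = float(np.sum((x - xbar) ** 2))   # squared deviations about the data mean
    n0_post = n0 + n
    nu0_post = nu0 + n
    mu0_post = (n0 * mu0 + n * xbar) / n0_post
    # the last term is the "interaction term" for the prior-mean/data-mean discrepancy
    sigma2_post = (nu0 * sigma2_0 + S
                   + (n0 * n / n0_post) * (mu0 - xbar) ** 2) / nu0_post
    return mu0_post, n0_post, nu0_post, sigma2_post
```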
Proof is as follows.

[Proof] The prior distributions are

$\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) = \frac{1}{\sqrt{2\pi\frac{\sigma^2}{n_0}}} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ &\propto (\sigma^2)^{-1/2} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \\ &= \frac{(\sigma_0^2\nu_0/2)^{\nu_0/2}}{\Gamma(\nu_0/2)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\nu_0/2}} \\ &\propto {(\sigma^2)^{-(1+\nu_0/2)}} \exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \end{align}$

Therefore, the joint prior is

$\begin{align} p(\mu,\sigma^2; \mu_0, n_0, \nu_0,\sigma_0^2) &= p(\mu|\sigma^2; \mu_0, n_0)\,p(\sigma^2; \nu_0,\sigma_0^2) \\ &\propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] \end{align}$

The likelihood function from the section above with known variance is:

$\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i -\mu)^2\right)\right] \end{align}$

Rewriting it in terms of the sum of squared deviations about the sample mean, we get:

$\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \\ &\propto (\sigma^2)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \end{align}$

where $S = \sum_{i=1}^n(x_i-\bar{x})^2.$ Therefore, the posterior is (dropping the hyperparameters as conditioning factors):

$\begin{align} p(\mu,\sigma^2|\mathbf{X}) & \propto p(\mu,\sigma^2) \, p(\mathbf{X}|\mu,\sigma^2) \\ & \propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] (\sigma^2)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + n_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2 + (n_0+n)\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right)\right] \\ & \propto (\sigma^2)^{-1/2} \exp\left[-\frac{n_0+n}{2\sigma^2}\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right] \\ & \quad\times (\sigma^2)^{-(\nu_0/2+n/2+1)} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right] \\ & = \mathcal{N}_{\mu|\sigma^2}\left(\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \frac{\sigma^2}{n_0+n}\right) \cdot {\rm IG}_{\sigma^2}\left(\frac12(\nu_0+n), \frac12\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right). \end{align}$

In other words, the posterior distribution has the form of a product of a normal distribution over $p(\mu\mid\sigma^2)$ times an inverse gamma distribution over $p(\sigma^2)$, with parameters that are the same as the update equations above.

## Occurrence

The occurrence of the normal distribution in practical problems can be loosely classified into three categories:

1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the central limit theorem; and
3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance.
### Exact normality

[Figure: The ground state of a quantum harmonic oscillator has the Gaussian distribution.]

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:

• Velocities of the molecules in an ideal gas. More generally, velocities of the particles in any system in thermodynamic equilibrium will have a normal distribution, due to the maximum entropy principle.
• Probability density function of a ground state in a quantum harmonic oscillator.
• The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is, its probability distribution is the Dirac delta function), then after time t its location is described by a normal distribution with variance t, which satisfies the diffusion equation $\frac{\partial}{\partial t} f(x,t) = \frac{1}{2} \frac{\partial^2}{\partial x^2} f(x,t)$. If the initial location is given by a certain density function g(x), then the density at time t is the convolution of g and the normal PDF.

### Approximate normality

Approximately normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by a large number of small effects acting additively and independently, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

• In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as:
  • Binomial random variables, associated with binary response variables;
  • Poisson random variables, associated with rare events.
• Thermal light has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

### Assumed normality

[Figure: Histogram of sepal widths for Iris versicolor from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.]

> I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account for its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

There are statistical methods to empirically test that assumption; see the above Normality tests section.

• In biology, the logarithms of various variables tend to have a normal distribution; that is, the variables themselves tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
  • Measures of size of living tissue (length, height, skin area, weight);[36]
  • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
  • Certain physiological measurements, such as blood pressure of adult humans.
• In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest, not like simple interest, and so are multiplicative). Some mathematicians such as Benoît Mandelbrot have argued that log-Lévy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis of stock market crashes.
• Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.[37]

[Figure: Fitted cumulative normal distribution to October rainfalls; see distribution fitting.]

• In standardized testing, results can be made to have a normal distribution. This is done by either selecting the number and difficulty of questions (as in the IQ test), or by transforming the raw test scores into "output" scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
• Many scores are derived from the normal distribution, including percentile ranks ("percentiles" or "quantiles"), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, a number of behavioral statistical procedures are based on the assumption that scores are normally distributed; for example, t-tests and ANOVAs. Bell curve grading assigns relative grades based on a normal distribution of scores.
• In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem.[38] The blue picture illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.

## Generating values from normal distribution

The bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate standard normal deviates, since an $\mathcal{N}(\mu, \sigma^2)$ variate can be generated as X = μ + σZ, where Z is standard normal. All these algorithms rely on the availability of a random number generator U capable of producing uniform random variates.

• The most straightforward method is based on the probability integral transform property: if U is distributed uniformly on (0,1), then $\Phi^{-1}(U)$ will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function $\Phi^{-1}$, which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura[39] gives a fast algorithm for computing this function to 16 decimal places, which is used by R to compute random variates of the normal distribution.
• An easy-to-program approximate approach that relies on the central limit theorem is as follows: generate 12 uniform U(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).[40]
• The Box–Muller method uses two independent random numbers U and V distributed uniformly on (0,1). Then the two random variables X and Y

$\begin{align} & X = \sqrt{- 2 \ln U} \, \cos(2 \pi V) , \\ & Y = \sqrt{- 2 \ln U} \, \sin(2 \pi V) . \end{align}$

will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (X, Y) the squared norm $X^2 + Y^2$ will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2 ln(U) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable V.
• The Marsaglia polar method is a modification of the Box–Muller algorithm which does not require computation of the functions sin() and cos(). In this method U and V are drawn from the uniform (−1,1) distribution, and then $S = U^2 + V^2$ is computed. If S is greater than or equal to one, then the method starts over; otherwise, the two quantities

$X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}}$

are returned. Again, X and Y will be independent and standard normally distributed.
• The Ratio method[41] is a rejection method. The algorithm proceeds as follows:
  • Generate two independent uniform deviates U and V;
  • Compute $X = \sqrt{8/e}\,(V - 0.5)/U$;
  • Optional: if $X^2 \leq 5 - 4e^{1/4}U$ then accept X and terminate the algorithm;
  • Optional: if $X^2 \geq 4e^{-1.35}/U + 1.4$ then reject X and start over from step 1;
  • If $X^2 \leq -4 \ln U$ then accept X; otherwise start over the algorithm.
• The ziggurat algorithm[42] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in the roughly 3% of cases where the combination of those two falls outside the "core of the ziggurat" does a kind of rejection sampling using logarithms, exponentials and more uniform random numbers have to be employed.
• There is also some investigation[43] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
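As a concrete sketch of two of the methods above, here is a minimal Python rendering of the Box–Muller transform and the Marsaglia polar method (our own illustration; in practice one would reach for a library generator such as numpy's `standard_normal`):

```python
import math
import random

def box_muller(u, v):
    """Map two independent U(0,1) variates to two independent N(0,1) variates."""
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

def marsaglia_polar():
    """Polar variant of Box-Muller: avoids sin/cos at the cost of ~21% rejections."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:                       # keep only points inside the unit disk
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

# 1.0 - random.random() lies in (0, 1], so log() never sees zero
x, y = box_muller(1.0 - random.random(), random.random())
```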
## Numerical approximations for the normal CDF

The standard normal CDF is widely used in scientific and statistical computing. The values Φ(x) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.

• Zelen & Severo (1964) give the approximation for Φ(x) for x > 0 with the absolute error $|\varepsilon(x)| < 7.5\cdot10^{-8}$ (algorithm 26.2.17):

$\Phi(x) = 1 - \phi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 + b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x},$

where ϕ(x) is the standard normal PDF, and b0 = 0.2316419, b1 = 0.319381530, b2 = −0.356563782, b3 = 1.781477937, b4 = −1.821255978, b5 = 1.330274429.
• Hart (1968) lists almost a hundred rational function approximations for the erfc() function. His algorithms vary in degree of complexity and resulting precision, with a maximum absolute precision of 24 digits. An algorithm by West (2009) combines Hart's algorithm 5666 with a continued fraction approximation in the tail to provide a fast computation algorithm with 16-digit precision.
• Cody (1969), after noting that the Hart (1968) solution is not suited for erf, gives a solution for both erf and erfc, with a maximal relative error bound, via rational Chebyshev approximation.
• Marsaglia (2004) suggested a simple algorithm[nb 1] based on the Taylor series expansion

$\Phi(x) = \frac12 + \phi(x)\left( x + \frac{x^3}{3} + \frac{x^5}{3\cdot5} + \frac{x^7}{3\cdot5\cdot7} + \frac{x^9}{3\cdot5\cdot7\cdot9} + \cdots \right)$

for calculating Φ(x) with arbitrary precision. The drawback of this algorithm is its comparatively slow calculation time (for example, it takes over 300 iterations to calculate the function with 16 digits of precision when x = 10).
• The GNU Scientific Library calculates values of the standard normal CDF using Hart's algorithms and approximations with Chebyshev polynomials.
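The Zelen & Severo approximation above is short enough to transcribe directly. A Python sketch, checked against the exact CDF computed via the error function:

```python
import math

# Coefficients quoted above (Abramowitz & Stegun, algorithm 26.2.17)
B0 = 0.2316419
B = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)

def norm_cdf_zs(x):
    """Zelen & Severo approximation of Phi(x) for x > 0; |error| < 7.5e-8."""
    t = 1.0 / (1.0 + B0 * x)
    poly = sum(b * t ** (k + 1) for k, b in enumerate(B))
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 1.0 - pdf * poly

exact = 0.5 * (1.0 + math.erf(1.5 / math.sqrt(2.0)))   # Phi(1.5) via erf
print(abs(norm_cdf_zs(1.5) - exact))                   # well below 7.5e-8
```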
## History

### Development

Some authors[44][45] attribute the credit for the discovery of the normal distribution to de Moivre, who in 1738[nb 2] published in the second edition of his "The Doctrine of Chances" the study of the coefficients in the binomial expansion of $(a + b)^n$. De Moivre proved that the middle term in this expansion has the approximate magnitude of $2/\sqrt{2\pi n}$, and that "If m or ½n be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval ℓ, has to the middle Term, is $-\frac{2\ell\ell}{n}$."[46] Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.[47]

[Figure: Carl Friedrich Gauss discovered the normal distribution in 1809 as a way to rationalize the method of least squares.]

In 1809 Gauss published his monograph "Theoria motus corporum coelestium in sectionibus conicis solem ambientium" where among other things he introduces several important statistical concepts, such as the method of least squares, the method of maximum likelihood, and the normal distribution. Gauss used M, M′, M′′, … to denote the measurements of some unknown quantity V, and sought the "most probable" estimator: the one that maximizes the probability φ(M−V) · φ(M′−V) · φ(M′′−V) · … of obtaining the observed experimental results. In his notation φΔ is the probability law of the measurement errors of magnitude Δ. Not knowing what the function φ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.[nb 3] Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:[48]

$\varphi\mathit{\Delta} = \frac{h}{\surd\pi}\, e^{-\mathrm{hh}\Delta\Delta},$

where h is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.[49]

[Figure: Marquis de Laplace proved the central limit theorem in 1810, consolidating the importance of the normal distribution in statistics.]

Although Gauss was the first to suggest the normal distribution law, Laplace made significant contributions.[nb 4] It was Laplace who first posed the problem of aggregating several observations in 1774,[50] although his own solution led to the Laplacian distribution. It was Laplace who first calculated the value of the integral $\int e^{-t^2}\,dt = \sqrt{\pi}$ in 1782, providing the normalization constant for the normal distribution.[51] Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.[52]

It is of interest to note that in 1809 the American mathematician Robert Adrain published two derivations of the normal probability law, simultaneously with and independently of Gauss.[53] His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe.[54]

In the middle of the 19th century Maxwell demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:[55] "The number of particles whose velocity, resolved in a certain direction, lies between x and x + dx is $\mathrm{N}\; \frac{1}{\alpha\;\sqrt\pi}\; e^{-\frac{x^2}{\alpha^2}}dx$."

### Naming

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".[56] However, by the end of the 19th century some authors[nb 5] had started using the name normal distribution, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what would, in the long run, occur under certain circumstances."[57] Around the turn of the 20th century Pearson popularized the term normal as a designation for this distribution.[58]

> Many years ago I called the Laplace–Gaussian curve the normal curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

Also, it was Pearson who first wrote the distribution in terms of the standard deviation σ as in modern notation.
Soon after this, in 1915, Fisher added the location parameter to the formula for the normal distribution, expressing it in the way it is written nowadays:

$df = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-m)^2}{2\sigma^2}}dx$

The term "standard normal", which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P.G. Hoel (1947) "Introduction to mathematical statistics" and A.M. Mood (1950) "Introduction to the theory of statistics".[59]

The name "Gaussian distribution" honors Carl Friedrich Gauss, who introduced the distribution in 1809 as a way of rationalizing the method of least squares, as outlined above. Among English speakers, both "normal distribution" and "Gaussian distribution" are in common use, with different terms preferred by different communities.

## See also

• Behrens–Fisher problem – the long-standing problem of testing whether two normal samples with different variances have the same means;
• Bhattacharyya distance – method used to separate mixtures of normal distributions;
• Erdős–Kac theorem – on the occurrence of the normal distribution in number theory;
• Gaussian blur – convolution, which uses the normal distribution as a kernel;
• Sum of normally distributed random variables;
• Normally distributed and uncorrelated does not imply independent;
• Tweedie distributions – the normal distribution is a member of the family of Tweedie exponential dispersion models.

## Notes

1. De Moivre first published his findings in 1733, in a pamphlet "Approximatio ad Summam Terminorum Binomii (a + b)n in Seriem Expansi" that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example Walker (1985).
2. "My custom of terming the curve the Gauss–Laplacian or normal curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." Quote from Pearson (1905, p. 189).
3. Besides those specifically referenced here, such use is encountered in the works of Peirce, Galton (Galton (1889, chapter V)) and Lexis (Lexis (1878), Rohrbasser & Véron (2003)) c. 1875.

## Citations

1. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons. p. 254.
2. Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model". Journal of Econometrics (Elsevier) 150 (2): 219–230. doi:10.1016/j.jeconom.2008.12.014. Retrieved 2011-06-02.
4. Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function". Connexions.
8. Quine, M.P. (1993). "On three characterisations of the normal distribution". Probability and Mathematical Statistics 14 (2): 257–263.
10. Lukacs, Eugene (1942). "A Characterization of the Normal Distribution". The Annals of Mathematical Statistics 13 (1): 91–93. doi:10.1214/aoms/1177731647.
11. Basu, D.; Laha, R. G. (1954). "On Some Characterizations of the Normal Distribution". 13 (4): 359–362.
12. Lehmann, E. L. (1997). Testing Statistical Hypotheses (2nd ed.). Springer. p. 199. ISBN 0-387-94919-4.
14. Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. pp. 592–593.
15. Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data". In Ritzema, Henk P., Drainage Principles and Applications, Publication 16 (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 90-70754-33-9.
16. Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". Applied Statistics (Blackwell Publishing) 37 (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.
17. Peirce, Charles S. (c. 1909 MS), v. 6, paragraph 327.

## References

• In particular, the entries for "bell-shaped and bell curve", "normal (distribution)", "Gaussian", and "Error, law of error, theory of errors, etc.".
• Amari, Shun-ichi; Nagaoka, Hiroshi (2000). Methods of Information Geometry. Oxford University Press. ISBN 0-8218-0531-2.
• Bernardo, José M.; Smith, Adrian F. M. (2000). Bayesian Theory. Wiley. ISBN 0-471-49464-X.
• Bryc, Wlodzimierz (1995). The Normal Distribution: Characterizations with Applications. Springer-Verlag. ISBN 0-387-97990-5.
• Casella, George; Berger, Roger L. (2001). Statistical Inference (2nd ed.). Duxbury. ISBN 0-534-24312-6.
• Cody, William J. (1969). "Rational Chebyshev Approximations for the Error Function". Mathematics of Computation 23 (107): 631–638. doi:10.1090/S0025-5718-1969-0247736-4.
• Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. John Wiley and Sons.
• de Moivre, Abraham (1738). The Doctrine of Chances. ISBN 0-8218-2103-2.
• Fan, Jianqing (1991). "On the optimal rates of convergence for nonparametric deconvolution problems". The Annals of Statistics 19 (3): 1257–1272. doi:10.1214/aos/1176348248. JSTOR 2241949.
• Galambos, Janos; Simonelli, Italo (2004). Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions. Marcel Dekker, Inc. ISBN 0-8247-5402-6.
• Galton, Francis (1889). Natural Inheritance. London: Macmillan.
• Gauss, Carolo Friderico (1809). Theoria motvs corporvm coelestivm in sectionibvs conicis Solem ambientivm [Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections] (in Latin). English translation.
• Gould, Stephen Jay (1981). The Mismeasure of Man (first ed.). W. W. Norton. ISBN 0-393-01489-4.
• Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965). "Recommended Standards for Statistical Symbols and Notation. COPSS Committee on Symbols and Notation". The American Statistician 19 (3): 12–14. doi:10.2307/2681417. JSTOR 2681417.
• Hart, John F.; et al. (1968). Computer Approximations. New York, NY: John Wiley & Sons, Inc. ISBN 0-88275-642-7.
• Hazewinkel, Michiel, ed. (2001). "Normal Distribution". Encyclopedia of Mathematics. Springer. ISBN 978-1-55608-010-4.
• Herrnstein, Richard J.; Murray, Charles (1994). The Bell Curve. Free Press. ISBN 0-02-914673-9.
• Huxley, Julian S. (1932). Problems of Relative Growth. London. ISBN 0-486-61114-0. OCLC 476909537.
• Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. ISBN 0-471-58495-9.
• Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1995). Continuous Univariate Distributions, Volume 2. Wiley. ISBN 0-471-58494-0.
• Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation of Random Variables Using the Ratio of Uniform Deviates". ACM Transactions on Mathematical Software 3 (3): 257–260. doi:10.1145/355744.355750.
• Krishnamoorthy, Kalimuthu (2006). Handbook of Statistical Distributions with Applications. Chapman & Hall/CRC. ISBN 1-58488-635-8.
• Kruskal, William H.; Stigler, Stephen M. (1997). "Normative Terminology: 'Normal' in Statistics and Elsewhere". In Spencer, Bruce D., Statistics and Public Policy. Oxford University Press. ISBN 0-19-852341-6.
• Laplace, Pierre-Simon de (1774). "Mémoire sur la probabilité des causes par les événements". Mémoires de l'Académie royale des Sciences de Paris (Savants étrangers), tome 6: 621–656. Translated by Stephen M. Stigler in Statistical Science 1 (3), 1986: JSTOR 2245476.
• Laplace, Pierre-Simon (1812). Théorie analytique des probabilités [Analytical Theory of Probabilities].
• Le Cam, Lucien; Lo Yang, Grace (2000). Asymptotics in Statistics: Some Basic Concepts (second ed.). Springer. ISBN 0-387-95036-2.
• Lexis, Wilhelm (1878). "Sur la durée normale de la vie humaine et sur la théorie de la stabilité des rapports statistiques". Annales de démographie internationale (Paris) II: 447–462.
• Lukacs, Eugene; King, Edgar P. (1954). "A Property of Normal Distribution". The Annals of Mathematical Statistics 25 (2): 389–394. doi:10.1214/aoms/1177728796. JSTOR 2236741.
• McPherson, Glen (1990). Statistics in Scientific Investigation: Its Basis, Application and Interpretation. Springer-Verlag. ISBN 0-387-97137-8.
• Marsaglia, George; Tsang, Wai Wan (2000). "The Ziggurat Method for Generating Random Variables". Journal of Statistical Software 5 (8).
• Wallace, C. S. (1996). "Fast pseudo-random generators for normal and exponential variates". ACM Transactions on Mathematical Software 22 (1): 119–127. doi:10.1145/225545.225554.
• Marsaglia, George (2004). "Evaluating the Normal Distribution". Journal of Statistical Software 11 (4).
• Maxwell, James Clerk (1860). "V. Illustrations of the dynamical theory of gases. — Part I: On the motions and collisions of perfectly elastic spheres". Philosophical Magazine, series 4 19 (124): 19–32. doi:10.1080/14786446008642818.
• Patel, Jagdish K.; Read, Campbell B. (1996). Handbook of the Normal Distribution (2nd ed.). CRC Press. ISBN 0-8247-9342-0.
• Pearson, Karl (1905). "'Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder". Biometrika 4 (1): 169–212. JSTOR 2331536.
• Pearson, Karl (1920). "Notes on the History of Correlation". Biometrika 13 (1): 25–45. doi:10.1093/biomet/13.1.25. JSTOR 2331722.
• Rohrbasser, Jean-Marc; Véron, Jacques (2003). "Wilhelm Lexis: The Normal Length of Life as an Expression of the "Nature of Things"". Population 58 (3): 303–322.
• Stigler, Stephen M. (1978). "Mathematical Statistics in the Early States". The Annals of Statistics 6 (2): 239–265. doi:10.1214/aos/1176344123. JSTOR 2958876.
• Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". The American Statistician 36 (2): 137–138. doi:10.2307/2684031. JSTOR 2684031.
• Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press. ISBN 0-674-40340-1.
• Stigler, Stephen M. (1999). Statistics on the Table. Harvard University Press. ISBN 0-674-83601-4.
• Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability". In Smith, David Eugene, A Source Book in Mathematics. Dover. ISBN 0-486-64690-4.
• West, Graeme (2009). "Better Approximations to Cumulative Normal Functions". Wilmott Magazine: 70–76.
• Zelen, Marvin; Severo, Norman C. (1964). "Probability Functions (chapter 26)" in Handbook of Mathematical Functions, by Abramowitz, M. and Stegun, I. A. National Bureau of Standards. New York, NY: Dover. ISBN 0-486-61272-4.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 140, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.865045964717865, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/3691/musiela-parameterization?answertab=oldest
# Musiela parameterization

I have a question regarding the proof of the Musiela parametrization for the dynamics of the forward rate curve. If $T$ is the maturity, $\tau=T-t$ is the time to maturity, and $dF(t,T)$ defines the dynamics of the forward rate curve, then the Musiela parametrization defines the forward rate dynamics $$d\bar{F}(t,\tau)=dF(t,t+\tau).$$ My question is regarding the next step in the working of the Musiela parametrization. All of the literature I've looked at explains the next line by simply stating that a "slight variation" of Ito is applied. The line reads: $$d\bar{F}(t,\tau)=dF(t,T)+\frac{\partial F}{\partial T}dt$$ Can someone please clarify what variation of Ito is being used here? I'm not following. The parameters to $d\bar{F}$ do not include an Ito drift/diffusion process, so why is Ito being used? -

## 1 Answer

$dF(t,T)$ describes the dynamics of the rate of a particular forward contract as time $t$ moves forward to a fixed expiration $T$. $d\bar F(t,\tau)$ describes the dynamics of the rate at a particular point on the yield curve as time moves forward. The differential $\frac{\partial F}{\partial T}dt$ is simply the difference between holding the expiration time $T$ constant in the case of $F$ and moving it ahead with time $t$ to stay at the same point $t+\tau$ on the yield curve in the case of $\bar F$. Somewhere underlying all this is a drift-diffusion process, but it isn't stated explicitly in your equations. $dF(t,t+\tau)$ is a "total" differential of $F$ with respect to a simultaneous change in both its arguments. This becomes the sum of a partial differential w.r.t. change in the first argument only, $dF(t,T)$, and a partial differential w.r.t. change in the second argument only, $\frac{\partial F}{\partial T}dt$, as time moves forward. -

Thanks for the response. I think I follow your explanation, but I'm looking for more of a mathematical derivation of the first and second terms for $d\bar{F}(t;\tau)$. Mathematically, how would you explain going from the first equation I've listed in the original post to the second? Thanks for your time. – qfin_newguy Jul 1 '12 at 16:24

I edited my answer to explain a little more. – JL344 Jul 1 '12 at 17:16

Thanks for the revision. That's along the lines of what I was thinking, but I think I'm mis-interpreting the notation somehow. Here's how I understand breaking up the total differential into the sum of partials: $$dF(t,T)=\frac{\partial F}{\partial t} dt + \frac{\partial F}{\partial T} dT$$ – qfin_newguy Jul 1 '12 at 18:38

T is a function of t, $T=t+\tau$, so applying the chain rule on the second term yields: $$dF(t,T)=\frac{\partial F}{\partial t} dt + \frac{\partial F}{\partial T} \frac{\partial T}{\partial t} dT$$ with $\frac{\partial T}{\partial t}=1$ – qfin_newguy Jul 1 '12 at 18:49

As I wrote that last comment, I realized that I don't think I can write $dT$ as a differential since T is a function. So that explains the second term. For the first term though, your answer is implying that $\frac{\partial F}{\partial t}dt = dF(t,T)$. If that's correct, that's where I'm getting lost. Thanks again for your help with this. – qfin_newguy Jul 1 '12 at 18:53
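To spell out the "slight variation" of Ito being invoked, here is a sketch of the computation, under the standard (but here assumed) HJM-type dynamics $dF(t,T)=\alpha(t,T)\,dt+\sigma(t,T)\,dW_t$ for each fixed $T$:

```latex
% Sketch under assumed dynamics dF(t,T) = alpha(t,T) dt + sigma(t,T) dW_t (T fixed).
% Split the increment of \bar F(t,\tau) = F(t, t+\tau), for fixed \tau, into a
% fixed-maturity move and a slide along the curve:
\begin{align*}
\bar F(t+dt,\tau) - \bar F(t,\tau)
  &= \bigl[F(t+dt,\,t+\tau) - F(t,\,t+\tau)\bigr]
   + \bigl[F(t+dt,\,t+dt+\tau) - F(t+dt,\,t+\tau)\bigr] \\
  &= \underbrace{\alpha(t,T)\,dt + \sigma(t,T)\,dW_t}_{dF(t,T),\;T=t+\tau}
   + \underbrace{\frac{\partial F}{\partial T}(t,T)\,dt + o(dt)}_{\text{deterministic shift of }T}.
\end{align*}
% No second-order Ito term arises from the second bracket because the shift in T
% is deterministic and of order dt, so all cross terms are o(dt).
```

The second bracket contributes no extra Ito correction because the maturity argument moves deterministically; this is the sense in which only a "slight variation" of Ito is needed.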
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453312754631042, "perplexity_flag": "head"}
http://blog.informationgeometry.org/article.php?id=187
# Computational Information Geometry Wonderland

## Aug 08, 2011

### The Burbea-Rao and Bhattacharyya Centroids

Post @ 15:15:36 | Burbea-Rao

We study the centroid with respect to the class of information-theoretic Burbea-Rao divergences that generalize the celebrated Jensen-Shannon divergence by measuring the non-negative Jensen difference induced by a strictly convex and differentiable function. Although those Burbea-Rao divergences are symmetric by construction, they are not metric since they fail to satisfy the triangle inequality. We first explain how a particular symmetrization of Bregman divergences called Jensen-Bregman distances yields exactly those Burbea-Rao divergences. We then proceed by defining skew Burbea-Rao divergences, and show that skew Burbea-Rao divergences amount in limit cases to computing Bregman divergences. We then prove that Burbea-Rao centroids can be arbitrarily finely approximated by a generic iterative concave-convex optimization algorithm with guaranteed convergence property. In the second part of the paper, we consider the Bhattacharyya distance that is commonly used to measure the overlapping degree of probability distributions. We show that Bhattacharyya distances on members of the same statistical exponential family amount to calculating a Burbea-Rao divergence in disguise. Thus we get an efficient algorithm for computing the Bhattacharyya centroid of a set of parametric distributions belonging to the same exponential families, improving over former specialized methods found in the literature that were limited to univariate or "diagonal" multivariate Gaussians. To illustrate the performance of our Bhattacharyya/Burbea-Rao centroid algorithm, we present experimental performance results for $k$-means and hierarchical clustering methods of Gaussian mixture models.

paper
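The Jensen difference in the abstract is concrete enough to sketch: for a strictly convex $F$, the (symmetric) Burbea-Rao divergence is $\mathrm{BR}_F(p,q)=\tfrac{1}{2}\bigl(F(p)+F(q)\bigr)-F\bigl(\tfrac{p+q}{2}\bigr)$, and taking $F$ to be the negative Shannon entropy recovers the Jensen-Shannon divergence. A minimal Python illustration (our own, not code from the paper):

```python
import numpy as np

def neg_entropy(p):
    """F(p) = sum_i p_i log p_i, the negative Shannon entropy (strictly convex)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 log 0 is taken to be 0
    return float(np.sum(p * np.log(p)))

def burbea_rao(p, q, F=neg_entropy):
    """Jensen difference (F(p) + F(q))/2 - F((p + q)/2); non-negative by convexity."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * (F(p) + F(q)) - F(0.5 * (p + q))

# With F = negative Shannon entropy this is the Jensen-Shannon divergence (in nats)
print(burbea_rao([0.5, 0.5], [0.9, 0.1]))
```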
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8367025852203369, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/harmonic-analysis?page=4&sort=active&pagesize=15
Tagged Questions Harmonic analysis is the generalisation of Fourier analysis. Use this tag for analysis on locally compact groups (e.g. Pontryagin duality), eigenvalues of the Laplacian on compact manifolds or graphs, and the abstract study of Fourier transform on Euclidean spaces (singular integrals, ... 3answers 469 views Do discontinuous harmonic functions exist? A function, $u$, on $\mathbb R^n$ is normally said to be harmonic if $\Delta u=0$, where $\Delta$ is the Laplacian operator $\Delta=\sum_{i=1}^n\frac{\partial^2}{\partial x_i^2}$. So obviously, ... 0answers 89 views Curvatures of contours of solutions of 3d Poisson's equation Let $f(x,y,z)$ be a complex function in a 3d euclidian space that fulfill the Poisson's equation \frac{\partial^2}{\partial x^2} f + \frac{\partial^2}{\partial y^2} f + \frac{\partial^2}{\partial ... 3answers 159 views What is a “domain” in the maximum-minimum principle? The maximum-minimum principle says that A harmonic function on a domain cannot attain its maximum or its minimum unless it is constant. Here is my question: If we restrict our attention in ... 1answer 136 views On covering lemma and Calderón–Zygmund decomposition I am working on something which needs to understand covering lemmas and Calderón–Zygmund decomposition. These type of lemmas are as in the following link ... 1answer 498 views Theorem of Steinhaus The Steinhaus theorem says that if a set $A \subset \mathbb R^n$ is of positive inner Lebesgue measure then $\operatorname{int}{(A+A)} \neq \emptyset$. Is it true that also ... 1answer 373 views Reference request: Fourier and Fourier-Stieltjes algebras I would like to learn the basic theory of Fourier algebras and Fourier-Stieltjes algebras. In particular, I want to know how these two objects are defined in the case of not necessarily abelian ... 0answers 226 views Motivation for abstract harmonic analysis I am reading Folland's A Course in Abstract Harmonic Analysis and find this book extremely exciting. However it seems Folland does not give many examples to illustrate the motivation behind much of ... 1answer 128 views Surjective endomorphism preserves Haar measure How to prove the following statement: Let $G$ be a compact topological group and let $m$ be the Haar measure on it. Let $\varphi$ be a continuous endomorphism of $G$ onto $G$, i.e., the map $\varphi$ ... 1answer 124 views On a duality Fefferman-Stein's inequality Let $M$ be the Hardy-Littlewood maximal operator. In the book "Weighted norm inequalities and Related Topics" by Rubio de Francia and J. Cuerva, page 150, theorem 2.1.2 states as the following: *For ... 1answer 197 views Applications of Young's convolution inequality Recall that the convolution of two functions is given by $$f*g(y)=\int f(x)g(y-x)dx.$$ The well known inequality known as Young's inequality, say that $$\|f*g\|_r\leq\|f\|_p\cdot\|g\|_q$$ provided ... 0answers 324 views Spherical harmonics give all the irreducible representations of $SO(3)$? It is mentioned in Wiki that the spaces $\mathcal{H}_{k}$ of spherical harmonics of degree $k$ give ALL the irreducible representations of $SO(3)$. Could anyone tell me where can I find the proof? ... 1answer 242 views Stone-Weierstrass implies Fourier expansion To prove the existence of Fourier expansion, I have to solve the following exercise, which supposedly follows from the Stone-Weierstrass theorem: Let $G$ be a compact abelian topological group ... 
1answer 327 views Properties of Haar measure Let $G$ be a locally compact group (but not discrete) and let $m$ be its left Haar measure. Is it true that $\forall \epsilon$ $\exists$ $C$ such that $C$ is a compact neighborhood of the identity and ... 1answer 220 views Left regular representation of $L^1(G)$ for a locally compact group $G$ Let $G$ be a locally compact group (not discrete) and let $L$ be the left regular representation of $A = L^1(G)$ on itself i.e. $L: A \to \mathcal{B}(A)$ where $L(f): A \to A$, $L(f)(g) = f*g$. I want ... 0answers 76 views Preduals of Banach spaces and in particular of $\text{BMO}(\mathbf R^d)$ In general the predual of a Banach space is not unique. If there are multiple ones must they be isomorphic? More specifically is $H^1(\mathbf R^d)$ the only predual of $\text{BMO}(\mathbf R^d)$ or ... 3answers 352 views Rate of divergence for the series $\sum |\sin(n\theta) / n|$ In the following we consider the series $$S(N;\theta)= \sum_{n = 1}^{N} \left| \frac{\sin n\theta}{n} \right|$$ parametrized by $\theta$. It is well known that this series (taking the limit ... 0answers 178 views Proving the maximum principle for harmonic real valued functions in $\mathbb{R}^n$ Assuming the principle is stated as such: Let $U\subset\mathbb{R}^n$ be a bounded domain and $u$ harmonic in $U$ such that $\sup_{x\in U}u(x)\leq A$ for some $A\in\mathbb{R}$. Then either \$\forall ... 1answer 76 views Restriction and completion of Haar measure on $\mathbf{R} \times \mathbf{R_d}$ to Borel $\sigma$-algebra Let's consider the measure space $(G, \mathfrak{M}, \mu)$, where $\mu$ is the Haar measure on topological group $G:=\mathbf{R} \times \mathbf{R_d}$, ($\mathbf{R}$ is the group of reals with the ... 1answer 160 views Example of a locally compact connected Abelian group with non-$\sigma$-finite measure I look for an example of an Abelian locally compact topological group $G$ such that: $G$ is connected and Haar measure on $G$ is not $\sigma$-finite and $\{0\} \times G \subset \mathbf{R} \times G$ ... 0answers 283 views Positive definite function zoo A positive definite function $\varphi: G \rightarrow \mathbb{C}$ on a group $G$ is a function that arises as a coefficient of a unitary representation of $G$. For a definition and discussion of ... 2answers 366 views Bounded linear operators that commute with translation I'm trying to read Elias Stein's "Singular Integrals" book, and in the beginning of the second chapter, he states two results classifying bounded linear operators that commute (on $L^1$ and $L^2$ ... 0answers 123 views Steinhaus theorem in topological groups Let $(G,\cdot)$ be a locally compact Abelian topological group with Haar measure. It is known that if $B$ is a measurable subset of $G$ of finite and positive Haar measure then \$int(B \cdot B)\neq ... 1answer 180 views Asymptotic error of Fourier series partial sum of sawtooth function In Iwaniec's book, Topics in Classical Automorphic Forms, pg. 4, he gives the statement: $$\{x\}=\frac{1}{2}-\sum_{n=1}^N\frac{\sin 2\pi nx}{\pi n}+O((1+||x||N)^{-1})$$ where $\{x\}$ denotes the ... 1answer 200 views Transitive group actions and homogeneous spaces Given a topological group $G$ and a space $X$ with a transitive $G$ action, let $G_x$ be the isotropy group of a point. In Folland "A course in harmonic analysis", there is a statement that $X$ is ... 1answer 554 views Lyapunov's Inequality for Weak-Lp Spaces Let $(X,\mu)$ be a measure space. 
http://unapologetic.wordpress.com/2011/03/31/coordinate-vectors-span-tangent-spaces/?like=1&source=post_flair&_wpnonce=ae4d6bb289
# The Unapologetic Mathematician

## Coordinate Vectors Span Tangent Spaces

Given a point $p$ in an $n$-dimensional manifold $M$, we have the vector space $\mathcal{T}_pM$ of tangent vectors at $p$. Given a coordinate patch $(U,x)$ around $p$, we’ve constructed $n$ coordinate vectors at $p$, and shown that they’re linearly independent in $\mathcal{T}_pM$. I say that they also span the space, and thus constitute a basis. To see this, we’ll need a couple lemmas.

First off, if $f$ is constant in a neighborhood of $p$, then $v(f)=0$ for any tangent vector $v$. Indeed, since all that matters is the germ of $f$, we may as well assume that $f$ is the constant function with value $c$. By linearity we know that $v(c)=cv(1)$. But now since $1=1\cdot1$ we use the derivation property to find

$\displaystyle v(1)=v(1\cdot1)=v(1)1(p)+1(p)v(1)=2v(1)$

and so we conclude that $v(1)=v(c)=0$.

In a slightly more technical vein, let $U$ be a “star-shaped” neighborhood of $0\in\mathbb{R}^n$. That is, not only does $U$ contain $0$ itself, but for every point $x\in U$ it contains the whole segment of points $tx$ for $0\leq t\leq1$. An open ball, for example, is star-shaped, so you can just think of that to be a little simpler. Anyway, given such a $U$ and a differentiable function $f$ on it we can find $n$ functions $\psi_i$ with $\psi_i(0)=D_if(0)$, and such that we can write

$\displaystyle f=f(0)+\sum\limits_{i=1}^nu^i\psi_i$

where $u^i$ is the $i$th component function. If we pick a point $x\in U$ we can parameterize the segment $c(t)=tx$, and set $\phi=f\circ c$ to get a function on the unit interval $\phi:[0,1]\to\mathbb{R}$. This function is clearly differentiable, and we can calculate

$\displaystyle\phi'(t)=\sum\limits_{i=1}^n\frac{\partial f}{\partial x^i}\frac{dc^i}{dt}=\sum\limits_{i=1}^nD_if(tx)x^i$

using the multivariable chain rule. We find

$\displaystyle\begin{aligned}f(x)-f(0)&=\phi(1)-\phi(0)\\&=\int\limits_0^1\phi'(t)\,dt\\&=\sum\limits_{i=1}^nx^i\int\limits_0^1D_if(tx)\,dt\end{aligned}$

We can thus find the desired functions $\psi_i$ by setting

$\displaystyle\psi_i(x)=\int\limits_0^1D_if(tx)\,dt$

Now if we have a differentiable function $f$ defined on a neighborhood $U$ of a point $p\in M$, we can find a coordinate patch $(U,x)$ — possibly by shrinking $U$ — with $x(p)=0$ and $x(U)$ star-shaped. Then we can apply the previous lemma to $f\circ x^{-1}$ to get

$\displaystyle f\circ x^{-1}=f(p)+\sum\limits_{i=1}^nu^i\psi_i$

with $\psi_i(0)=\left[\frac{\partial}{\partial x^i}(p)\right](f)$. Moving the coordinate map to the other side we find

$\displaystyle f=f(p)+\sum\limits_{i=1}^nx^i\left(\psi_i\circ x\right)$

Now we can hit this with a tangent vector $v$

$\displaystyle\begin{aligned}v(f)&=v\left(f(p)+\sum\limits_{i=1}^nx^i\left(\psi_i\circ x\right)\right)\\&=v\left(f(p)\right)+\sum\limits_{i=1}^nv\left(x^i\left(\psi_i\circ x\right)\right)\\&=\sum\limits_{i=1}^n\left(v(x^i)\left[\psi_i\circ x\right](p)+x^i(p)v\left(\psi_i\circ x\right)\right)\\&=\sum\limits_{i=1}^nv(x^i)\psi_i(0)\\&=\sum\limits_{i=1}^nv(x^i)\left[\frac{\partial}{\partial x^i}(p)\right](f)\end{aligned}$

where we have used linearity, the derivation property, and the first lemma above. Thus we can write

$\displaystyle v=\sum\limits_{i=1}^nv(x^i)\frac{\partial}{\partial x^i}(p)$

and the coordinate vectors span the space of tangent vectors at $p$. As a consequence, we conclude that $\mathcal{T}_pM$ always has dimension $n$ — exactly the same dimension as the manifold itself.
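To see the star-shaped lemma in action, here is a small numerical check (my own illustration in Python, not from the post): pick a smooth sample $f$ on $\mathbb{R}^2$, compute the $\psi_i$ by quadrature, and verify both $f(x)=f(0)+\sum_i x^i\psi_i(x)$ and $\psi_i(0)=D_if(0)$.

```python
# Numerical check of the lemma f(x) = f(0) + sum_i x^i psi_i(x),
# with psi_i(x) = \int_0^1 D_i f(t x) dt, near 0 in R^2.
# The function f is an arbitrary smooth example chosen for illustration.
import numpy as np
from scipy.integrate import quad

def f(x, y):
    return np.sin(x) * np.exp(y)

def grad_f(x, y):
    # analytic partial derivatives D_1 f and D_2 f
    return (np.cos(x) * np.exp(y), np.sin(x) * np.exp(y))

def psi(i, x):
    # psi_i(x) = \int_0^1 D_i f(t x) dt, evaluated by quadrature
    value, _ = quad(lambda t: grad_f(t * x[0], t * x[1])[i], 0.0, 1.0)
    return value

x = (0.3, -0.7)
lhs = f(*x)
rhs = f(0.0, 0.0) + sum(x[i] * psi(i, x) for i in range(2))
print(lhs, rhs)                                  # agree to quadrature precision
print(psi(0, (0.0, 0.0)), grad_f(0.0, 0.0)[0])   # psi_i(0) = D_i f(0)
```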
And this is exactly what we should expect; if $M$ is $n$-dimensional, then in some sense there are $n$ independent directions to move in near any point $p$, and these “directions to move” are the core of our geometric notion of a tangent vector. Ironically, if we start from a more geometric definition of tangent vectors, it’s actually somewhat harder to establish this fact, which is partly why we’re starting with the more algebraic definition.
http://physics.stackexchange.com/questions/46062/what-is-background-independence-and-how-important-is-it/46072
# What is background independence and how important is it?

1. What is background independence and how important is it?
2. In order to be a theory of everything, will the final string-theory/M-theory have to be background independent?
3. Does the current lack of background independence show string theory is currently NOT a theory of everything?

My understanding from Wikipedia is that AdS/CFT shows hopeful hints. Are there any recent papers that have made progress in this direction? I've tried Google but haven't been able to get a definitive answer to this question. I found this interesting post by Lubos Motl, but it is from 2005.

- I actually think that the Wikipedia summary isn't bad. – Luboš Motl Dec 6 '12 at 7:24
- Oops, well, I started that article, too, so this could have looked like (partial) self-boasting as well, but I didn't realize that when I wrote the previous comment. – Luboš Motl Dec 6 '12 at 8:07
- Regarding question one "What is background independence [...]?", I think it would be helpful to have counterexamples for what is not background independent. Some simple equations and why they are not. – Nick Kidman Dec 6 '12 at 10:12

## 2 Answers

1. Background independence is generally the independence of the equations defining a theory from the allowed values of its degrees of freedom, especially values of spacetime fields, especially the metric tensor. However, this concept has various levels that are inequivalent, and the differences are often important to answer questions about the "necessity" of background independence; see below.

2. We don't know. The [manifest, see below] background independence is an aesthetic expectation, one could say a prejudice, that we cannot prove in any scientific way, so the progress in science may show that it has been a good guide or that it was a misleading, excessive constraint. For several centuries, we have known that science can't systematically make progress by imposing arbitrary philosophical dogmas and stubbornly defending them. Science often finds out that some philosophical expectations, however "beautiful" or "convincing", have been invalid. Expectations about "background independence" aren't an exception. Again, it is unknown whether the final "best" form of a theory of everything (if there exists "one best form" at all, which is another albeit related "if") will be [manifestly] background-independent.

3. No, there's no known way to show that the lack of background independence already implies that a theory isn't a complete theory of all interactions and types of matter. Some necessary conditions for consistency may be understood in the future but at this moment, it's a speculation whether they exist.

Now, the subtleties. You implicitly wrote that string theory is background-dependent. This is a very delicate question. Some formulations (particular sets of equations used to define the theory, at least for a subclass of situations) such as AdS/CFT or Matrix theory are background-dependent. For example, AdS/CFT is formulated as a theory with a preferred background, the empty space $AdS_d\times M$, and all other states are built "on top of that". Similarly, Matrix theory defines the theory for the flat space times some simple manifold (torus, K3, etc.). There is no way to see "completely" different backgrounds in this picture, and even the equivalence with other nearby shapes of spacetime is far from obvious.
In Matrix theory, one has to construct a new matrix model for a new background (this fact is a part of the light-cone gauge package). However, these are just observations about what the equations "look like". Invariant statements about a theory clearly shouldn't depend on the way the equations "look", on some possibly misleading coating on the surface: they should only depend on the actual mathematical and physical properties of the theory that may be measured. When we are asking questions about the validity or completeness of a theory, we should really be talking not about "background independence seen in the equations" but rather "background independence of the dynamics".

The dynamics of string theory is demonstrably background-independent. This point may be shown in most formulations we know. Perturbative string theory (which requires the string coupling to remain weak and uses the weakness to organize all the physics around "fundamental strings" as the only elementary objects while everything else is a "soliton" or "composite") is a power-law expansion around a predetermined background, but we may easily show that if we define perturbative string theory as an expansion around a different background, we get an equivalent theory. One background may be obtained from another background by adding actual physical excitations (a coherent state of gravitons and moduli) allowed by this "another background". There's only one perturbative superstring theory in this sense – one whose spacetime fields may be divided into "background" and "excitations" in various ways.

But the freedom to divide the fields into "background" and "excitations around it" in many ways isn't a vice in any sense. It is a virtue and, one could say, a necessity, because a preferred background (identified with the vacuum ground state) is needed to describe the Hilbert space in an explicit way, approximately as a Fock space.

There is a related question whether the "space of possible backgrounds" is connected. Much of it is connected by dualities and various transitions: T-dualities, S-dualities, U-dualities, conifold and flop transitions, and various related ones that are more fancy and understood by fewer people. It's much more connected than people would have imagined in the 1980s. When we look at simple and symmetric enough vacua, they really seem to be connected: there's just one component of string/M-theory. On the other hand, the total connectedness isn't a dogma. It's a scientific – and mathematical – question, both of whose answers are conceivable until proved otherwise. The same equations may admit solutions that can't be deformed to one another at all. My ex-adviser Tom Banks is a defender of the viewpoint that sufficiently different backgrounds in string/M-theory should be considered disconnected, although his quantum-gravity-based reasoning isn't quite comprehensible to anyone else.

When we talk about background independence, there is one more technical question, namely whether we want the theory to have the same form for all backgrounds, including those that change the spacetime at infinity, or just for backgrounds that preserve the fields in the asymptotic region. AdS/CFT is background-dependent in one sense because it requires the fields at infinity to converge to the $AdS_d\times M$ geometry with all the fields at their expected values (usually zero). Generally, configurations that change the asymptotic region are "heavily infinite-energy" states that can't really be constructed reliably in the original CFT.
However, if you only consider backgrounds that differ in the "bulk", one could still say that even AdS/CFT (and similarly Matrix theory) is background-independent, although not manifestly so.

Now, the big elephant is "manifest background independence", a form of equations that don't try to show you any preferred background at all and that are as easy (or difficult) to apply to one background as to any other, arbitrarily faraway background. All the backgrounds should emerge as solutions, and they should emerge "with the same ease". This is "manifest background independence". Some people always mean "manifest background independence" when they talk about "background independence": it should be really easy to see that all the backgrounds follow from the same equations, they think. Again, it's an aesthetic expectation that can't be shown "necessary" for anything in physics, not even the "completeness" of a theory as a final TOE.

There are limited successes. For example, Witten's cubic open string field theory (of the Chern-Simons type) may be written in a background-independent way so that the cubic term is the only term in the action that is left. It's elegant, but in reality we always solve the equations so that we find a background-like solution and expand around it, to get back to the quadratic plus cubic (Chern-Simons-like) form of the action. While the purely cubic starting point is elegant, we are not learning too much from the first step: we're just reformulating the consistency conditions for the backgrounds as the fact that they solve some (somewhat formal) equations. String field theory is only good for studying perturbative stringy physics (and for some technical reasons, it's actually fully working only for processes with internal open strings, although all closed string states may be seen as poles in the scattering amplitudes).

Nonperturbatively (at strong coupling), background independence becomes harder because it should make all S-dualities (equivalences between strongly coupled string theory of one type and weakly coupled string theory of another type or the same type) manifest. Despite the overwhelming evidence supporting dualities, there's no known formulation that makes all of them manifest. There's no way to convincingly argue that there's something wrong about this situation. In fact, one could go further. One could say that physicists have accumulated circumstantial evidence that "the formulation making all symmetries and relationships manifest" is a chimera, whether we like the flavor of these results or not. It's quite a typical situation that formulations making some features of the theory manifest make other features of the theory "hard to see", and vice versa. Because it's so typical, it could even be a "law" – a new kind of "complementarity" which goes directly against "background independence" – although we would have to formulate the law rigorously, and no one knows how to do so.

For example, ordinary perturbative string theory in spaces asymptoting the 10-dimensional Minkowski space may be written down using "covariant" equations. That's the word for a description that makes the spacetime Lorentz symmetry manifest. But when we do so, the unitarity – especially the absence of negative-norm "ghost" states in the spectrum – becomes hard to prove. And vice versa. The light-cone gauge formulations make the unitarity manifest, but they obscure the symmetry under some generators of the Lorentz symmetry. It's sort of inevitable.
Also, the covariant approaches (RNS) make the spacetime supersymmetry somewhat hard to prove. This "complementarity" may not be inevitable; Nathan Berkovits' pure spinor formalism, if it works (and I bet it does), makes both the Lorentz symmetry and the supersymmetry manifest. It's also close to a light-cone gauge Green-Schwarz description, so the "unitarity" isn't too hard, either. However, it has an infinite number of world-sheet ghosts (and ghosts for ghosts, and so on, indefinitely), and one could argue that the absence of various problems connected with them is non-manifest.

The landscape of string/M-theory, as we know it today, is rather complicated and has lots of structure. We must sharpen our tools if we want to study some transitions in this landscape, a region of it. The tools needed for distinct questions seem to be inequivalent. A manifestly background-independent formulation of string theory would make all these transitions equally accessible – all the tools would really be "one tool" used in many ways. In some sense, this desired construction would have to unify "all branches of maths" that become relevant for the research of separated questions in various corners of string theory (and believe me, it does look like different corners of string theory force you to learn functions and algebraic and geometric structures that are really different, studied by very different mathematicians etc.). It would be a formulation that stands "well above" this whole landscape "manifold". Such a "one size fits all" formulation is intriguing, but it is in no way guaranteed to exist, and failures of attempts to find it over the years provide us with some evidence (although not a proof) that it doesn't exist.

Instead, many people are imagining that string theory's landscape is a sort of manifold that must inevitably be described by "patches" that are smoothly glued to their neighbors. Each patch requires somewhat different maths. Just like manifolds may be described in terms of an atlas of patches, the same thing could be true for the landscape of string/M-theory. We also have more unified, less fragmented ways to think about manifolds. It's not clear whether the counterpart of these ways is possible for the stringy landscape, and if it is possible, whether the human mind is capable of finding it. So nothing is guaranteed.

The transitions in the landscape and the dualities and duality groups are so mathematically diverse and rich that a formulation that "spits out" all of them as solutions to some universal equations or conditions is an ambitious goal, indeed. It may be impossible to find it.

I also want to mention one simple point about non-stringy theories. Background independence is sometimes used as a "marketing slogan" for some non-stringy proposals, but the slogan is extremely misleading because instead of explaining all the duality groups in the whole landscape, including e.g. the $E_{7(7)}(\mathbb{Z})$ U-duality group of M-theory on a seven-torus (these exceptional Lie groups are rather complicated by themselves, and they should appear as one of the solutions to some conditions among many), these alternative theories rather tell you that no spacetime and no transitions and no interesting dualities exist at all. While their proponents try to convince you that you should like this answer, this answer is obviously wrong because the transitions, dualities, and especially the spacetime itself do exist.
This version of "background-independent theories" should be called "backgrounds-prohibiting theories" or "spacetime-prohibiting theories", and of course, the fact that one can't derive any realistic spacetimes out of them is a reason to immediately abandon them, not to consider them viable competitors of string/M-theory. This version of "background independence" has absolutely nothing to do with the ambitious goal of finding rules that allow us to derive "all dualities and transitions we know in physics (not only the new, purely stringy ones but also the older ones that have been known in physics before string theory)" as solutions. Instead, this marketing type of "background independence" is a sleight of hand to argue that we should forget all the physics and that there's nothing to explain, no dualities, no transitions, no moduli spaces, no spacetime. And when we believe there's nothing out there, no relevant maths etc., a theory of everything becomes equivalent to a theory of nothing, and it's easy to write it down. That's a wrong and intellectually vacuous answer that should be refused, not explained or adopted.

To summarize, background independence is generally an attempt to find as universal, all-encompassing, and elegant formulations of theories, especially string/M-theory, as possible, but it is an emotional expectation, not a solid condition that theories have to obey, and we must actually listen to the evidence if we want to know whether the expectation is right, to what extent it is right, and what new related issues we have to learn even though we had no idea they could matter.

It's also possible that the background-independent equations are actually "conditions of consistency of quantum gravity" (which may be written as some quantitative conditions whose precise form is only partially known): when we try to find all the solutions, we find the whole landscape of string/M-theory. Such a formulation of string/M-theory would be extremely non-constructive, but after all, that's what "background independence" always wanted. Maybe we don't want too much of background independence.

- It is impossible to say whether string theory obeys the condition vaguely described by Einstein a very long time ago or not. Its dynamics surely has all the features and consequences of the diffeomorphism invariance built in, and that's really what Einstein wanted. Most definitions of string theory make the symmetry harder to see than Einstein's equations do – non-manifest. But even if one decided that string theory doesn't fit into a straitjacket defined by Einstein, it has no consequences for its validity because science isn't mindless worshiping of thinkers who peaked a century ago. – Luboš Motl Dec 10 '12 at 10:31

Lubos's answer is correct. But it's worth stressing that there is no agreed-upon definition of 'background independence' in general. The literature is full of definitions that differ from one another in intent, philosophy, and, crucially, mathematical detail. Different theories will use the word in very different ways, and it is often confusing to sort through (especially when you are trying to use the concept as a sieve of theories). In fact, different types of theories won't even agree on what a 'background' is in the first place. For instance, in GR a background is a classical solution of Einstein's equation, given by a metric tensor. Whereas in string theory there is no background that only involves a metric tensor.
Instead, a background is a far more general creature with various moduli and extra fields (in fact an infinite tower of vibrational modes). Further, background independence is often confused with various buzzwords that mean different things in different contexts. You might hear the words 'no prior geometry' or 'lack of absolute structure', and it is also often confused (erroneously, in my opinion) with general covariance and the use of the background field method in field theory.

In some sense the intent is really to separate the things that remain fixed in a theory from those that are left to be dynamic or varied over. Anderson in his GR book from the 1960s started this type of program, and it was generalized by a number of people to quantum gravity in the 80s. I think it's fair to say that this type of idea meets with a number of problems. The first of which is that it is often easy to take something that is fixed and make it look dynamic by various tricks. And then there is the reverse: you can take a theory that is dynamic and write it in a formalism where things are allowed to be fixed. So it's really difficult to actually sort out the essential physical idea, rather than simply take it as an elegant aesthetic criterion.
http://physics.stackexchange.com/questions/32525/laws-of-gravity-for-a-universe-that-only-consists-of-two-objects
# Laws of gravity for a universe that only consists of two objects?

So, we know that when two objects of normal matter move away from each other, the gravitational pull they feel from each other decreases. I wanted to see how that would work. And in my over-simplistic understanding of physics, there could be two mechanisms that would create that phenomenon.

One is, simply, that each body of mass stretches the fabric of space-time in a spherical fashion, relative to its mass, regardless of other bodies. Now, if another body of mass is in the vicinity, it would be pulled towards the first body, and also create the same effect for its own mass, so the first body would feel a pull too. If the second body moves away from the first one, it's moving out of the spherical pull, and it's moving its own pull away, so the two bodies feel less gravitational pull.

The other mechanism would be somewhat different. I should put it this way: there is a certain amount of gravitational force in each body of mass, and all of that force gets "spent" on other bodies that have the force of the same type. If there are only two bodies in the whole universe, their distance wouldn't affect the amount of pull they feel. But when there are other bodies around too (plus all the particles moving around), their distance would affect the gravitational pull that they feel from each other. But only because other "things" in the universe get a bigger "chance" of "catching" the gravitational force of those two bodies. Which means the force of those bodies gets spent on other things around them, and less force remains to create pull between the first two bodies.

I don't know which one of those two explanations is closer to the current consensus on the workings of gravity, but to get an answer, I'd just simply ask: if only two bodies of mass (Earth and Mars) make up the whole universe, and there is nothing around or between them (no photons or neutrinos or anything, hypothetically), would their distance affect how much pull they feel?

If you need more explanation: basic definitions of gravity (like $GmM/r^2$) only consider the mass of the two objects and their distance from each other as the determining factors in how much gravitational pull they feel. Based on those definitions, the answer to the question would be yes, obviously. But are there other theories that would predict differently? If not, let's just assume that the answer to the question is "No, their distance wouldn't affect how much pull they feel." If so, are there any observations that would contradict this answer?

- Well the first problem is that your guesses are not written mathematically. – Chris Gerig Jul 21 '12 at 17:29
- By definition of $GmM/r^2$, the answer is yes. – Chris Gerig Jul 21 '12 at 17:31
- That's one definition. But is that the consensus? Are there other theories that describe gravity in other ways? Perhaps the second way that I described? – Aria Jul 21 '12 at 17:53
- Are you trying to develop your own physics for gravity? And trying to do it without using mathematics? Your descriptions are 1) vague, 2) non-math based, and 3) use the term "gravitational force", and hence it must be defined.
– Chris Gerig Jul 21 '12 at 18:08
- Your update 1) doesn't remedy anything I said, and 2) this basically asks if we have theories contradicting the Einstein/Newton ones, at least for some hypothetical universe -- but this universe must have some axioms, and you seem to say it's just our universe with only two objects, thus a theory contradicting Newton in the fake universe would also contradict Newton in the real one... There are no such theories (and if one were to exist, it would not be credible). – Chris Gerig Jul 21 '12 at 19:04

## 3 Answers

There is such a thing as the Mach principle. In a universe with only two bodies, the bodies will have the same mass $\frac{m_1 m_2}{m_1+m_2}$ (where $m_1$ and $m_2$ are the masses of the respective bodies in a universe with infinite mass) regardless of their composition.

- This is not reasonable--- Mach's principle can only be formulated properly including horizons, and it doesn't work when you have asymptotically flat backgrounds. You can't just have two bodies--- they will emit gravitational waves, which are some sort of bodies too. What does it mean for the bodies to have the reduced mass if there is no other body with which to measure the mass? If there is such a measuring device, then it will be able to mass the objects separately. – Ron Maimon Jul 22 '12 at 7:40

If you have a universe consisting of just Mars and Earth, then Newton's law of gravitation would be an essentially perfect description of the system. The GR description of the system would be marginally more accurate, but since the densities and velocities are low, GR and Newton's law would predict virtually the same behaviour. I can't think of any even marginally plausible theory that would predict otherwise. If you scaled up to two neutron stars, then you'd need GR to describe the loss of energy due to gravitational waves, or to describe a collision, but Newton's law would still be a good long-distance approximation.

I think it's meaningless to ask what the laws of physics would say if there were only two objects, since our universe is composed of quite a large amount of energy and matter!

- Hello Henrychen, these lines would have been good as a comment, but they are worse as an answer! – Ϛѓăʑɏ βµԂԃϔ Sep 29 '12 at 17:57
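Since the thread keeps coming back to $GmM/r^2$, here is a minimal numerical sketch (my own illustration; the constants are standard textbook values) of what the Newtonian answer says for an Earth–Mars pair at a few separations; nothing in the formula refers to any third body:

```python
# Newtonian attraction between Earth and Mars as a function of
# separation -- the force depends only on the two masses and r,
# so in the hypothetical two-body universe it still falls off as 1/r^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24     # kg
m_mars = 6.417e23      # kg

def force(r):
    return G * m_earth * m_mars / r**2

for r in (1e10, 2e10, 4e10):   # separations in metres
    print(f"r = {r:.0e} m  ->  F = {force(r):.3e} N")
# Doubling r cuts the force by a factor of 4, regardless of
# whatever else is (or isn't) in the universe.
```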
http://unapologetic.wordpress.com/2010/09/10/conjugates/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician

## Conjugates

For some more review, let’s recall the idea of conjugation in a group $G$. We say that two elements $g$ and $h$ are “conjugate” if $g=khk^{-1}$ for some $k\in G$. This is an equivalence relation — reflexive, symmetric, and transitive. Any element $g$ is conjugate to itself by the group identity; if $g=khk^{-1}$, then $h=k^{-1}g\left(k^{-1}\right)^{-1}$; if $f=jgj^{-1}$ and $g=khk^{-1}$, then $f=(jk)h(jk)^{-1}$.

Thus the set underlying the group $G$ can be partitioned into “conjugacy classes”: two group elements are in the same conjugacy class if and only if they are conjugate. The conjugacy class containing a group element $g$ is commonly written $K_g$. The different conjugacy classes are pairwise disjoint, and their union is all of $G$.

Now, these days we’re concerned with the symmetric group, and it turns out that some nice things happen in this case. We’ve actually already seen a lot of these! First of all, when we write a permutation in cycle notation it’s easy to see what conjugation does to it. Indeed, if $(a_1\,a_2\,\dots\,a_k)$ is a $k$-cycle and if $g$ is any other permutation, then we can write down the conjugate:

$\displaystyle g(a_1\,\dots\,a_k)g^{-1}=(g(a_1)\,\dots\,g(a_k))$

And the same goes for any other permutation written in cycle notation: the conjugate by $g$ is given by applying $g$ to each symbol in the cycle notation. In particular, any two conjugate permutations have the same cycle type.

In fact, the converse is also true: given any two permutations with the same cycle type, we can find a permutation that sends the one to the other. For example, consider the permutations $(1\,5\,2)(3\,4)$ and $(2\,3\,1)(4\,5)$. Stack them on top of each other:

$\displaystyle\begin{aligned}(1\,5\,2)&(3\,4)\\(2\,3\,1)&(4\,5)\end{aligned}$

turn this into two-line notation

$\displaystyle\left\lvert\begin{matrix}1&5&2&3&4\\2&3&1&4&5\end{matrix}\right\rvert$

and we’ve got a permutation $(1\,2)(3\,4\,5)$ which sends $(1\,5\,2)(3\,4)$ to $(2\,3\,1)(4\,5)$:

$\displaystyle(1\,2)(3\,4\,5)\cdot(1\,5\,2)(3\,4)\cdot(2\,1)(5\,4\,3)=(1\,2\,3)(4\,5)$

which is equivalent to $(2\,3\,1)(4\,5)$. That is, two permutations are conjugate if and only if they have the same cycle type. This is really big. Given a cycle type $\lambda$, we will write the corresponding conjugacy class as $K_\lambda$.

We also know from some general facts about group actions that the number of elements in the conjugacy class $K_g$ is equal to the number of cosets of the “centralizer” $Z_g$. We recall that the centralizer of $g$ is the collection of group elements that commute with $g$. That is:

$\displaystyle Z_g=\left\{h\in G\vert hgh^{-1}=g\right\}$

and we have the equation

$\displaystyle\lvert K_g\rvert=\frac{\lvert G\rvert}{\lvert Z_g\rvert}$

In the case of $S_n$ — and armed with our formula for conjugating permutations in cycle notation — we can use this to count the size of each $K_\lambda$. We know that $\lvert S_n\rvert=n!$, so all we need is to find out how many permutations $h$ leave a permutation $g$ (with cycle type $\lambda$) the same when we conjugate it to get $hgh^{-1}$. We will write this number as $z_\lambda=\lvert Z_g\rvert$, and similarly we will write $k_\lambda=\lvert K_\lambda\rvert=\lvert K_g\rvert$.

So, how can we change the cycle notation of $g$ and get something equivalent back again? Well, for any $k$-cycle in $g$ we can rotate it around in $k$ different ways. For instance, $(1\,2\,3)$ is the same as $(2\,3\,1)$ is the same as $(3\,1\,2)$.
We can also shuffle around any cycles that have the same length: $(1\,2)(3\,4)$ is the same as $(3\,4)(1\,2)$. Thus if $g$ has cycle type $\lambda=(1^{m_1},2^{m_2},\dots,n^{m_n})$, then we can shuffle the $1$-cycles $m_1!$ ways; we can shuffle the $2$-cycles $m_2!$ ways; and so on until we can shuffle the $n$-cycles $m_n!$ ways. Each $1$-cycle can be rotated into $1$ different position for a total of $1^{m_1}$ choices; each $2$-cycle can be rotated into $2$ different positions for a total of $2^{m_2}$ choices; and so on until each $n$-cycle can be rotated into $n$ different positions for a total of $n^{m_n}$ choices.

Therefore we have a total of $1^{m_1}m_1!2^{m_2}m_2!\dots n^{m_n}m_n!$ different ways of writing the same permutation $g$. And each of these ways corresponds to a permutation in $Z_g$. We have thus calculated $\lvert Z_g\rvert=1^{m_1}m_1!2^{m_2}m_2!\dots n^{m_n}m_n!$ and we conclude

$\displaystyle k_\lambda=\lvert K_g\rvert=\frac{\lvert S_n\rvert}{\lvert Z_g\rvert}=\frac{n!}{z_\lambda}=\frac{n!}{1^{m_1}m_1!2^{m_2}m_2!\dots n^{m_n}m_n!}$

As a special case, how many transpositions are there in the group $S_n$? Recall that a transposition is a permutation of the form $(a\,b)$, which has cycle type $(1^{n-2},2^1,3^0,\dots,n^0)$. Our formula tells us that there are

$\displaystyle\frac{n!}{1^{n-2}(n-2)!2^11!3^00!\dots n^00!}=\frac{n!}{(n-2)!2}=\binom{n}{2}$

permutations in this conjugacy class, as we could expect.
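Both the conjugation rule and the class-size formula are easy to sanity-check by brute force. Here is a small Python sketch (my own illustration, not part of the post) that verifies the worked $S_5$ example above and then compares every class size in $S_4$ against $n!/z_\lambda$:

```python
# Brute-force checks: (1) the conjugation example in S_5,
# (2) class sizes in S_4 versus k_lambda = n!/z_lambda.
from itertools import permutations
from math import factorial
from collections import Counter

def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations are tuples mapping index -> image
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def cycle_type(p):
    # multiset of cycle lengths, as a sorted tuple
    seen, lengths = set(), []
    for start in range(len(p)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

# The worked example, written 0-indexed: g = (1 2)(3 4 5), s = (1 5 2)(3 4).
g = (1, 0, 3, 4, 2)
s = (4, 0, 3, 2, 1)
target = (1, 2, 0, 4, 3)                     # (2 3 1)(4 5), 0-indexed
assert compose(compose(g, s), inverse(g)) == target

# Class sizes in S_4 against the formula.
n = 4
counts = Counter(cycle_type(p) for p in permutations(range(n)))
for lam, k in sorted(counts.items()):
    z = 1
    for i, mi in Counter(lam).items():       # m_i copies of each i-cycle
        z *= i**mi * factorial(mi)
    assert k == factorial(n) // z
    print(lam, k)
```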
http://math.stackexchange.com/questions/304259/showing-there-is-no-integer-solution-to-equation-2x-4y3/304261
# Showing there is no integer solution to equation $2^x = 4y+3$

I am stuck on this problem and I'm not sure how to approach it; can anyone help me figure out how to solve it? The question is: Prove that it is impossible to find integers $x, y$ such that $2^x = 4y + 3$. I assumed a proof by cases would be the way to go? Any input? Thanks in advance!

- choloboy - please ask a different question in a new post :-) FYI: one possible solution, in integers, to $4^x = 8y$ is $x = 2, y = 2$, and to show a solution is possible, you need only find one solution! – amWhy Feb 15 at 13:39

## 2 Answers

Proof-by-cases sketch: We consider $x \in \mathbb{Z}$. For all $x \in \mathbb{Z}$:

1. $x > 0$,
2. $x = 0$, or
3. $x < 0$.

$(1)$ For positive integers $x$ ($x > 0$): show that the left-hand side is always even, while the right-hand side is always odd, regardless of the integer value of $y$. (All positive integral powers of $2$ are even, but $4y+3 = 2\cdot 2 y + 2 + 1 = 2(2y+1) + 1$ must be odd, regardless of the value of $y$.)

$(2)$ Then consider the case $x = 0$: $\;2^0 = 1 \neq 4y+3 = 2(2y+1) + 1$, whatever the integer value of $y$.

$(3)$ For negative integers $x$ ($x < 0$): the left-hand side will not be an integer $\left(\text{e.g.,}\;\; 2^{-2} = \dfrac 14\right)$, while the right-hand side will always be an integer, regardless of the value of the integer $y$. Hence the equation has no solution in integers in this case, either.

And hence we conclude there are no integer solutions for $x, y$ satisfying the equation: $$2^x = 4y + 3$$

- That would be considered proof by cases, wouldn't it? :S – choloboy Feb 14 at 20:14
- Yes, it would: consider all scenarios; prove in each case that no solution exists. – amWhy Feb 14 at 20:17
- @theolc Not really. You can write the proof as: $4y+3$ is odd, thus $2^x$ is an odd integer, thus $x=0$, contradiction.... – N. S. Feb 14 at 20:19
- @N.S.: The proof as sketched by amWhy is by cases; what you mean is that it can be reorganized to have a different structure. – Brian M. Scott Feb 14 at 22:40
- The OP edited the question strangely. +1 – Babak S. Feb 15 at 8:24

$2^x$ is even and $4y+3$ is odd...

- What if $x < 0$? Also, 1 is not even. – Arkamis Feb 14 at 20:25
- @Arkamis If $x<0$, $2^x$ is not an integer, and if $x=0$, it is one and the other side cannot be one... – Valtteri Feb 14 at 20:29
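For what it's worth, the claim is also easy to sanity-check by brute force; here is a tiny Python sketch (my addition, not from the answers) confirming that $2^x \bmod 4$ never equals $3$ for non-negative $x$:

```python
# Exhaustive check mirroring the parity argument: 2^x mod 4 is
# 1 for x = 0, 2 for x = 1, and 0 for x >= 2 -- never 3.
# (Negative x doesn't even give an integer, so it needs no check.)
for x in range(64):
    assert pow(2, x, 4) != 3
print("no x in 0..63 with 2**x congruent to 3 (mod 4)")
```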
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8881505131721497, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=586036
Physics Forums ## Mechanical Waves On a String - Speed, Amplitude, and Power 1. The problem statement, all variables and given/known data A string of mass 38.5g and length 5.60m is secured so that it is under tension of 220N. A wave with frequency 178 Hz travels on the string. Find the speed of the wave and the amplitude of the wave if it transmits power of 140 Watts. The given answers are: 179 m/s and 1.35 cm 2. Relevant equations I believe these are the main equations that I can use to solve these problems: speed of wave $v=\sqrt{\frac{Tension}{\mu}},\mu = \frac{Mass}{Length}$ Power (from my notes) $P=\frac{\mu \times v \times \omega^2 \times A^2}{2},\omega = 2\pi f$ Power (from my book, think this is average power) $P=\frac{\sqrt{\mu \times F} \times \omega^2 \times A^2}{2},\omega = 2\pi f$ Are my notes correct? Are these equations for the same amount of power? 3. The attempt at a solution Well, I'm given mass, length and tension of the string, and if my understanding is somewhat correct, the speed at which a wave moves through a medium is dependent only on the properties of the medium itself, and for this string, these properties are the mass, length, and tension $mass(M) = 38.5 g(grams) \longrightarrow .385 kg(kilograms)$ $tension(T) = 220N(\frac{kgm}{s^2})$ $length(L)=5.60m(meters)$ With this information, I can use my formula for velocity of the wave to find the speed of the wave. I'll calculate $\mu$ $\mu = \frac{M}{L} \longrightarrow \mu = \frac{.385 kg}{5.60 m}=0.069\frac{kg}{m}$ Now $v$ $v=\sqrt{\frac{T}{\mu}}\longrightarrow v=\sqrt{\frac{220\frac{kgm}{s^2}}{0.069\frac{kg}{m}}}=\boxed{56.5\frac{ m}{s}}$ Since I now have the velocity $v$, I should use the equation for power $P$, where $P = 140 watts(\frac{kgm^2}{s^3})$ and solve for A $P=\frac{\mu \times v \times \omega^2 \times A^2}{2},\omega = 2\pi f$ but before I can use this, I need $\omega$, which is $\omega = 2\pi \times frequency(f)$ $\omega = 2\pi(rads) \times 178 \frac{1}{s}=1118.41\frac{1}{s}$ now that I have $\omega$, I can solve for A $140\frac{kgm^2}{s^3}=\frac{(0.069\frac{kg}{m}) \times (56.5\frac{m}{s}) \times(1118.41\frac{1}{s})^2 \times A^2}{2}$ $\frac{280\frac{kgm^2}{s^3}}{(0.069\frac{kg}{m}) \times (56.5\frac{m}{s}) \times(1118.41\frac{1}{s})^2}=A^2$ $A^2=5.74\times 10^{-5}m$ or $0.0000574 m \longrightarrow \boxed{A=.0076m \rightarrow .76cm}$ But as you can see, these answers do not correspond with those that are given. There are plenty of concepts that I'm struggling with here in physics, so I want to make sure I'm not straying too far off the path here. Any help would be greatly appreciated. Found my problem.... $.385kg$ This mass should actually be $.0385kg$ Looks like I simply messed up on a conversion.
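With the corrected mass, the numbers do match the given answers; here is a quick numerical check (a Python sketch I added, not part of the original thread):

```python
from math import pi, sqrt

m, L, T = 0.0385, 5.60, 220.0        # kg (38.5 g converted correctly), m, N
f, P = 178.0, 140.0                  # Hz, W

mu = m / L                           # linear mass density, kg/m
v = sqrt(T / mu)                     # wave speed from v = sqrt(T/mu)
omega = 2.0 * pi * f                 # angular frequency, rad/s
A = sqrt(2.0 * P / (mu * v * omega ** 2))   # solve P = mu*v*omega^2*A^2/2 for A
print(f"v = {v:.1f} m/s, A = {A * 100:.2f} cm")   # v = 178.9 m/s, A = 1.35 cm
```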
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9030839800834656, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/5133/how-to-present-overlap-of-related-sets-closed
## How to present overlap of related sets [closed] I have extracted URL links from a number of webpages and many of the webpages contain the same set of links (or subsets) as other webpages. I have ~1000 webpages and ~10 links per webpage. What is an efficient way to find the minimum set of webpages that will still cover the complete set of links? - closed as off topic - how come? The application is non-math but the principles certainly are. – Richard Nov 12 2009 at 3:08 2 Sorry, as asked it looks like a scientific visualisation or graphics design problem. If there's something mathematical you'd like to ask about, edit and I'll reopen. – Scott Morrison♦ Nov 12 2009 at 5:02 1 fair enough. Have rephrased it now - is that sufficiently 'mathy'? – Richard Nov 13 2009 at 0:03 the question is closed, so I can't answer, but this sounds like classic set cover. the atoms are the links, and the webpages are the sets. – Suresh Venkat Jul 7 2010 at 9:03 ## 2 Answers Let me try a preliminary answer to see if I've understood what you're after. You have, let's say, 1000 web pages, each containing some outbound links. You might define a 1000x1000 matrix D by $$D_{ij} = \mbox{number of links in common between the }i\mbox{th and }j\mbox{th pages}.$$ Then you want to extract from $D$ a single number, the 'level of overlap', which broadly speaking will be large if most entries of $D$ are large, and small if most entries of $D$ are small. I can think of some ways of doing that, but before I go into it, let me check: is that the kind of thing you want? - Yes that is what I am after. But I am not sure how to present it. As Martin said a Venn diagram is out of the question! – Richard Nov 12 2009 at 3:12 One approach is to represent your data as a graph, and use a network diagram tool to draw it. Variant 1: nodes are websites, weighted edges represent number of common links. Variant 2: a bipartite graph with nodes being either web sites or links, and unweighted edges indicating "web site A contains link B." The magic google phrase to find examples related to your example is "citation visualization". The problem with this technique is that big graphs sometimes turn into hairballs in a graph layout tool, so you may need to prune or filter your data set for it to be intelligible. There are other techniques: you could draw a grid where rows correspond to web pages, and columns to links they contain, and fill in cells (X,Y) where link X appears in web site Y. If you reorder the rows and columns to put related pages near each other and related links near each other this can be an effective analytic tool, but it might not be right for non-technical readers. A lot of this depends on the details of your data. If you have any kind of metadata (a categorization of web pages or links) that could suggest other approaches--feel free to add details in your question. About the only thing you can say for sure is a big Venn diagram is going to be a mess! -
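As Suresh Venkat's comment notes, the reformulated question is exactly minimum set cover, which is NP-hard in general; the standard greedy heuristic repeatedly picks the page covering the most still-uncovered links and comes within a logarithmic factor of the optimum. Here is a minimal sketch (Python; the toy data and function name are mine, not from the thread):

```python
def greedy_set_cover(pages):
    """pages: dict mapping page -> set of links. Returns an (approximately
    minimal) subset of pages whose links cover the union of all links."""
    uncovered = set().union(*pages.values())
    chosen = []
    while uncovered:
        # pick the page covering the most links that are still uncovered
        best = max(pages, key=lambda p: len(pages[p] & uncovered))
        gained = pages[best] & uncovered
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

pages = {"p1": {"a", "b"}, "p2": {"b", "c"}, "p3": {"a", "b", "c"}}
print(greedy_set_cover(pages))   # ['p3'] already covers everything
```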
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435465931892395, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64781?sort=votes
## Rolling-ball game The analyses in two recent MO questions, "Rolling a random walk on a sphere" and "Maneuvering with limited moves on $S^2$," suggest a Rolling-Ball Game, as follows. A unit-radius ball sits on a grid point of a $\delta \times \delta$ regular grid in the plane, with $\delta \neq \pi/2$. Player 1 (Blue) rolls the ball to an adjacent grid point, and the track of the ball-plane contact point is drawn on the ball's surface. Player 2 (Red) rolls to an adjacent grid point. The two players alternate until each possible next move would cause the trace-path to touch itself, at which stage the player who last moved wins. In the following example, Red wins, as Blue cannot move without the path self-intersecting. Q1. What is the shortest possible game, assuming the players cooperate to end it as quickly as possible? For $\delta=\pi/4$, the above example suggests 6, but this min depends on $\delta$. It seems smaller $\delta$ needs 8 moves to create a cul-de-sac? Q2. What is the longest possible game, assuming the players cooperate to extend it as much as possible? Q3. Is there any reasonable strategy if the players are truly competing (as opposed to cooperating)? Addendum. We must have $\delta < 2 \pi$ to have even one legal move, and the first player wins immediately with one move for $\pi \le \delta < 2\pi$ (left below). The right image just shows a non-intersecting path of no particular significance for $\delta=\pi/8$. - 4 Does anyone have a convincing argument that the answer to Q2 is finite? The best I have is that it's totally obvious... – Johan Wästlund May 12 2011 at 22:08 @Johan: Yes, that does seem the first question to settle! I originally typed for Q2 that "it is clearly finite," but then backspaced over my typing, imagining some type of tight nesting/packing of the path. It is conceivable and so possible until explicitly excluded. – Joseph O'Rourke May 12 2011 at 22:48 1 I've just tried to find a proof of finiteness by using the pigeonhole principle to say that there are two long bits of path that take the same steps and start in almost exactly the same place with almost exactly the same orientation. It feels promising but I haven't managed to push it through. – gowers May 13 2011 at 14:41 2 If this game were played on a torus (and the nontracing parts were allowed to pass through the tracing board), I think there would be an arbitrarily long staircase path that was playable. I would thus be impressed by any combinatorial proof that does not hinge on the metric geometry of the object being rolled. Gerhard "Ask Me About System Design" Paseman, 2011.05.13 – Gerhard Paseman May 13 2011 at 16:40 2 From the question: "The first player wins...if $\delta\geq\pi$." What if $\delta\geq 2\pi$ though?? – Kevin Buzzard May 13 2011 at 20:49 show 5 more comments ## 3 Answers The game is, indeed, finite, though the reason I see is slightly different. I claim that if $L$ is a sufficiently long piece of a fixed shape and $R$ is a rotation sufficiently close to the identity, then $RL$ intersects $L$. This is a combination of three observations: 1) If the angle $a$ of rotation is small enough, then the "far" intersections are ruled out automatically, i.e., if $E'$ and $E''$ are two edges and $RE'$ intersects $E''$, then $E'$ and $E''$ are either the same or adjacent. Also, if $L$ and $RL$ do not intersect, then topologically everything is trivial: $L$ just "shifts to one side".
2) There are very many corners on $L$ and each corner sweeps the area about $\delta a$ under the rotation $R$, so the area between $L$ and $RL$ is about $Ca$ with large $C$. 3) The angle deficiency at each endpoint (if we close the curve somehow) is at most $a$. The ending is obvious: there are finitely many possible shapes of fixed large length, and each shape can have only boundedly many copies (otherwise there are two pieces differing by a small rotation), so the total length is bounded. - I do not understand your proof, could you please give more details? Also, would it work for the torus too? Because then it would contradict what Gerhard "Ask Me About System Design" Paseman claims in his comment. – domotorp May 15 2011 at 20:23 Sphere has positive curvature while torus is flat, so huge area on the torus is perfectly compatible with zero angle deficiency, which, of course, destroys this argument for the torus. What exactly don't you understand? – fedja May 15 2011 at 21:22 First, what is a "long piece of a fixed shape"? Do you just mean a long game? Why are far intersections ruled out automatically? Because the angle a is small compared to the length of L? If yes, then Ca would not be large... Anyhow, I am sure that your argument has some basic thing that I misunderstand and maybe I am not the only one. – domotorp May 16 2011 at 5:50 "Shape" is the fixed sequence of turns: you know whether you go straight, left, or right at each step but you don't know where and in what direction you start. If you have a broken line without self-intersections, the minimal distance between non-adjacent edges is positive and if the angle is much less than this distance, they cannot meet. $Ca$ does not need to be large in absolute terms, just to be much larger than $a$ to get a contradiction (the area must equal the angle deficiency regardless of whether it is large or small). – fedja May 16 2011 at 12:17 I see, thank you! Now I see why the ending works and why 1 and 2 are true but I do not see how you can get a contradiction from 3 (maybe because I am not sure I know what angle deficiency means). – domotorp May 17 2011 at 7:23 show 2 more comments For testing potential answers to Q3, let me suggest http://www.math.chalmers.se/~wastlund/Quirks/Game.html. The appearance as well as the specific set of bugs might depend on your browser, but as I hope will be clear, the length of the longest path as a function of the angle $\delta$ is quite funny (this is part of the reason I spent too much time on this problem, although I'm also interested in similar path-forming games for more "serious" reasons). After drawing a couple of quick sketches I realized I wasn't even able to figure out the behavior when $\delta$ approaches $\pi$ from below. As it turns out, the number of edges in the longest path is 5 throughout the interval $3\pi/4 \leq \delta < \pi$. For $\pi/2 < \delta < 3\pi/4$ it's 7 (although the game-tree changes also at $2\pi/3$). At $\delta = \pi/2$ it has an isolated local minimum of 5, and for angles just smaller than $\pi/2$, it seems to jump to 23. Here's why I couldn't let go of this problem: In all the cases where I can visualize the entire game-tree, which is when $\delta\geq \pi/2$, (1) Alice, who draws the first arc, wins the game-version. (2) Bob, while losing, can force Alice into a maximal-length path.
In other words, Alice cannot force a win in fewer moves than the length of the longest path. This property holds for some similar path-forming games where there are explicit winning strategies. (3) As a consequence of (1) and (2), the length of the longest path is always odd. A couple of other observations: If we allow players to cross their own edges but not the opponent's, then an edge is never a liability, and consequently Alice has a non-losing strategy. If on the other hand we allow players to cross the opponent's edges but not their own, then an extra edge cannot be an advantage, and now Bob has a non-losing strategy (two more questions arise here: are these games, too, necessarily finite, and can the (winning?) strategies be made explicit rather than just exhibited by strategy-stealing?). Therefore I thought for a while that I was on to something, and that there might be a beautiful reason that Alice must win. So I let Maple analyze the game for some suitably chosen angles, working symbolically to get reliable results. This is feasible when $\delta$ is such that the ring $\mathbb{Z}[\cos\delta, \sin\delta]$, where the coordinates of the points lie, is either a sub-ring of $\mathbb{Q}$ or of some nice algebraic field (degree 2 or 4). To summarize, I found counter-examples to each of (1), (2) and (3): For $\delta = \arctan(24/7)$, Bob wins at move 16 but the longest path has length 23. For $\delta = \arctan(12/5)$, the longest path has length 24 (but Alice can force a win in 13). Moreover, for $\delta=\pi/3$ there is a path of length 29 but Alice wins in 17. For $\delta=\arctan(4/3)$, Bob wins in 16 (I don't know the length of the longest path). Addendum. (by J.O'Rourke). I took the liberty of adding an image of Johan's $\pi/3$ longest path from his applet, as detailed in his comment below. Note the near miss where the 28th segment just misses the 1st segment. - 1 This is amazing analysis!!! And your game applet is awesome! I haven't absorbed it all yet, and so have no substantive remark to make, but I wanted to register my admiration. It might be useful if you could list those $\delta$ for which you know the longest paths. I would be interested to see a path that illustrates the jump to length 23, or the $\pi/3$ path of length 29, just to get some intuitive sense of their structure. Thanks for sharing all this information! – Joseph O'Rourke Jun 1 2011 at 13:48 Here's an example of a path of length 29 for $\delta=\pi/3$ (f,l,r = forward, left, right): f-f-f-r-l-r-r-f-l-l-r-l-f-f-f-f-r-l-r-r-f-l-l-r-l-f-f-f. And one of length 23 for $\delta=0.49\pi$ (actually based on rational arithmetic in Maple with $\delta=\arctan(5101/101)$, which seems to be equivalent). f-l-r-r-l-f-r-r-l-l-f-f-r-r-l-l-f-r-l-l-r-f. Several moves in this sequence are possible with small margins, but at the ninth move (tenth edge) it's really close. Amazingly, at the next move, a right turn would not have been legal. If you follow with the applet you see what I mean. – Johan Wästlund Jun 3 2011 at 7:27 I can show the game is finite, which answers one of the questions in the comments and shows the answer to question two is not infinity. The arcs on the sphere from the rolling are sections of circles of unit radius, and the angles between the arcs are either 90 degrees or zero. Furthermore, there is a finite limit to the number of rotations without switching the direction of rotation. So for a large number of rotations there will be a set of arcs with a large number of 90 degree angles.
But this will mean a large angle deficiency and an arbitrarily large area; however, there is a finite limit to the area of a subset of the surface of the sphere, so there is a contradiction. - Why can't the deficiency be low due to cancellation between $\pi/2$ and $-\pi/2$ angles? – Douglas Zare May 13 2011 at 6:39
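For readers who want to experiment with such paths, here is a rough simulation sketch (my own Python, not Johan's applet; the rolling convention, the implicit first edge before the turn symbols, and the crude distance-based intersection test are all assumptions on my part). It rolls a unit ball through a turn sequence in the f/l/r notation above, pulling the contact point back into the ball's frame to sample the traced curve:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def trace(turns, delta, samples=20):
    """Roll a unit ball on the plane: one implicit first edge, then one edge
    per symbol in `turns` (f = straight, l/r = turn left/right first).
    Returns sampled contact points on the ball's surface, in the ball's frame."""
    heading = np.array([1.0, 0.0])   # current rolling direction in the plane
    R = np.eye(3)                    # ball frame -> world frame
    pts = []
    for mv in "f" + turns:           # leading "f" supplies the implicit first edge
        if mv == "l":
            heading = np.array([-heading[1], heading[0]])
        elif mv == "r":
            heading = np.array([heading[1], -heading[0]])
        axis = np.array([-heading[1], heading[0], 0.0])   # = z-hat x heading
        for t in np.linspace(0.0, delta, samples):
            p_world = np.array([0.0, 0.0, -1.0])          # contact pt, world frame
            pts.append((rot(axis, t) @ R).T @ p_world)    # pull back to ball frame
        R = rot(axis, delta) @ R                          # roll one full edge
    return np.array(pts)

# Johan's 29-edge path for delta = pi/3 (28 turn symbols after the first edge)
pts = trace("fffrlrrfllrlffffrlrrfllrlfff", np.pi / 3)
# Crude non-crossing check: smallest distance between samples that are far
# apart along the path; near-misses show up as small positive values.
n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
far = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) > 40
print(d[far].min())
```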
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 59, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533022046089172, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/24566/modular-numbers/24568
# Modular numbers I just learned about modular numbers on wikipedia, such as $17 \equiv 3\pmod{7}$. So what is infinity $\pmod{n}$? It can't very well be all the numbers at once, so what happens? ## 1 Answer When we say $a (\bmod n)$, we need $a \in \mathbb{Z}$ and $n \in \mathbb{Z} \backslash \{0\}$. So your question "$\infty (\bmod n)$" doesn't make sense in the first place. It is like asking "What is $\text{apple} (\bmod n)$?" What you probably mean and want to know is "What is $\displaystyle \lim_{x \in \mathbb{Z}, x \rightarrow \infty} x (\bmod n)$?". If $n \neq \pm 1$, then the answer is "It doesn't exist". If $n = \pm 1$, then the answer is $0$. - I see. Perhaps I should read up on this whole infinity business. Thanks – Joseph Mar 2 '11 at 5:27 +1, "Apple mod $n$" is really funny. – Eric♦ Mar 2 '11 at 5:29 1 @Joseph: Yes. The take-home message is that $\infty$ is not a number. It is shorthand for saying that something is unbounded. – user17762 Mar 2 '11 at 5:30
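The non-existence of the limit is easy to see numerically, since the residues just keep cycling (a quick illustration of mine, not part of the original answer):

```python
n = 7
# [6, 0, 1, 2, 3, 4, 5, 6, 0, 1]: x % n cycles forever as x grows, so no limit
print([x % n for x in range(20, 30)])
```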
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593902826309204, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78948/does-left-invertible-imply-invertible-in-full-group-c-algebras-discrete-case
## Does left-invertible imply invertible in full group C*-algebras (discrete case)? The following question/problem has been bugging me on and off for some time now: so I thought it might be worth broaching here on MO, as a case of "ask the experts". Let $G$ be a discrete group. Kaplansky observed that since the group von Neumann algebra $VN(G)$ is a finite von Neumann algebra, each left-invertible element in $VN(G)$ is invertible. A proof is outlined in M.S. Montgomery, Left and right inverses in group algebras, Bull. AMS 75 (1969) (Well, she actually states a weaker result, but inspection shows that her argument extends to give what we claim. See also my remarks on this previous MO answer.) The basic idea is to exploit the faithful trace $T\mapsto \langle T\delta_e,\delta_e\rangle$ and how it behaves on idempotents: for if $ab=I$, then $ba$ is an idempotent. In particular, each left-invertible element of the reduced group $C^*$-algebra is invertible. Question. What can we say for the full group $C^*$-algebra? Is every left-invertible element in $C^*(G)$ automatically invertible? Some basic observations: • The case where $G$ is the free group on two generators follows from a result of M-D Choi [no relation] who showed that $C^*({\mathbb F}_2)$ embeds into a direct product of matrix algebras. • More generally, if $C^*(G)$ has a faithful trace then one can use the same argument as for the reduced $C^*$-algebra to get a positive answer. • If $C^*(G)$ has no non-trivial projections then $ab=I$ implies $ba=I$. (I think this was known to be true for $G={\mathbb F}_2$ but I've forgotten the reference at present.) • There are examples of $G$ where $C^*(G)$ has no faithful trace; these can be found in work of Bekka and Louvet, and come from exploiting Property (T). M. B. Bekka and N. Louvet, Some properties of $C^*$-algebras associated to discrete linear groups, in: $C^*$-algebras (Münster, 1999), 1–22, Springer, Berlin, 2000. - Would Bill Johnson know? There is an ask-johnson tag. See mathoverflow.net/questions/tagged/ask-johnson – Will Jagy Oct 24 2011 at 3:36 @Will: I think of WBJ as more of a Banach-space specialist than an operator algebraist, but it is entirely possible he might spot something I haven't here. – Yemon Choi Oct 24 2011 at 3:48 I printed out your arXiv piece 1003.1650v2. Very nice. – Will Jagy Oct 24 2011 at 4:02 1 Just a remark: if a $C^*$-algebra $A$ does not have tracial states then $A^{**}$ has an isometry which is not unitary, thus $A^{**}$ does not satisfy the Kaplansky condition. A little bit more is true: for every $n$ there are $n$ pairwise orthogonal isometries in $A^{**}$. – Kate Juschenko Oct 24 2011 at 9:51 Nice question, and I do not know the answer. In fact, Yemon would be hard pressed to ask a question about operator algebras to which I know the answer but he does not. – Bill Johnson Oct 24 2011 at 12:43 show 1 more comment ## 2 Answers That's a nice question. I don't know the answer for arbitrary groups, but this finiteness property (left invertible implies invertible in the full group C*-algebra) is known for more groups. M.D. Choi's result was generalized by [Exel and Loring, Internat. J. Math. 1992]. We say that a C*-algebra is residually finite dimensional (RFD) if it has a separating family of finite dimensional representations.
RFD algebras have this finiteness property and finite groups, abelian groups, etc., have RFD full group C*-algebras. Exel and Loring show that unital full free products of RFD C*-algebras are RFD. So if $C^*(G_i)$ are RFD (i=1,2), then so is $C^*(G_1*G_2)$. A broader class of C*-algebras than the RFD ones are the MF algebras of [Blackadar and Kirchberg, Math. Ann., 1997]. In MF algebras, all left invertibles are invertible. Recently, [Hadwin, Q. Li, J. Shen, Canad. J. Math. 2011] showed that unital full free products of MF C*-algebras are MF. - 3 Welcome to MO, Ken! – Jon Bannon Oct 25 2011 at 0:23 Thanks for the references - I will have to set aside some time to read up on these results. – Yemon Choi Oct 25 2011 at 0:24 There is an alternative argument for the free group, not using that free groups are residually finite-dimensional. Let $\pi$ be a faithful representation of $C^{\ast}(F)$ on a Hilbert space $H$. Then, as $U(H)$ is connected, $\pi$ can be deformed to the trivial representation in the point-norm topology, i.e. there exists a family of unitary representations $\pi_t$ for $t \in [0,1]$, such that $t \mapsto \pi_t(a)$ is norm-continuous for each $a \in C^{\ast}(F)$, $\pi_0=\pi$ and $\pi_1(g)=1_H$ for all $g \in F$. Now, if $ab=1$ in $C^{\ast}(F)$, then $\pi_t(ba)$ is a continuous path of projections ending at $1_H$. Hence, $\pi_0(ba)=1_H$ and $ba=1$, as $\pi$ was faithful. EDIT: The same argument works if the $C^{\ast}$-algebra embeds into some contractible algebra (i.e. homotopy equivalent to $\mathbb C$). However, even though many reasonable topological spaces are quotients of contractible topological spaces, only few reasonable $C^{\ast}$-algebras have this property. There is a close relationship with the concept of quasi-diagonality, which appeared in the work of Voiculescu. - I don't quite follow-- is the point about why this works for a free group that I can take a path through the unitaries for each generator, and then let these (uniquely) induce my *-rep of C^*(F)-- that we have no relations to worry about means that this always works?? – Matthew Daws Oct 27 2011 at 19:28 Thanks Andreas. Reading your answer it reminded me that I'd seen a similar outline before - I guess this kind of argument is very natural for those who've looked at K-theory of group (C*-)algebras. After doing some looking, it seems that this kind of argument was used by J. Cohen in ams.org/mathscinet-getitem?mr=546507 – Yemon Choi Oct 27 2011 at 19:29 Matthew, you are absolutely right. – Andreas Thom Oct 28 2011 at 7:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221549034118652, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/106269?sort=oldest
## which algebraic integers in a cyclotomic field give you integer absolute value? Does anyone know an answer to this question? Question: In a cyclotomic field, which algebraic integers have integer absolute value? Revision 1: I would like to add this to the above question. Let $\omega$ be a primitive $n$-th root of unity; for which sets of exponents $A \subseteq \{0,1,\dots,n-1\}$ is the absolute value of $\sum_{i \in A} \omega^i$ an integer? This might not be any help in making it solvable, but at least it avoids some repetitions. - To begin with, everything of the form $n\zeta^r$, where $n$ and $r$ are integers, and $\zeta$ is the root of unity generating the field. – Gerry Myerson Sep 3 at 22:45 1 My guess would be (1) roots of unity (2) elements in imaginary quadratic subfields and (1+2) things which can be written as products of the above. Can anyone prove it? – David Speyer Sep 3 at 22:56 7 Possible counterexample: $z^2$ where $z = 1 + \zeta^2 + \zeta^5 + \zeta^6$ and $\zeta$ is a $13$th root of unity. There is no imaginary quadratic subfield of ${\bf Q}(\zeta)$, but $|z^2| = 3$ because $\lbrace 0, 2, 5, 6 \rbrace$ is a perfect difference set $\bmod 13$. – Noam D. Elkies Sep 4 at 1:09 Nice! Well, that explains why I couldn't prove it. I guess it could still be a product of quadratic imaginaries coming from, for example $\mathbb{Q}(\sqrt{-3})$ and $\mathbb{Q}(\sqrt{-39})$, but even if that is true the methods of proof I was trying wouldn't have found it. – David Speyer Sep 4 at 1:39 1 If $a$ and $b$ are positive integers for which $a+b$ is the square of an integer, then $\sqrt{a}+i\sqrt{b}$ is an algebraic integer in a cyclotomic field whose absolute value is an integer. More generally, the problem reduces to solving $x^2+y^2=z^2$ in real algebraic integers $x,y$ belonging to a cyclotomic field, and a rational integer $z$. – Richard Stanley Sep 4 at 13:53 show 2 more comments ## 3 Answers There is a primitive way to do it, given $n$. An element of such a ring of integers can of course be written (non-uniquely, but this does not matter) as $\sum a_i\zeta_n^i$, and the squared absolute value is then $\sum_{i,j} a_ia_j\cos(2\pi(i-j)/n)$. So just figure out the relations in these cosine values over $\mathbb{Z}$ (for a specific value of $n$), and then write down some relations you need between the $a_i$'s and $a_j$'s to make sure you get an integer in the end. Of course, using a basis instead of a spanning set simplifies these relations, but it doesn't matter: you can just write the relations and then decompose any element this way and just check the relations. Hope this helps. - As a slight modification, which avoids handling the cosines, I suggest the following: Set $a=\sum a_i\zeta^i$. Then $b=a\bar a$ is an algebraic integer, and it is an integer if and only if $b^\sigma=b$ for each $\sigma$ in the Galois group of $\mathbb Q(\zeta)/\mathbb Q$. But this Galois group acts by replacing $\zeta$ with $\zeta^k$ for $k$ prime to $n$, so $\sum a_i\zeta^i\sum a_i\zeta^{-i}=\sum a_i\zeta^{ik}\sum a_i\zeta^{-ik}$ for each such $k$. Introducing a basis then gives a system of quadratic equations of the $a_i$. (Isn't the question actually whether $\sqrt{b}$ is an integer?) – Peter Mueller Sep 3 at 21:20 For $n=3$ all elements are integers because the absolute value is just the algebraic norm, which is always an integer.
The thing you are making sure of is that $(a_0a_1+a_0a_2+a_1a_2)+(a_0a_1+a_0a_2+a_1a_2)$ is even, which of course it is. – Will Sawin Sep 3 at 21:21 Also your formulas for the cosines are wrong. For $n=3$ the cosine is either $-1/2$ or $1$, and for $n=5$ it is $1$ or $(1\pm \sqrt{5})/4$, not the other way around. – Will Sawin Sep 3 at 21:22 Actually as Peter points out for $n=3$ you need to know that the algebraic norm is a perfect square, which ends up being something about the ideal generated by the number that you can check locally at each prime. – Will Sawin Sep 3 at 21:24 Oops, I screwed up, let me fix it. – Joseph Victor Sep 4 at 0:22 What follows is not mine. It was posted by Scaroth, and then deleted by Scaroth, with the explanation, "Given the subsequent comments of the OP, the indication was that this answer was not even read, so I will delete it." However, the answer was read by at least 9 people who gave it upvotes. Whether or not it helped OP, several people found it helpful. I have no way to contact Scaroth to ask him/her to reconsider, so I will simply repost it. $\newcommand\p{\mathfrak{p}}$ $\newcommand\Q{\mathbf{Q}}$ Denote the cyclotomic field by $F$. Since complex conjugation commutes with every automorphism of $G = \mathrm{Gal}(F/\mathbf{Q})$, it follows that complex conjugation preserves absolute values in $F$. Hence you are asking exactly for a classification of so-called $n^2$-Weil numbers. (By definition, $m$-Weil numbers are algebraic integers which have absolute value $\sqrt{m}$ for every complex embedding.) It is generally thought that if you fix an integer $m$, then there are only finitely many $m$-Weil numbers up to roots of unity in all cyclotomic fields simultaneously, but this is not yet known. On the other hand, if you fix $F$ and increase $m$, such numbers are easy to come by. Indeed, it's easy to find such $\beta$ so that the Galois group $G$ of $F$ acts faithfully on the conjugates of not only the element $\beta$, but the ideal $(\beta)$ as well (which is not the case for $(\zeta \alpha)$ for a root of unity $\zeta$ whenever $\mathbf{Q}(\alpha) \ne F$.) The answer to your question, therefore, is "there are a lot of them." There's no easy classification of Weil numbers, however, since otherwise the conjecture mentioned above would be known. If there's a more precise property of the set of numbers that you are interested in, please ask a follow up question. Recall that, in a cyclotomic field (more generally, a CM field), the real units have finite index inside $\mathcal{O}^{\times}_F$. It follows that the group of units of the form $\epsilon \cdot \overline{\epsilon}$ has finite index, and hence that there exists an integer $N$ such that: For every (totally) real unit $\eta$, the unit $\eta^N$ is of the form $\epsilon \cdot \overline{\epsilon}$ for some unit $\epsilon$. Let us now construct some Weil numbers. Suppose that the prime $p$ splits completely in $F$. There are $[F:\mathbf{Q}] = 2d$ such primes, and thus $2^d$ ways to choose a set $S$ of $d$ primes which includes exactly one from each pair ${\p,\overline{\p}}$. Let $$I = \prod_{\p \in S} \p.$$ Then $I \overline{I} = (p)$. Let $h$ denote the class number of $F$, and write $I^h = (\alpha)$. Then $$\alpha \overline{\alpha} = p^h \eta$$ for some totally real unit $\eta$.
It follows that $$\alpha^{N} \overline{\alpha}^N = p^{hN} \epsilon \overline{\epsilon},$$ and hence that $\beta:=\alpha^N/\epsilon$ satisfies $\beta \overline{\beta} = p^{hN}$. Moreover, from the factorization of $\beta$, we see that the ideal $(\beta)$ is fixed only by the subgroup $H \subset \mathrm{Gal}(F/\mathbf{Q})$ of elements which fix the set $S$. In particular, if $|G| > 4$, one can choose $S$ such that the stabilizer of $\beta$ is trivial and thus $F = \Q(\beta)$. There are many variations on the above argument. For example, one can probably find $p$-Weil numbers for prime $p$ by finding primes $p$ which split completely in some ray class field of conductor $M \infty$ (designed so that totally positive units which are $1 \mod M$ are of the form $\epsilon \cdot \overline{\epsilon}$). (Given the subsequent comments of the OP, the indication was that this answer was not even read, so I will delete it.) - Thank you, Gerry. I also thought it was unfortunate that scaroth deleted his answer, so that only those with more than 10k points could see it. The OP is not the only one who can benefit. – Todd Trimble Sep 8 at 23:10 Good heavens, who voted this down?! Gerry was being very helpful. – Todd Trimble Sep 8 at 23:43 @Todd By the way, I think scaroth is female. I could be wrong, but might as well not assume. And I agree that scaroth's answer was very helpful. – David Speyer Sep 9 at 12:48 @David: yes, of course you're right. An unfortunate slip on my part. – Todd Trimble Sep 9 at 14:51 Katie asks further about the case where we have a subset $A$ of $\mathbb{Z}/n$ and asks for $\sum_{k \in A} \zeta^k$ to have integer absolute value. She writes that, for $n$ prime, the only solutions should be the trivial ones $|A| =0$, $1$, $p-1$ or $p$. The point of this note is that she is correct, and that there probably isn't a good description like this for $n$ not prime. This answer is built on Noam Elkies's very helpful comment above. Let $p$ be prime and let $A \subseteq \mathbb{Z}/p$. Let $a=|A|$. Let $z=\sum_{k \in A} \zeta^k$. Let $b_k = \# \{ (i,j) \in A^2 : i-j \equiv k \bmod p \}$. So $$z \bar{z} = a + \sum_{k=1}^{p-1} b_k \zeta^k$$ Since the minimal polynomial of $\zeta$ is $1+\zeta+\zeta^2+\cdots+\zeta^{p-1}$, the only way that $z \bar{z}$ can be an integer is if all the $b_k$ are equal to some common value, say $b$. In this case, $z \bar{z} = a-b$. Furthermore, we want $\sqrt{z \bar{z}}$ to be an integer, say $n$. So $a=n^2+b$ for some nonnegative integer $n$. Now, we must have $a(a-1) = b(p-1)$ since $\sum_{k=1}^{p-1} b_k$ is clearly $a(a-1)$. So $(n^2+b)(n^2+b-1) = b(p-1)$. If $b=0$ then $|A|=0$ or $1$. If not, we can divide by $b$ to write: $$p=\frac{(n^2+b)(n^2+b-1)}{b}+1 = \frac{(n^2+n+b)(n^2-n+b)}{b}.$$ (That factorization came out of nowhere, as far as I'm concerned.) Since $p$ is prime, that means that at least one of $n^2+n+b$ and $n^2-n+b$ is $\leq b$. But $n^2 \pm n \geq 0$ for nonnegative integer $n$. So $n=0$ or $1$. This gives $p=b$ and $p=b+2$, and then $a=p$ and $a=p-1$ respectively. Now, if $n$ is not prime, then there are lots of sets $A$ such that all the $b_k$ are equal. The special case $b=1$ is called a perfect difference set, and they are pretty plentiful.
For example, $(0,5,6,9,19)$ is a perfect difference set modulo $21$, giving $$|1+\zeta_{21}^5+\zeta_{21}^6+\zeta_{21}^9+\zeta_{21}^{19}| = \sqrt{5-1} = 2.$$ According to the mathworld link above, there are perfect difference sets modulo $q^2+q+1$ for every prime power $q$; taking $q=p^2$ we deduce that we can always find $A$ in $\mathbb{Z}/(p^4+p^2+1)$ with absolute value $p$. Based on skimming the results of a quick google search, my impression is that there are lots of methods known for constructing perfect difference sets, but no classification. Probably someone has studied the case $b>1$, but I didn't see it. But even if there were a classification, that wouldn't be a complete answer, because $1+\zeta+\cdots + \zeta^{n-1}$ is not the minimal polynomial of $\zeta$ for $n$ composite, and I don't see how to control the other solutions that might come up because of this. -
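Both the Elkies example from the comments and the mod-21 example above are easy to verify numerically (a small sketch of mine, not part of the thread):

```python
import numpy as np

def abs_of_sum(A, n):
    """|sum_{k in A} zeta_n^k| for zeta_n = exp(2*pi*i/n)."""
    return abs(sum(np.exp(2j * np.pi * k / n) for k in A))

print(abs_of_sum({0, 2, 5, 6}, 13) ** 2)   # ~3.0: Elkies' example, |z|^2 = 3
print(abs_of_sum({0, 5, 6, 9, 19}, 21))    # ~2.0: the difference set mod 21
```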
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 164, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600393772125244, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/142489/upper-bound-for-the-series-sum-n-geq-1-frac1n1a1-sum-k-0n-bk
# Upper bound for the series $\sum_{n\geq 1}\frac{1}{(n+1)^{a+1}}\sum_{k=0}^n b^k\left(\frac{(n-k)!}{n!}\right)^a$ I want to show that the series $$\sum_{n\geq 1}\frac{1}{(n+1)^{a+1}}\sum_{k=0}^n b^k\left(\frac{(n-k)!}{n!}\right)^a$$ converges for $a,b>0$. I have tried this so much that the smallest hint will probably suffice. I asked a question before which would have been enough but it is not true. Right now I am really stuck and frustrated. Any help would be greatly appreciated! - $b\leq1$ is pretty easy. But $b>1$ is killing me :( – wircho May 8 '12 at 4:16 Got it. I will post my own answer! – wircho May 8 '12 at 4:42 ## 2 Answers It is enough to show that the sum for $n\geq0$ converges. Changing sums and manipulating I get: $$\sum_{k\geq0}\frac{b^k}{k!}\sum_{n\geq k}\frac{1}{(n+1)^{a+1}\left(\begin{array}{c}n\\ k\end{array}\right)}$$ $$\leq\sum_{k\geq0}\frac{b^k}{k!}\sum_{n\geq k}\frac{1}{(n+1)^{a+1}}$$ $$\leq\sum_{k\geq0}\frac{b^k}{k!}\sum_{n\geq 0}\frac{1}{(n+1)^{a+1}}$$ $$=\sum_{k\geq0}\frac{b^k}{k!}C=Ce^{b}$$ where $C$ is a constant $<\infty$ because $a+1>1$. - This bound only works for $a\geq 1$. The sum is not bounded by $C e^b$ for $0<a<1$. – oen May 13 '12 at 18:27 For convenience, we consider the sum starting at $n=0$. Then $$\begin{eqnarray*} \sum_{n=0}^\infty \frac{1}{(n+1)^{a+1}}\sum_{k=0}^n b^k\left(\frac{(n-k)!}{n!}\right)^a &=& \sum_{k=0}^\infty \frac{b^k}{(k!)^a} \sum_{n=k}^\infty \frac{1}{(n+1)^{a+1}} \frac{1}{{n\choose k}^{a}} \\ &\leq& \sum_{k=0}^\infty \frac{b^k}{(k!)^a} \sum_{n=k}^\infty \frac{1}{(n+1)^{a+1}} \\ &\leq& \zeta(a+1) \sum_{k=0}^\infty \frac{b^k}{(k!)^a}. \\ \end{eqnarray*}$$ We have used the fact that $1/{n\choose k}^a \leq 1$ for $a>0$. Notice that $k!^a \geq k!$ only if $a\geq 1$. Thus, the sum is bounded by $\zeta(a+1) \sum_{k=0}^\infty b^k/k! = \zeta(a+1)e^b$ only if $a\geq 1$. The sum converges if $\sum_{k=0}^\infty b^k/(k!)^a$ converges. But the ratio of successive terms goes like $b/k^a$, and so vanishes in the limit since $a>0$. Thus, the series converges. Notice the convergence of $\sum_{k=0}^\infty b^k/(k!)^a$ can be very slow. Let $a=1/10$ and $b=10$. It is not until we reach $k=10^{10}$ that the ratio of successive terms is less than $1$. -
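The slow convergence that oen points out is easy to see numerically (my own check, not part of the thread; the partial sums are computed in log space to avoid overflow):

```python
from math import exp, lgamma, log

def partial(a, b, K):
    """Partial sum of sum_{k<K} b^k/(k!)^a, computed in log space."""
    return sum(exp(k * log(b) - a * lgamma(k + 1)) for k in range(K))

print(partial(1.0, 10.0, 200), exp(10.0))  # both ~22026.47: for a=1 the sum is e^b
# For a = 0.1, b = 10 the ratio of successive terms b/(k+1)^a only drops
# below 1 once k + 1 > b**(1/a), i.e. around k ~ 1e10, as oen notes.
print(10.0 ** (1 / 0.1))
```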
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611841440200806, "perplexity_flag": "head"}
http://theoryclass.wordpress.com/2012/07/30/bayesian-and-dominant-incentive-compatibility/?like=1&source=post_flair&_wpnonce=1a35c759c1
# Bayesian and Dominant Incentive Compatibility (Corrected 9/29/2012) July 30, 2012 in Auctions, Mechanism design, Uncategorized In this post I describe an alternative proof of a nifty result that appears in a forthcoming paper by Goeree, Kushnir, Moldovanu and Shi. They show (under private values) given any Bayesian incentive compatible mechanism, M, there is a dominant strategy mechanism that gives each agent the same expected surplus as M provides. For economy of exposition only, suppose 2 agents and a finite set of possible outcomes, ${A}$. Suppose, also, the same type space, ${\{1, \ldots, m\}}$ for both. Let ${f(\cdot)}$ be the density function over types. To avoid clutter, assume the uniform distribution, i.e., ${f(\cdot) = \frac{1}{m}}$. Nothing in the subsequent analysis relies on this. When agent 1 reports type ${s}$ and agent 2 reports type ${t}$, denote by ${z_r(s,t)}$ the probability that outcome ${r \in A}$ is selected. The ${z}$'s must be non-negative and satisfy ${\sum_{r \in A}z_r(s,t) = 1.}$ Associated with each agent ${i}$ is a vector ${\{a_{ir}\}_{r \in A}}$ that determines, along with her type, the utility she enjoys from a given allocation. In particular, given the allocation rule ${z}$, the utility that agent ${1}$ of type ${t}$ enjoys when the other agent reports type ${s}$ is ${\sum_{r \in A}ta_{1r}z_r(t,s).}$ A similar expression holds for agent 2. Let ${q_i(s,t) = \sum_{r \in A}a_{ir}z_r(s,t).}$ Interpret the ${q}$'s as the `quantity' of goods that each agent receives. Dominant strategy implies that ${q_1(s,t)}$ should be monotone increasing in ${s}$ for each fixed ${t}$ and ${q_2(s,t)}$ should be monotone increasing in ${t}$ for each fixed ${s}$. The interim `quantities' will be: ${mQ_i(s) = \sum_t\sum_{r \in A}a_{ir}z_r(s,t).}$ Bayesian incentive compatibility (BIC) implies that the ${Q_i}$'s should be monotone. To determine if given BIC interim `quantities' ${Q_i}$'s can be implemented via dominant strategies, we need to know if the following system is feasible $\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] \geq 0\,\, \forall \,\, i = 2, \ldots, m \ \ \ \ \ (1)$ $\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] \geq 0\,\, \forall \,\, j = 2, \ldots, m \ \ \ \ \ (2)$ $\displaystyle \sum_t\sum_{r \in A}a_{1r}z_r(s,t) = mQ_1(s) \ \ \ \ \ (3)$ $\displaystyle \sum_s\sum_{r \in A}a_{2r}z_r(s,t) = mQ_2(t) \ \ \ \ \ (4)$ $\displaystyle \sum_{r \in A}z_r(s,t) = 1\,\, \forall s,t \ \ \ \ \ (5)$ $\displaystyle z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (6)$ System (1-6) is feasible iff the optimal objective function value of the following program is zero: $\displaystyle \min \sum_{i,j}w_{ij} + \sum_{i,j}h_{ij}$ subject to $\displaystyle \sum_{r \in A}a_{1r}[z_r(i,j) - z_r(i-1,j)] + w_{ij} - u_{ij} = 0\,\, \forall \,\, i = 2, \ldots, m\,\, (\mu_{ij}) \ \ \ \ \ (7)$ $\displaystyle \sum_{r \in A}a_{2r}[z_r(i,j) - z_r(i,j-1)] + h_{ij} - v_{ij} = 0\,\, \forall \,\, j = 2, \ldots, m\,\, (\nu_{ij}) \ \ \ \ \ (8)$ $\displaystyle \sum_j\sum_{r \in A}a_{1r}z_r(i,j) = mQ_1(i)\,\, (\alpha_i) \ \ \ \ \ (9)$ $\displaystyle \sum_i\sum_{r \in A}a_{2r}z_r(i,j) = mQ_2(j)\,\, (\beta_j) \ \ \ \ \ (10)$ $\displaystyle \sum_{r \in A}z_r(i,j) = 1\,\, \forall i,j\,\, (\lambda(i,j)) \ \ \ \ \ (11)$ $\displaystyle w_{ij},\,\, h_{ij},\,\, z_r(i,j) \geq 0\,\, \forall i,j,r \ \ \ \ \ (12)$ Let ${(z^*, w^*, h^*, u^*, v^*)}$ be an optimal solution to this program. Suppose, for a contradiction, that there is a pair ${(p,q)}$ such that ${w^*_{pq} > 0}$.
I will argue that there must exist an ${s}$ such that ${u^*_{ps} > 0}$. Suppose not; then for each ${j \neq q}$, either ${w^*_{pj} > 0}$, or ${w^*_{pj} = 0}$ and ${u^*_{pj} = 0}$ (at optimality, it cannot be that ${w^*_{pq}}$ and ${u^*_{pq}}$ are both non-zero). In this case $\displaystyle m[Q_1(p)-Q_1(p-1)] = \sum_j\sum_{r \in A}a_{1r}z^*_r(p,j) - \sum_j\sum_{r \in A}a_{1r}z^*_r(p-1,j)$ $= \sum_{j: w^*_{pj} > 0}\sum_{r \in A}a_{1r}[z^*_r(p,j) - z^*_r(p-1,j)]$ $= -\sum_j w^*_{pj}$. This last term is negative, contradicting the monotonicity of ${Q_1}$. Therefore, there is an ${s}$ such that ${w^*_{ps} = 0}$ but ${u^*_{ps} > 0}$. Let ${Z_1(i,j) = \sum_{r \in A}a_{1r}z^*_r(i,j)}$, ${Z_2(i,j) = \sum_{r \in A}a_{2r}z^*_r(i,j)}$ and denote by ${Z(i,j)}$ the point ${(Z_1(i,j), Z_2(i,j))}$. Observe that ${Z(i,j)}$ is in the convex hull ${K}$ of ${\{(a_{1r}, a_{2r})\}_{r \in A}}$ for all ${i,j}$. Thus choosing ${\{z_r(i,j)\}_{r \in A}}$'s amounts to choosing a point ${Z(i,j) \in K}$. Equivalently, choosing a point ${Z(i,j) \in K}$ gives rise to a set of ${\{z_r(i,j)\}_{r \in A}}$'s. For convenience assume that ${Z(i,j)}$ is in the strict interior of ${K}$ for all ${(i,j)}$ and that ${K}$ is full dimensional. This avoids having to deal with secondary arguments that obscure the main idea. Recall that constraint (7) gives ${Z_1(i,j) - Z_1(i-1,j) = u_{ij} - w_{ij}}$, so ${w^*_{pq} > 0}$ implies ${Z_1(p,q) < Z_1(p-1,q)}$, while ${u^*_{ps} > 0}$ implies that ${Z_1(p,s) > Z_1(p-1,s)}$. Take all points ${\{Z(i,q)\}_{i \geq p}}$ and shift them horizontally to the right by ${\delta}$, for some ${\delta}$ with ${0 < \delta \leq \min(w^*_{pq}, u^*_{ps})}$ small enough that the shifted points stay in ${K}$. Call these new points ${\{Z'(i,q)\}_{i \geq p}}$. Observe that ${Z'(i,q) \in K}$ for all ${i \geq p}$. Next, take all points ${\{Z(i,s)\}_{i \geq p}}$ and shift them horizontally to the left by ${\delta}$ to form new points ${\{Z'(i,s)\}_{i \geq p}}$. These points are also in ${K}$. Leave all other points ${\{Z(i,j)\}_{j \neq q,s}}$ unchanged. Because the vertical coordinates of all points were left unchanged, (8) and (10) are satisfied by this choice of points. Because ${\{Z(i,q)\}_{i \geq p}}$ and ${\{Z(i,s)\}_{i \geq p}}$ were shifted in opposite directions along the horizontal, (9) is still true. Finally, because all points in ${\{Z(i,q)\}_{i \geq p}}$ and ${\{Z(i,s)\}_{i \geq p}}$ were shifted by the same amount, (7) continues to hold. The shift leftwards of ${Z(p,s)}$ reduces ${u_{ps}}$ while the rightward shift of ${Z(p,q)}$ reduces ${w_{pq}}$. Thus, we get a new feasible solution with a lower objective function value, contradicting optimality. If ${Z(p,q)}$ and ${Z(p,s)}$ are not in the interior of ${K}$ but on the boundary, then horizontal shifts alone may place them outside of ${K}$. In the case of ${Z(p,q)}$ this can only happen if ${Z_2(p,q) > Z_2(p-1,q)}$. In this case, shift ${Z(p,q)}$ across and to the right by ${\delta}$ as well and then downwards by the same amount. This would have to be matched by a corresponding upward shift of some point ${Z(h,q)}$. Similarly with ${Z(p,s)}$. Thanks to Alexey Kushnir and Ahmad Peivandi for comments.
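To make the feasibility check concrete, here is a small numerical sketch (entirely my own construction, using scipy, not code from the post; the toy instance and variable layout are assumptions, and I also impose $u, v \geq 0$, which is implicit in the program above). It builds the linear program (7)-(12) from a candidate allocation rule's interim quantities and reports the optimal objective value, which is zero exactly when system (1)-(6) is feasible:

```python
import numpy as np
from scipy.optimize import linprog

m, R = 3, 3                                  # toy sizes: m types, R outcomes
rng = np.random.default_rng(0)
a1, a2 = rng.random(R), rng.random(R)        # coefficients a_{1r}, a_{2r}

z0 = rng.random((m, m, R))                   # a candidate allocation rule
z0 /= z0.sum(axis=2, keepdims=True)          # each z0[i, j, :] sums to 1
mQ1 = (z0 @ a1).sum(axis=1)                  # m Q_1(i) = sum_j sum_r a_{1r} z_r(i,j)
mQ2 = (z0 @ a2).sum(axis=0)                  # m Q_2(j)

# Variable order: z, then w, u (rows i >= 1), then h, v (columns j >= 1).
nz, nw, nh = m * m * R, (m - 1) * m, m * (m - 1)
zi = lambda i, j, r: (i * m + j) * R + r
wi = lambda i, j: nz + (i - 1) * m + j
ui = lambda i, j: nz + nw + (i - 1) * m + j
hi = lambda i, j: nz + 2 * nw + i * (m - 1) + (j - 1)
vi = lambda i, j: nz + 2 * nw + nh + i * (m - 1) + (j - 1)
N = nz + 2 * (nw + nh)

rows, rhs = [], []
def eq(coefs, val):                          # register one equality constraint
    row = np.zeros(N)
    for idx, cf in coefs:
        row[idx] += cf
    rows.append(row); rhs.append(val)

for i in range(1, m):                        # constraints (7)
    for j in range(m):
        cs = [(zi(i, j, r), a1[r]) for r in range(R)]
        cs += [(zi(i - 1, j, r), -a1[r]) for r in range(R)]
        eq(cs + [(wi(i, j), 1.0), (ui(i, j), -1.0)], 0.0)
for j in range(1, m):                        # constraints (8)
    for i in range(m):
        cs = [(zi(i, j, r), a2[r]) for r in range(R)]
        cs += [(zi(i, j - 1, r), -a2[r]) for r in range(R)]
        eq(cs + [(hi(i, j), 1.0), (vi(i, j), -1.0)], 0.0)
for i in range(m):                           # constraints (9)
    eq([(zi(i, j, r), a1[r]) for j in range(m) for r in range(R)], mQ1[i])
for j in range(m):                           # constraints (10)
    eq([(zi(i, j, r), a2[r]) for i in range(m) for r in range(R)], mQ2[j])
for i in range(m):                           # constraints (11)
    for j in range(m):
        eq([(zi(i, j, r), 1.0) for r in range(R)], 1.0)

c = np.zeros(N)                              # objective: min sum w + sum h
for i in range(1, m):
    for j in range(m):
        c[wi(i, j)] = 1.0
for j in range(1, m):
    for i in range(m):
        c[hi(i, j)] = 1.0

res = linprog(c, A_eq=np.array(rows), b_eq=np.array(rhs),
              bounds=(0, None), method="highs")
print(res.fun)   # zero (up to tolerance) iff system (1)-(6) is feasible
```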
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 105, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229321479797363, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51604?sort=oldest
## Can Lie algebra cohomology prove Cartan's Semisimplicity Criterion? Here is what I mean by "Cartan's semisimplicity criterion": Let $\mathfrak g$ be a finite-dimensional Lie algebra over a field of characteristic $0$. Assume that the center of $\mathfrak g$ is trivial. Then, the following three assertions are equivalent: (1) The Killing form on $\mathfrak g\times \mathfrak g$ is nondegenerate. (2) Every short exact sequence of finite-dimensional representations of $\mathfrak g$ splits. (3) Every subrepresentation of the adjoint representation of $\mathfrak g$ has a complementary subrepresentation. What I am looking for is a slick proof for this equivalence (although the only thing I really need is a proof of (1) $\Longrightarrow$ (3)). I am aware of the proof in Fulton-Harris Appendix C, but this could fill an hour of talking and seems to involve many unmotivated ideas. Is there something more explanatory? Using cohomology perhaps? Is the whole thing obvious from an advanced viewpoint? (I don't mean using the classification of simple Lie algebras, of course...) Maybe newer ideas such as Lie algebroids, algebraic groups etc. can help? - 2 Presumably you've looked at the treatments in Weibel chapter 6 or 7 (the one on Lie algebra cohomology), or the proof in Serre's book "Lie algebras and Lie groups". – Daniel Pomerleano Jan 10 2011 at 0:27 1 Throughout you mean finite dimensional reps. – Mariano Suárez-Alvarez Jan 10 2011 at 2:19 6 By the way, this is false as stated. Abelian Lie algebras satisfy (3). – Ben Webster♦ Jan 10 2011 at 2:31 3 Yes, it seems like there is the implicit assumption that $\mathfrak{g}$ has trivial center. – Keerthi Madapusi Pera Jan 10 2011 at 3:41 2 Cohomology is implicit in the algebraic version of Weyl's complete reducibility theorem (see Jacobson's 1962 book or Weibel), but not strictly needed. The quadratic Casimir element is crucial, though you can avoid universal enveloping algebras here. The efficient algebraic argument by Brauer is used in Bourbaki Groupes et algebres de Lie (Chap. 1), Serre's lectures, my book, Fulton & Harris, etc. What is "simplest" depends a lot on what you already know. Weyl's original proof for compact groups is the most transparent step beyond Maschke's theorem for finite groups. – Jim Humphreys Jan 10 2011 at 14:51 show 5 more comments ## 2 Answers My favorite proof is showing that the trivial module is projective in the category $\mathcal R$ of finite dimensional reps; you can do this using cohomology and the quadratic Casimir (so here the Killing form comes in). After that, showing that all fin. dim. reps. are semisimple amounts to showing that they are all projective in $\mathcal R$, and this follows from the identity $\hom_\mathfrak g(M,N)=\hom_\mathfrak g(k,\hom(M,N))$ and a little cohomological yoga. - 1 The fact that the trivial module is the maximally complicated one has always marveled me. – Mariano Suárez-Alvarez Jan 10 2011 at 3:39 1 I think once you internalize the translation principle, it suddenly becomes the most natural thing in the world. – Ben Webster♦ Jan 10 2011 at 4:54 2 But it is a more general phenomenon than that: the same thing happens for local algebras, where the residue ring contains all the complexity, for example, or non-negatively graded ones. – Mariano Suárez-Alvarez Jan 10 2011 at 4:58 1 I would argue that that's completely different; that's just because it's the only simple module.
In the Lie algebra situation, being able to tensor gets much more out of one simple than you have any right to. – Ben Webster♦ Jan 10 2011 at 5:45 My favorite proof is the Harish-Chandra isomorphism. Choose the PBW isomorphism $U(\mathfrak g)\cong U(\mathfrak n_-)\otimes U(\mathfrak h)\otimes U(\mathfrak n_+)$ and let $p: U(\mathfrak g)\to U(\mathfrak h)\cong \mathbb k[\mathfrak h^*]$ be the projection killing all higher order terms in the other factors. Then Theorem. The restriction of $p$ to the center $Z(U(\mathfrak g))$ is an injection whose image is the invariant functions for the $\rho$-shifted Weyl group action $w\bullet \lambda =w(\lambda+\rho)-\rho$ on $\mathfrak h^*$. This means that the center acts in the same way on two different highest weight representations if and only if $\lambda+\rho$ and $\mu+\rho$ are in the same orbit of the Weyl group (consider the action on the highest weight vector). This can't happen for two different dominant weights ($\lambda+\rho$ is in the interior of the dominant Weyl chamber), so the action of the center distinguishes different simple representations. Now we need to show that these can't glue to each other; you can easily reduce to the case of an extension $V\to W \to V$ for a simple $V$. In this case, just pick a splitting on the highest weight space, and let $V'\subset W$ be the sub-rep generated by the preimage of the highest weight in the quotient copy of $V$. By PBW, the intersection of this with the highest weight space is 1-dimensional, so it cannot have any intersection with the sub-module, but it maps surjectively onto the quotient. Thus it is a complement and the sequence splits. I think this shows that there really is something delicate about this semi-simplicity; this isn't the lowest tech proof, but they're always a little subtle. You can also see this from how wildly semi-simplicity fails in the infinite-dimensional case; look at some literature on Verma modules and category $\mathcal O$ to see how complicated this can get. -
http://scicomp.stackexchange.com/questions/4803/is-there-a-nonlinear-solver-similar-to-cgnr-evaluating-only-atax
# Is there a nonlinear solver similar to CGNR evaluating only $A^TAx$? First of all, I am quite new to this field and I apologize in advance for any stupid content in this question. In the field of compressed sensing or deblurring I have a nonlinear optimization problem of the form $\min R(x)$ s.t. $\|Ax-b\|<\epsilon$ Currently, I solve the optimization problem using the Lagrangian form and the nonlinear conjugate gradient method: $f(x) = \|Ax-b\|^2 +\lambda R(x)$ with the gradient $\nabla f(x) = 2A^TAx-2A^Tb +\lambda \nabla R(x)$ $Ax$ and $A^Tx$ are expensive to evaluate since they contain a non-uniform FFT. However, I am able to evaluate $A^TAx$ quite fast on a Cartesian grid by convolution with the point spread function of $A^TA$. This way I am able to calculate the gradient fast, but for the line search of the nonlinear CG I still have to evaluate $Ax$ frequently. I am looking for a nonlinear method that only needs the evaluation of $A^TAx$. In a first attempt, somewhat similar to linear CGNR, I tried to minimize the residual of the normal equations by using the following (in the line search only): $f_2(x) = \|A^TAx-A^Tb\|^2 +\lambda R(x)$ Without $\lambda R(x)$ this method obviously reduces to linear CGNR (with an unnecessary line search). The method seems to converge to a solution. However, it converges to a different solution than the nonlinear CG, since the constant value of $\lambda$ needs to be adapted. So my questions are: 1) Is what I tried total nonsense? ;-) 2) Is there something like a "nonlinear CGNR"? 3) Or some other suitable method that solves my nonlinear optimization problem using only $A^TA$? 4) What is the gradient of $f_2(x)$? - ## 1 Answer Your problem is equivalent to minimizing $f(x)=x^T(Bx-c)+\lambda R(x)$ where $B=A^TA$, and $c=2A^Tb$. The gradient is $g(x)=2Bx-c+\lambda\nabla R(x)$. Thus if you precompute $c$, any gradient-based method with line search only needs multiplications with $B$. In the complex case, you have $f(x)=x^*Bx-Re~(x^*c)+\lambda R(x)$ where $B=A^*A$, and $c=2A^*b$, and things are essentially as in the real case. - Thank you, this is a good solution. Now I am trying to do the same thing in complex space. In this case I can't factor out $x^H$ that easily, since $(x^HA^Hb)^H \neq x^HA^Hb$. Any idea? – Stiefel Dec 6 '12 at 16:33 Of course you can. See the edit. – Arnold Neumaier Dec 6 '12 at 17:02 Stupid me, the $Re(x^*c)$ is a scalar product and a cheap operation in my case. So there is obviously no need to factor out $x^*$. Thank you, I should have seen this myself! – Stiefel Dec 10 '12 at 13:09
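To make the answer's recipe concrete, here is a minimal sketch (mine, not from the thread) of a gradient method that touches $A$ only through $B=A^TA$ and the precomputed $c=2A^Tb$. The regularizer $R(x)=\|x\|^2$ is a stand-in assumption chosen so the exact line search has a closed form; a general smooth $R$ would use a generic line search on $f$, which can also be evaluated through $B$ alone.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 0.1

B = A.T @ A          # in the application, B @ x would be the fast PSF convolution
c = 2 * A.T @ b      # precomputed once; the only place A^T b is needed

x = np.zeros(20)
for _ in range(500):
    g = 2 * (B @ x) - c + 2 * lam * x        # gradient of f(x) = x^T(Bx-c) + lam*||x||^2
    if np.linalg.norm(g) < 1e-12:
        break
    Bg = B @ g                               # one multiplication with B per step
    step = (g @ g) / (2 * (g @ Bg) + 2 * lam * (g @ g))   # exact line search (quadratic f)
    x -= step * g

# sanity check against the closed-form minimizer of ||Ax-b||^2 + lam*||x||^2
x_star = np.linalg.solve(B + lam * np.eye(20), A.T @ b)
print(np.linalg.norm(x - x_star))            # should be tiny: the minimizers agree
```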
http://mathhelpforum.com/algebra/155130-basic-algebra-question-confused.html
# Thread: 1. ## Basic algebra question, confused I didn't know how to name this topic, but hopefully someone can help. Here's the equation (one-fourth y minus two-thirds y equals 5): 1/4y - 2/3y = 5 I have the answer, which is -12. But can someone explain to me HOW to get the answer without plugging -12 in? When I start moving numbers to the other side I end up getting a number that's way off by the end of it. Thank you! 2. Multiply every term by 12. Then solve the equation you get. 3. Originally Posted by NecroWinter $\displaystyle \frac {y}{4} -\frac {2y}{3} = 5$ multiply all by 12 $3y-8y=60 \Rightarrow -5y=60 \Rightarrow y = -12$ or is it $\displaystyle \frac {1}{4y} - \frac {2}{3y} = 5$ Edit: Sorry Plato, didn't see that you had posted an answer ... 4. Originally Posted by NecroWinter Two-thirds of some number subtracted from a quarter of the same number equals 5. $\displaystyle\ y\left(\frac{1}{4}\right)-y\left(\frac{2}{3}\right)=5\Rightarrow\ y\left(\frac{1}{4}-\frac{2}{3}\right)=5$ A quarter is just a value where the denominator is 4 times the numerator. $\displaystyle\frac{1}{4}=\frac{2}{8}=\frac{3}{12}=\frac{4}{16}=\cdots$ Also $\displaystyle\frac{2}{3}=\frac{2}{2}\cdot\frac{2}{3}=\frac{4}{6}=\frac{8}{12}=\cdots$ We can add and subtract fractions with the same denominator... $\displaystyle\ y\left(\frac{3}{12}-\frac{8}{12}\right)=y\left(-\frac{5}{12}\right)$ $\displaystyle\ -\frac{5}{12}\left(-\frac{12}{5}\right)=1$ Therefore... $\displaystyle\ y=\left(-\frac{12}{5}\right)5$ Since LHS=RHS, then 12(LHS)=12(RHS), so a shortcut is to multiply both sides by 12 to avoid having to combine the fractions.
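If you want to double-check such manipulations mechanically, a sketch like the following (my addition, not from the thread) solves the same equation with exact rational arithmetic:

```python
from sympy import Eq, Rational, solve, symbols

y = symbols('y')
# (1/4)y - (2/3)y = 5, with exact rationals so no float rounding sneaks in
print(solve(Eq(Rational(1, 4)*y - Rational(2, 3)*y, 5), y))   # [-12]
```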
http://physics.stackexchange.com/questions/30946/why-is-dark-matter-the-best-theory-available-to-explain-missing-mass-problems
Why is dark matter the best theory available to explain missing mass problems? Why is dark matter the best theory to explain the missing mass problem? Why is dark matter mathematically necessary to explain the missing mass problem? On a side note: I believe dark matter is definitely intuitive; the next logical step in physics is to accept the fact that direct observations cannot always be the determining factor in the acceptance of a theory. Positivism is not going to be the only driving force in physics forevermore. One theory that seems almost as good is the supersymmetric particle. What determines which are actually detectable, and how would we distinguish between the two? References: http://blogs.scientificamerican.com/observations/2011/04/14/underground-xenon100-experiment-closes-in-on-dark-matters-hiding-place/ http://www.science20.com/hammock_physicist/dark_matter_plot_thickens. http://www.holoscience.com/wp/synopsis/synopsis-5-electric-galaxies/5/ - 5 – DJBunk Jun 29 '12 at 2:15 3 First, it's "foolhardy" and second, why would it be foolhardy to consider the existence of matter that only interacts gravitationally? Should we develop the capability to "see" gravitational waves, dark matter, if it exists, will no longer be dark, correct? – Alfred Centauri Jun 29 '12 at 2:16 1 If you don't want to start a discussion, I would avoid inflammatory terms like "haphazard" and "foolhearty" (sic) – user2963 Jun 29 '12 at 2:53 3 @Argus Some empiric evidence has been collected in favour of the dark matter theory. In fact, a specialist even created a "map" of the possible distribution of dark matter in our galaxy, or some bounded space near us, by certain measurements involving light waves and how they were bent by the gravitational field of dark matter. Do some research. – Peter Tamaroff Jun 29 '12 at 3:21 2 It's not clear to me what you're actually asking here, Argus. If you explain your question more directly, it will improve the quality and perhaps it could be reopened. (Also using proper grammar wouldn't hurt - in general, something which doesn't include a question mark can't really be a question) – David Zaslavsky♦ Jun 29 '12 at 3:44 2 Answers The missing mass problems are several sets of observations that could be explained if there were some matter that has mass (interacts with other matter via gravity) but does not interact with light. The same distribution of this missing mass would explain all of them. All competitors that have been explored fail to explain at least one. I only partially understood the evidence explained in the wikipedia article, but let me try to summarize the major pieces of evidence that I understood. Historically, the first piece of evidence is the galactic rotation curves. If you have an object with a mass distribution, you can predict how different parts of it should rotate about the object's center of mass. When astronomers apply this idea to galaxies, they find that the stars far away from the center move faster than they'd expect based on the mass distribution that they can see. Gravitational lensing is a result of general relativity. When one massive object is in front of another object, the mass from the front object distorts the light that arrives from the rear object. Astronomers infer the mass of the front object by how the appearance of the rear object changes when the front object moves out of the way.
General relativity is the only theory that has successfully described this phenomenon, and it predicts that the mass must be there. Finally, several experiments have measured the cosmic microwave background radiation in quite some detail. This background radiation looks almost, but not exactly, the same in all directions. This variation is called the anisotropy. I don't fully understand why, but matter that interacts with light has a different signal in the anisotropy than matter that doesn't interact with light. So by measuring the anisotropy in enough detail, astronomers can infer how much normal matter there is, and how much dark matter there is. The dark matter community talks about "candidates" for dark matter. These are particles that could be the missing mass. The Scientific American article you cite is about one experiment that is looking for a specific candidate. Supersymmetric particles are another candidate for dark matter, not a competing theory. Both of your other links set off a lot of red flags for me. The Electric Universe really looks like it's just a crank with a pet theory. It doesn't seem like it's even trying to address the anomalies that dark matter explains; it's only trying to argue against black holes. Their theory will not explain any of the missing mass anomalies. Science 2.0 looks like it has a mix of good and bad content. The particular article you cited is about MOND, modified Newtonian dynamics. The idea is to modify either Newton's second law ($F = ma$) or Newton's law of universal gravitation (so that the force due to gravity would fall off as something other than $r^{-2}$). MOND was originally invented to explain galactic rotation curves, and it does that. Its problem is that it doesn't explain, for example, gravitational lensing. - Go out and discover those "other explanations" (and accumulate sufficient supporting evidence, of course) and you can laugh at the dark matter specialists. Until then dark matter is the simplest hypothesis on offer that explains multiple observations in one go (galactic rotation curves, cluster dynamics, supercluster dynamics, the bullet cluster, the lumpiness of the universe, the currently small value of the Hubble constant in light of the large observed size of the universe and the small observed baryonic matter density). Nor is it like there haven't been competitors along the way. People have been challenging this idea all along. Some of those challenges have turned out to be part of the explanation and have been incorporated into the big picture (for instance MACHOs), others have not panned out (MOND is not in very high regard just now). -
http://mathhelpforum.com/advanced-statistics/169155-negative-binomial-distribution-function-question-range-x.html
# Thread: 1. ## Negative Binomial distribution function - a question on range of X Consider the mean formula of the Negative Binomial distribution: $\Sigma^{\infty}_{x=r}{{x-1}\choose{r-1}}p^rq^{x-r}$ I don't understand why the range for X in the summation formula starts from r. I've looked up several textbooks and they seem to be treating this as obvious - but this is not obvious to me. (The mean formula is just an example of one of the functions of NegBin: I have the same question about any other (variance, moment generating function) formulas for NegBin which include the same range of X.) Here is the most relevant comment on the range of X I found (on the internet): "Let X be the random variable which is the number of trials up to and including the r-th success. This means that the range of X is the set {r, r+1, r+2,...}." This sounds counter-intuitive to me: if X is the number of trials UP to r, then why do we start counting from r? Why not start from x=1 (first trial) until x=r (the last successful outcome we need to 'complete' the r)? 2. "Let X be the random variable which is the number of trials up to and including the r-th success. This means that the range of X is the set {r, r+1, r+2,...}." As this quote says, the total number of trials X is greater than or equal to the number of successes r. If you are told to wait for r successes, then you know that you need at least r trials. The actual number of trials can be r with some probability, r + 1 with some other probability and so on. The mean, therefore, is $r\cdot P(X=r) + (r+1)\cdot P(X=r+1)+\dots=\sum_{x=r}^\infty xP(X=x)$. Since $P(X=x)={x-1\choose r-1}p^rq^{x-r}$, the mean is $\sum_{x=r}^\infty x{x-1\choose r-1}p^rq^{x-r}$, so I think there is a factor $x$ missing in your formula. 3. Originally Posted by Volga Why are you summing? The minimum number of trials is always going to be equal to the number of successes. Read this: Negative binomial distribution - Wikipedia, the free encyclopedia From what I can see, what you're calling a success Wikipedia is calling a failure and x = k + r. Since $0 \leq k < +\infty$ it follows that $r \leq x < +\infty$. 4. Originally Posted by emakarov If you are told to wait for r successes, then you know that you need at least r trials. Thanks! Now THAT makes sense to me ))) Yes, you are right, x is missing from the mean formula (I was deliberating which 'summation' formula to include, and ultimately made a mistake)
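As a quick numerical cross-check (my addition, not part of the thread), the corrected sum with the factor $x$ reproduces the familiar negative binomial mean $r/p$ for the number of trials:

```python
from math import comb

r, p = 3, 0.4
q = 1 - p
# sum_{x=r}^inf x * C(x-1, r-1) * p^r * q^(x-r), truncated once terms are negligible
mean = sum(x * comb(x - 1, r - 1) * p**r * q**(x - r) for x in range(r, 2000))
print(mean, r / p)   # both print (approximately) 7.5
```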
http://mathoverflow.net/questions/57524/martingale-part-of-the-discontinuous-put-payoff/57581
## Martingale part of the discontinuous put payoff I need the martingale part of the put payoff (not $C^2$...). Where $S_t=\exp(\sigma W_t -\frac{\sigma^2t}{2})$, what is $d[(S_t -K)^+ ]$? I guess I need to use local times, but how? - ## 2 Answers Thank you all! (Proof, for $\phi(S_t)=(K-S_t)^+$: Step 1, smoothing: $\phi_\epsilon(x)=1_{x\leq K-\epsilon}\cdot\phi(x)+1_{x\in]K-\epsilon,K]} \cdot \psi(x)$, where $\psi(x)=-\frac{1}{\epsilon^2}(K-x)^2(K-x-2\epsilon)$. This function is $C^1$, and also $C^2$ except on a countable set. Step 2, Itô on $\phi_\epsilon$: $\phi_\epsilon(S_t)= \phi_\epsilon(S_0)+\int^t_0\phi_\epsilon'(S_u)dS_u+\frac{1}{2}\int^t_0 1_{S_u\in[K-\epsilon,K]}\phi_\epsilon''(S_u)d\langle S\rangle _u$ because $\phi_\epsilon''=0$ outside of $[K-\epsilon,K]$. Let's denote $L_t=\lim_{\epsilon \to 0}\frac{1}{2\epsilon^2}\int_{K-\epsilon}^K(3S_t+4\epsilon-3K)dS_t$; it's a finite variation process since it is increasing. Step 3: We have that $\phi_\epsilon(S_t)-\phi_\epsilon(S_0)-\int^t_0\phi_\epsilon'(S_u)dS_u \xrightarrow {L^2} \phi(S_t) -\phi(S_0) -\int^t_0\phi'(S_u)dS_u$ (because $\int^t_0\phi_\epsilon'(S_u) 1_{S_u\in[0,K-\epsilon]}dS_u \xrightarrow {L^2}\int_0^t\phi'(S_u)dS_u$ by the Itô isometry). Finally we get the formula 'à la bridge', namely $(K-S_t)^+=(K-S_0)^+-\int_0^t1_{S_u\leq K}dS_u+L_t$, and the martingale part is $(K-S_0)^+-\int_0^t1_{S_u\leq K}\sigma S_u dB_u$. - I think that the factor-of-one-half difference on the local time term that we have is about the way local time is defined. Different "authors" use different conventions (multiplying $L_t$ by a factor 2 or not in their definition). The definition I use in my answer doesn't use the factor "2" and yours uses it. Regards. – The Bridge Mar 7 2011 at 17:07 Hi, Simply use Itô-Tanaka's formula. I guess this should give something like: $df(S_t)=D_-f(S_t)dS_t+\frac{1}{2}dL^s_tf''(ds)$ with $f(S)=(S-K)^+$, so $D_-f(S)=1_{]K,+\infty[}(S)$ and $f''(ds)=\delta_K(ds)$. This gives, if I am not mistaken: $d(S_t-K)^+=df(S_t)=1_{]K,+\infty[}(S_t)dS_t+\frac{1}{2}dL^K_t$ with $L^K_t$ being the local time of your geometric Brownian Motion $S$ around level $K$ at time $t$. Regards Edit NB: - $D_-$ stands for the left derivative of $f$. - $f''(ds)$ stands for the second derivative in the distribution sense. - The use of Itô-Tanaka's formula allows one to avoid the mollifier-type argument of the direct proof of the result (which is quite cumbersome in my opinion). I should add that Itô-Tanaka's formula is applicable to every $f$ that is the difference of two convex functions, if I remember well, which is the case here with $f(x)=(x-K)^+$. - So, the martingale part is $\sigma\int1_{[K,\infty)}(S)S\,dW$, which was the question. – George Lowther Mar 6 2011 at 17:02 @george lowther: that's right, thanks for the precision – The Bridge Mar 6 2011 at 18:50
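A quick way to see this decomposition in action is to discretize it (my own sketch, not from the thread): simulate a path of $S$, subtract the discretized $\int_0^t 1_{S_u>K}\,dS_u$ from the running call payoff, and observe that what remains is nonnegative and nondecreasing, i.e. it behaves like (half of) the local time at $K$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, K, T, n = 0.3, 1.0, 1.0, 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n) * np.sqrt(dt))))
S = np.exp(sigma * W - 0.5 * sigma**2 * t)   # the martingale S_t from the question

payoff = np.maximum(S - K, 0.0)              # call payoff (S_t - K)^+
dS = np.diff(S)
integral = np.concatenate(([0.0], np.cumsum((S[:-1] > K) * dS)))

# Tanaka: (S_t-K)^+ - (S_0-K)^+ - int 1_{S>K} dS should be (1/2) L_t^K
half_local_time = payoff - payoff[0] - integral
print(half_local_time.min() >= 0)                  # True
print(np.all(np.diff(half_local_time) >= 0))       # True: an increasing process
```

For this particular discretization the increments are exactly nonnegative: on steps where the path stays on one side of $K$ the increment is zero, and on crossing steps it is the positive overshoot past $K$, which is how the local time accumulates.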
http://math.stackexchange.com/questions/27250/isometries-of-ellp-n-mathbbc?answertab=votes
# Isometries of $\ell^p_n(\mathbb{C})$ Let $1<p<\infty$, and define an isometry of normed linear spaces to be a norm-preserving surjection. Then all isometries from $\ell^p_n(\mathbb{R})$ to itself are given by linear transformations $T$ such that Mat$(T)$ is product of a permutation matrix and a diagonal matrix with $1$'s and $-1$'s along the diagonal. However it is slightly more complicated to classify all isometries from $\ell^p_n(\mathbb{C})$ to itself since $z\mapsto \overline{z}$ is an isometry of $\mathbb{C}$. Question: What are all isometries from $\ell^p_n(\mathbb{C})$ to itself? - I have my doubts about your characterization, and in particular the fact that it doesn't depend on $p$. Take for example $T=\begin{bmatrix}\cos t & \sin t \\ -\sin t & \cos t\end{bmatrix}$: this is a norm-preserving surjection of $\ell^2_2(\mathbb{R})$, but I don't think it has the form you claim. And it is not an isometry for choices of $p$ other than 2. – Martin Argerami Jan 14 '12 at 2:10 ## 1 Answer The definition of "isometry" here is inadequate. A norm-preserving surjection (which I take to mean a surjection satisfying $\|f(x)\| = \|x\|$ for all $x$) need not preserve distances, or be linear, or even be continuous. (Choose for each $r \geq 0$ an arbitrary surjection $f_r$ from $\{x: \|x\| = r\}$ to itself. Then the map on the whole space given by $x \mapsto f_{\|x\|}(x)$ is a norm-preserving surjection.) In particular, if "isometry" is interpreted in this sense, there are far more isometries of $\ell^p_n(\mathbb{R})$ than the ones listed in the question. But one can take the list as a hint at a more appropriate definition of isometry. Here are two candidates: • A function on normed linear spaces $f: V \to W$ is an isometry if $f$ is linear and $\|f(x)\| = \|x\|$ for all $x \in V$. • A function on normed linear spaces $f: V \to W$ is an isometry if $f$ is real linear (that is, satisfies $f(x+y) = f(x) + f(y)$ and $f(tx) = t f(x)$ for all $x, y \in V$ and all $t \in \mathbb{R}$) and satisfies $\|f(x)\| = \|x\|$ for all $x \in V$. These are the same thing when the field of scalars is taken to be $\mathbb{R}$, but the first is more restrictive than the second if the field of scalars is $\mathbb{C}$ (e.g. the conjugation $z \mapsto \overline{z}$ on $\mathbb{C}$ is not an isometry in the first sense as it is not complex linear). The main motivation for using the second definition with a complex vector space is that in strongly convex spaces at least, this condition completely characterizes the distance-preserving maps (functions $f$ satisfying $\|f(x) - f(y)\| = \|x - y\|$ for all $x,y$) that map $0$ to itself. If one wants an "isometry" to be as close to a "distance-preserving function" as possible, complex linearity is perhaps not essential. Offhand, I do not know a characterization of the isometries, in the second sense, of $\ell^p_n(\mathbb{C})$. But if one uses the first definition and phrases things appropriately, the complex story is identical to the real story. Identify linear operators on $\ell^p_n$ (real or complex, your choice) with $n \times n$ matrices. Say a matrix is a generalized permutation matrix if it is a product of a diagonal matrix whose entries have modulus $1$ and a permutation matrix. Then when $1 < p < \infty$ and $p \neq 2$, the only isometries of $\ell^p_n$ are the generalized permutation matrices. 
It is perhaps surprising that these matrices, which are rather obviously isometries of $\ell^p_n$, are the only isometries of $\ell^p_n$ - and that the set of isometries does not depend on $p$. (As Martin Argerami points out, the Hilbert space case $p=2$ is special; but the sets of isometries in this case, the so-called "orthogonal" and "unitary" matrices, are well understood.) Assuming you know the basic duality theory of $\ell^p_n$ spaces, you can give a short and elementary proof of this fact. One approach is given (in the real case, but the complex case is almost the same) in Isometries of the $\ell^p$ norm by Chi-Kwong Li and Wasin So in the American Mathematical Monthly Vol. 101 No. 5, pp 452-53. (The authors note that their argument was previously and independently found by R. Mathias.) This is an exceedingly special case of what is often called the Banach-Lamperti theorem, which like many results in the theory of normed linear spaces is really a family of results, varying in the generality of the hypotheses. The basic theme is that when $p \neq 2$, an isometry from one $L^p$ space to another (the measure spaces need not be finite or the same) must be the product of a suitably "nice" multiplication operator and a composition operator (i.e., a map of the form $f \mapsto f \circ \sigma$, where $\sigma$ is a map on the underlying measure spaces). In general, the set of isometries does depend on $p$ and the measures used to define the $L^p$ spaces. Lamperti's original paper from 1958 is available at Project Euclid; there have been various generalizations and improvements since then, as you will find if you Google for them. -
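As a small numerical illustration (mine, not part of the answer), one can check directly that a generalized permutation matrix preserves the $p$-norm on $\mathbb{C}^n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3.0
perm = np.eye(n)[rng.permutation(n)]               # a permutation matrix
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n)) # diagonal entries of modulus 1
T = np.diag(phases) @ perm                         # a generalized permutation matrix

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
pnorm = lambda v: np.sum(np.abs(v) ** p) ** (1 / p)
print(np.isclose(pnorm(T @ x), pnorm(x)))          # True: |Tx| is a permutation of |x|
```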
http://physics.stackexchange.com/questions/6482/what-if-lhc-finds-susy?answertab=active
# What if LHC finds SUSY? Here and on many other forums and blogs people ask the question "What if LHC does not find SUSY?". I would like to ask the opposite. What if it finds it? What would the implications be? Is it going to just confirm something understood and expected, or is it going to bring something new? Will there be implications for string theory? String theorists will take it as a good sign, but by itself it is not a confirmation of string theory. I do realize this is more than one question, but they are on the same topic. P.S. If this question is not suitable for this site, please delete/close. - 1 We change the "What next?" language in our grant requests. – dmckee♦ Mar 7 '11 at 16:46 @MBN I remember lectures in supersymmetry back in the eighties and the justification was that it made computations finite, nothing about strings. I guess it is history in retrospect. The thesis of my doctorate (experimental) dealt with Regge poles but it is from Lubos I first heard the string connection. – anna v Mar 7 '11 at 18:25 ## 2 Answers Something analogous happened a couple of decades ago. The discovery of the $W$ and $Z$ bosons simultaneously confirmed the Weinberg/Salam electroweak model and made "how is electroweak symmetry broken?" the key new unanswered question. It's still unanswered, and it's much more challenging, experimentally and theoretically. The discovery of supersymmetry and a Higgs would go a long way toward answering it (the standard scenario in the MSSM is called "radiative electroweak symmetry breaking.") Analogously, the discovery of superpartners would make the key unanswered question in particle physics "how is supersymmetry broken?" Actually, there are two parts to this question: the first is how the symmetry is broken, and the second is how that breaking is "mediated" to the Standard Model. (This is a somewhat technical story, but the point is that supersymmetry breaking must involve fields beyond just the Standard Model, together with some mechanism that allows the breaking to be felt by the Standard Model.) The question of how the breaking is mediated is likely to be easier to answer than the question of how the breaking itself actually happens. In any case, though, these will be complicated questions that will depend on the accurate measurement of a number of masses, together with a lot of guesswork and hopefully some luck. There are several good ideas for models, but in the absence of data it's hard to know which ones to believe, and they all have some fairly significant problems. - An even more excellent question than the opposite one! Experimenters' job The first happy group of people for whom the discovery of SUSY at the LHC would be spectacular would be the experimenters. They would experience fireworks of activity, facing the task of finding as many superpartners as possible and measuring their properties. All their masses - and several other parameters - would be news for the theorists. We would have to learn lots of new, currently unknown numbers from the experiments, the so-called soft SUSY-breaking parameters. It's conceivable that the masses would display some patterns - e.g. mass ratios - that agree with one of the scenarios of supersymmetry breaking. So some of the mechanisms of supersymmetry breaking would be supported; others would ultimately be abandoned.
Figuring out the logic of SUSY breaking Quite a lot of data is needed from the first hints of SUSY to the point in the previous paragraph because one needs to see many new particles and some of them are inevitably harder than others. However, if this job is completed, we will know whether SUSY breaking is mediated by gravity mediation, anomaly mediation, gauge mediation, mirage mediation, or something else. It would be a lot of interesting work for the theorists and it is more likely than not that they have already done the essential work for any major type of SUSY breaking that could be observed by the LHC. Obviously, one of the schemes would be studied in much more detail if we knew that it is the correct one. Some knowledge about the mechanism of SUSY breaking - which may be guessed from the superpartner masses - would tell us a great deal about the required underlying compactification of string theory etc. Also, the role of the newly found SUSY for all the problems that SUSY is capable of solving - that have been discussed under the question What if the LHC doesn't see SUSY? (I mean dark matter, gauge coupling unification, and the hierarchy problem) would be studied in much more detail, too. I don't have to enumerate them again because this text would become highly redundant. It's obvious that if the existence of low-energy SUSY were supported by the experiments, all other things that were expected to be solved by SUSY would be studied much more seriously, much more materially, and in much more detail. Some of the advantages of SUSY are only understood superficially - this situation would have to improve. LHC SUSY discovery and string theory The discovery of SUSY would be amazing - the first discovery of a new spacetime structure and symmetry since Albert Einstein's relativistic adventures. It would be great, it may happen, and string theorists must be ready to take credit for it. No doubt, at least in the Western (non-Soviet) context, SUSY is a daughter of string theory. First, it was discovered by Pierre Ramond on the 2-dimensional world sheet of a string, before it was exported to the 4D spacetime and other higher-dimensional spacetimes and before it was found in the 10D spacetime of superstring theory as well. Historically, supersymmetry was one of the first amazing ideas that the physicists were forced to discover because string theory led them to discover them. All the critics of string theory would be proved to be spectacularly wrong - in fact, narrow-minded folks who wanted to prevent the mankind from discovering one of the most fundamental properties of Nature; everyone could suddenly see that they are on par with the geocentrists and they would hopefully never show up again in the public. I would win a USD 10,000 bet against a phenomenologist who agreed with 100-to-1 odds - this "uneven" number itself is enough to show that some of the enemies of supersymmetry resemble a fundamentalist religious sect. The pragmatic and largely non-anthropic phenomenological attitude to string theory would prevail. People would probably agree that a feature of the vacua doesn't have to be "generic" for it to become true. The anthropic principle would fade away. Because low-energy SUSY would become a fact, people would kind of accept that with some extra knowledge about the reality used as assumptions, supersymmetry is a consequence of string theory. 
See http://motls.blogspot.com/2010/06/why-string-theory-implies-supersymmetry.html Why string theory implies supersymmetry Lots of knowledge about the likely compactification of string theory would become much more accessible. String theorists - which would become a quickly growing group - would very likely converge to some opinion on whether heterotic string theory; heterotic M-theory; M-theory on G2 holonomy manifolds; type IIA intersecting braneworlds; or F-theory on Calabi-Yau four-folds is the most viable approach to phenomenology. I have discussed in what sense string theory probably implies supersymmetry. Now, you're obviously interested in the opposite hypothetical implication - whether supersymmetry implies string theory. We can't prove this implication as a mathematical theorem but it would become extremely persuasive. First of all, supergravity (SUGRA) would become an inevitable component of all effective field theories because it follows from general relativity (established) and supersymmetry (hypothetically established in our thought experiment). In this text, http://motls.blogspot.com/2008/07/two-roads-from-n8-sugra-to-string.html Two roads from N=8 SUGRA to string theory, I argue that supergravity suffers from two kinds of problems: non-perturbative inconsistencies; and the unacceptable phenomenological limitations of its maximally supersymmetric N=8 version (which is the only perturbatively finite one). Attempts to fix either of those limitations of supergravity inevitably lead to string theory with its more powerful toolkit. If you want to watch a 30-minute lecture by a Dirac Medalist explaining why supergravity can't be decoupled from string theory and why one needs all of string theory to preserve the consistency, see this November 2010 talk by Michael Green in Trieste. Similarly, in the text a few paragraphs above, I argue that locality of general relativity implies that there must exist magnetically charged objects - at least black holes - and the Dirac quantization rule implies that the charges must belong to a lattice. The choice of the lattice is equivalent to a point of the moduli space of inequivalent stringy vacua; the noncompact symmetry group of SUGRA is inevitably broken down to a discrete subgroup, the U-duality group. In the appropriate limits of the moduli space of the inequivalent vacua, we may derive the existence of objects known from string/M-theory as well as their excitations, and we may pretty much complete the rest of string theory by consistency arguments. Green-Schwarz anomaly cancellation There is one more characteristically stringy structure that would become necessary if we add one more assumption: the Green-Schwarz mechanism that mixes tree-level terms with one-loop terms in a very stringy way - originally discovered by Green and Schwarz in 1984, when their discovery sparked the first superstring revolution. This mixing of contributions at different orders is extremely unnatural in perturbative field theory. And we would have evidence that it takes place in Nature assuming that there exists at least one axion - or, in SUGRA terms, at least one linear supermultiplet. If there are axions, which may be needed to solve the strong CP-problem, there are also new kinds of anomalies (in particular, the "conformal anomaly" of supergravity) analogous to the 10-dimensional anomalies addressed by Green and Schwarz in 1984. A 4-dimensional version of the stringy Green-Schwarz mechanism would be needed to cancel those anomalies.
Theorems may be rigorously proved only in the highly symmetric vacua, not in the real world. However, in the real world, the evidence that string theory is right would become overwhelming. - 1 How would all critics be proved wrong if susy does not imply string theory? – Holowitz Mar 7 '11 at 17:42 1 It does under extremely mild assumptions. Please read my answer. Also, one of the reasons why it does - that I sketched - was also discussed in a 30-minute talk by Michael Green in Trieste - lectures of the Dirac Medalists. See youtube.com/watch?v=UVqCAhLiZDc and youtube.com/watch?v=S8wSl2R3G1o – Luboš Motl Mar 7 '11 at 17:43 +1, thanks. As usual, you take the questions seriously and give detailed answers. Correct me if I am wrong, but doesn't SUSY predate string theory, and what do you mean that it is a daughter of string theory? – MBN Mar 7 '11 at 17:57 4 Dear Lubos, testing my memory I went to the CERN library and checked for the two words "supersymmetry" and "strings" in any field. The first publication came out in 1985. When checking for supersymmetry, the first publications come out in 1980. So, us simple experimentalists should be excused for not knowing all the esoteric history and the way it connected to strings. According to wikipedia it was first found by Hironari Miyazawa in 1966, and rediscovered by others in 1971 concurrent with Ramond. Maybe somebody should edit the supersymmetry article in Wiki. – anna v Mar 7 '11 at 21:35 2 String skeptics are largely not about preventing people from studying string theory. They are against string theorists wildly overstating the evidence for string theory. It is clearly an interesting avenue of research with many potential benefits. Flippantly claiming that it is already proven is absurd. For one thing, SUSY does not imply extra dimensions, which string theory also needs to work. – Jerry Schirmer Mar 7 '11 at 22:28
http://math.stackexchange.com/questions/16301/replacing-textexpression-epsilon-by-textexpression-leq-epsilon/16334
# Replacing $\text{Expression}<\epsilon$ by $\text{Expression} \leq \epsilon$ As an exercise I was doing a proof about equicontinuity of a certain function. I noticed that I am always choosing the limits in a way that I finally get: $\text{Expression} < \epsilon$ However, it wouldn't hurt to show that $\text{Expression} \leq \epsilon$, would it, since $\epsilon$ is getting infinitesimally small? I have been doing this type of $\epsilon$ proof for quite some time now, but never asked myself that question. Am I allowed to write $\text{Expression} \leq \epsilon$? If so, when? - ## 4 Answers Suppose for every $\epsilon >0$, there is an $N$ such that $n>N$ implies $x_n\leq\epsilon$. Let $k>0$. Then $\frac{k}{2}>0$, so there is an $M$ such that $n>M$ implies $x_n\leq\frac{k}{2}$ which implies that $x_n<k$. This corresponds to the original definition of continuity, doesn't it? - thank you very much. This is actually very clear, without having to argue about anything. Since this is the answer I understand best, I accept it. – ftiaronsem Jan 4 '11 at 16:53 Since $\varepsilon>0$ is arbitrary it does not matter. If you have a non-strict inequality for each $\varepsilon>0$ you can get a strict inequality by adding an arbitrary $\eta>0$. - Thank you for your answer. However I am still not 100% convinced. Why can you be so sure that this doesn't change anything in the many possible proofs? After all you are adding an element which is strictly greater than zero. The whole point of $\epsilon > 0$ is $\epsilon$ approaching 0. However in the case of adding an $\eta>0$, I would approach that $\eta$, wouldn't I? – ftiaronsem Jan 4 '11 at 14:30 @ftiaronsem, the whole point of $\varepsilon>0$ is that it is arbitrary, not that it approaches zero. Only due to this arbitrary nature does strict inequality versus non-strict inequality not matter. Note that in all proofs you have for each $\varepsilon$, not for some small $\varepsilon$. Usually we concern ourselves with small $\varepsilon$ since for large $\varepsilon$ the proofs are very easy. – mpiktas Jan 4 '11 at 14:51 ok, thanks for your answer. I hadn't fully comprehended that arbitrary point before. (Comes from always writing lim ^^) But as far as I can judge you are right. Thanks for your answer. – ftiaronsem Jan 4 '11 at 16:51 You are right, it doesn't matter. Given an arbitrary $\epsilon > 0$, you can make your expression strictly less than $\epsilon$ by choosing your $\delta$ such that the expression is less than or equal to $\epsilon/2$, for example. (This is possible since you know that you can choose $\delta$ so that your expression is less than or equal to any positive number that you wish.) - Thank you too, for your answer, you all have helped me a lot in understanding that. Thanks – ftiaronsem Jan 4 '11 at 16:56 There are already some good answers here, so I won't bother repeating them. However, I'd like to add a cautionary note that there are instances where using the wrong inequality gives nonsense - for example, it is essential in the statement of the Banach fixed point theorem that the Lipschitz constant is strictly less than one. If I remember correctly, even $\dfrac{\| f(x) - f(y) \|}{\| x - y \|} < 1$ for all $x \ne y$ is not good enough; we must have $\sup \left\{ \dfrac{\| f(x) - f(y) \|}{\| x - y \|} : x \ne y \right\} < 1$. (A standard example: $f(x) = x + e^{-x}$ on $[0,\infty)$ satisfies the pointwise strict inequality yet has no fixed point.) - ahh, thought that it couldn't be that simple in a general case. However since I don't know anything about Banach, I'll for the moment believe math being so nice as to allow me to use both the strict and the non-strict variant ;-).
But nevertheless, thank you – ftiaronsem Jan 4 '11 at 16:55
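The cautionary note in the last answer can also be seen numerically. The following tiny script (my addition, not from the page) iterates the classical counterexample $f(x) = x + e^{-x}$, which contracts strictly pointwise while $\sup_{x\ne y} |f(x)-f(y)|/|x-y| = 1$; the iteration drifts forever instead of converging:

```python
import numpy as np

f = lambda x: x + np.exp(-x)   # |f(x)-f(y)| < |x-y| for x != y, but sup of the ratio is 1
x = 0.0
for n in range(1, 10001):
    x = f(x)
    if n in (10, 100, 1000, 10000):
        print(n, x)   # x keeps growing roughly like log(n): no fixed point is approached
```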
http://mathforum.org/mathimages/index.php?title=Lissajous_Curve&oldid=33392
# Lissajous Curve ### From Math Images Lissajous Box. Field: Geometry. Image created by: Michael Trott (www.wolfram.com; ask for permission before using it elsewhere!) This is a beautiful Lissajous Box. The Lissajous Curves on its sides have an angular frequency ratio of 10:7. # Basic Description Lissajous Curves, or Lissajous Figures, are patterns formed when two harmonic vibrations (in physics, a harmonic vibration is a type of periodic motion where the restoring force is proportional to the displacement; its equation of motion is x = A sin(ωt + φ), in which A is its magnitude, ω is its angular frequency, and φ is its initial phase, which gives a sinusoidal x - t graph) along perpendicular lines are superimposed. For example, the following animation tells us how to generate a Lissajous Curve: Figure 1: How to generate a Lissajous curve In the animation above, points X and Y are simple harmonic oscillators in the x and y directions. They have the same magnitude of 10, but their angular frequencies are different. As we can see in the animation, the x-vibrator completes 3 cycles from the beginning to the end, while the y-vibrator completes only 2. In fact, these vibrators follow the equations of motion x = sin (3t), and y = sin (2t), respectively. Now, we will try to get the superposition of these two vibrations, which is what we really care about. To get this superposition, we can draw from X a line perpendicular to the x-axis, and from Y a line perpendicular to the y-axis, and locate their intersection P. By simple geometry, P will have the same x-coordinate as X, and y-coordinate as Y, so it combines the motion of X and Y. As we can see in Figure 1, the trace of P turns out to be a complicated and beautiful curve, which we refer to as the "Lissajous Curve". More specifically, it's one Lissajous Curve in a big family, since we can easily generate more Lissajous Curves with other angular frequencies and phases using the same mechanism. Mathematically speaking, since the motion of point P consists of two component vibrations, whose equations of motion are already known to us, we can easily get the parametric equations of P's motion: $\left.\begin{array}{rcl} x & \mbox{=} & A \sin(at + \phi) \\ y & \mbox{=} & B \sin(bt) \end{array}\right.$ in which A and B are magnitudes of the two harmonic vibrations, a and b are their angular frequencies, and φ is their phase difference. The term "phase difference" means the difference between the two vibrations' initial phases. To see exactly what these terms are, and how they affect the appearance of Lissajous Curves, please refer to the More Mathematical Explanation section. The Lissajous Curve in Figure 1 has A = B = 10, a = 3, b = 2, and φ = 0. As we have stated before, we can get more Lissajous Curves by changing these parameters. The following images show some of these figures: Figure 2-a: Lissajous Curve: a = 1, b = 2 Figure 2-b: Lissajous Curve: a = 3, b = 4 Figure 2-c: Lissajous Curve: a = 5, b = 4 # A Dip Into the History Figure 3-a: Photograph of Jules Lissajous. Year and photographer unknown. Lissajous Curves were named after French mathematician Jules Antoine Lissajous (1822–1880)[1], who devised a simple optical method to study compound vibrations. Lissajous entered the Ecole Normale Superieure in 1841, and later became a professor of physics at the Lycee Saint-Louis in Paris, where he studied vibrations and sound.
During that age, people were enthusiastic about standardization in science. And the science of acoustics was no exception, since musicians and instrument makers were crying out for a standard in pitches. In response to their demand, Lissajous invented the Lissajous Tuning Forks, which turned out to be a great success since they not only allowed people to visualize and analyse sound vibrations, but also showed the beauty of math through interesting patterns. The structure and usage of Lissajous Tuning Forks are shown in Figure 3-b. Each tuning fork is manufactured with a small piece of mirror attached to one prong, and a small metal ball attached to the other as counterweight. Two tuning forks like this are placed beside each other, oriented in perpendicular directions. A beam of light is bounced off the two mirrors in turn and directed to a screen. If we put a magnifying glass between the second tuning fork and the screen (to make the small deflections of the light beam visible to human eyes), we can actually see Lissajous Curves forming on the screen. Figure 3-b: Demonstration of Lissajous Tuning Forks The idea of visualizing sound vibrations may not be surprising nowadays, but it was ground-breakingly new in Lissajous' age. Moreover, as we are going to see in the More Mathematical Explanation section, the appearances of Lissajous Curves are extremely sensitive to the frequency ratio of the tuning forks. The most stable and beautiful patterns only appear when the two forks vibrate at frequencies of simple ratios, such as 2:1 or 3:2. These frequency ratios correspond to the musical intervals of the octave and perfect fifth, respectively. So, by observing the Lissajous Curve formed by an unadjusted fork and a standard fork of known frequency, people were able to make tuning adjustments far more accurately than tuning by ear. Because of his contributions to acoustic science, Lissajous was honored as a member of a musical science commission set up by the French Government in 1858, which also featured great composers such as Hector Berlioz (1803-1869) and Gioachino Rossini (1792-1868). Acknowledgement: Most of the historical information in this section comes from this website: click here, and Trigonometric Delights, by Eli Maor[2][3]. # A More Mathematical Explanation In previous sections, we have encountered this question many times: • What determines the appearance of Lissajous Curves? In this section, we are going to answer this question in two ways. The first method is simple and direct, but is limited to several special cases. The second one applies to almost all Lissajous Curves, but as a result it's more subtle and complicated. ## First Method: Direct Elimination of t Lissajous Curves are defined by the following parametric equations: $\left.\begin{array}{rcl} x & \mbox{=} & \sin(a \cdot t + \phi) \\ y & \mbox{=} & \sin(b \cdot t) \end{array}\right.$ In principle, one can use trigonometric formulas to eliminate t from these equations and get a relationship between x and y. See the following examples: (Note: in all examples below, we are going to assume that A = B = 1, since changing these magnitudes will only make the curves dilate or contract in the horizontal or vertical direction. They don't affect the structure of Lissajous curves.)
### Example 1: line segment Figure 3-a: Lissajous Curve 1: Line Segment If in addition to A = B = 1, we have a = b = 1, and φ = 0, then the parametric equations will become: $\left.\begin{array}{rcl} x & \mbox{=} & \sin(t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$ from which we can easily get: $x = y$ Moreover, since the range of $\sin(t)$ is from -1 to 1, we have: $-1 \leq x \leq 1$ Together, they give us the line segment shown in Figure 3-a. ### Example 2: circle (Starting in this example we will use some trigonometric formulas to help us reduce the equations. These formulas, together with some explanations, can be found here[4].) Figure 3-b: Lissajous Curve 2: Circle In this case, we still have a = b = 1. But instead of letting φ = 0, we change it to $\pi \over 2$: $\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + {\pi \over 2}) \\ y & \mbox{=} & \sin(t) \end{array}\right.$ Using the trigonometric identity $\sin (t + {\pi \over 2}) = \cos(t)$, we will get: $\left. \begin{array}{rcl} x & \mbox{=} & \cos(t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$ Using the trigonometric identity $\sin^2(\theta) + \cos^2(\theta) = 1$, we will get: $x^2 + y^2 = 1$ which gives us the circle shown in Figure 3-b. ### Example 3: parabola Figure 3-c: Lissajous Curve 3: Parabola This time, if we change the parameters into a = 1, b = 2, and φ = $\pi \over 4$, then the parametric equations will become: $x = \sin(t + {\pi \over 4})$ $y = \sin(2t)$ From Eq. 1 we can get: $2x^2 - 1 = 2\sin^2(t + {\pi \over 4}) - 1$ Using the trigonometric identity $\cos (2\theta) = 1 - 2 \sin^2(\theta)$, we can get: $2x^2 - 1 = - \cos (2t + {\pi \over 2})$ Applying the formula $\cos(\theta + {\pi \over 2}) = - \sin(\theta)$, we can get: $2x^2 - 1 = \sin(2t)$ Combining it with Eq. 2, we can get: $y = 2x^2 - 1$ with x confined between -1 and 1. This gives us the parabola in Figure 3-c. ### Conclusion: pros and cons In the examples above, we can clearly see some advantages of the direct elimination method: it's clear, accurate, and easy to understand. However, these advantages are quickly shadowed by the complexity of calculation when we get to larger frequency ratios. For example, see the following parametric equations of a Lissajous Curve: $\left. \begin{array}{rcl} x & \mbox{=} & \sin(9t) \\ y & \mbox{=} & \sin(8t) \end{array}\right.$ In principle, this could be solved by expanding the x- and y-functions into powers of $\sin(t)$ and $\cos(t)$: $x = \sin 9t = \sin^9 t - {9\cdot8 \over 2!}\sin^7 t\cos^2 t + {9\cdot8\cdot7\cdot6 \over 4!}\sin^5 t \cos^4 t -{9\cdot8\cdot7\cdot6\cdot5\cdot4 \over 6!}\sin^3 t\cos^6 t + {9! \over 8!}\sin t\cos^8 t$ $y = \sin 8t = -{8! \over 7!}\sin^7 t\cos t + {8\cdot7\cdot6\cdot5\cdot4 \over 5!}\sin^5 t\cos^3 t - {8\cdot7\cdot6 \over 3!}\sin^3 t\cos^5 t + 8\sin t\cos^7 t$ Notice that in these equations, if we consider sin(t) and cos(t) as unknowns, then we will have a set of two polynomial equations with two unknowns, and in principle we can solve sin(t) and cos(t) in terms of x and y. Then, the identity $\sin^2(t) + \cos^2(t) = 1$ will give us a direct relationship between x and y. However, in practice, few people are willing to carry on with the algebra, because the calculations involved are just so cumbersome and annoying. To make things worse, as group theory tells us, polynomial equations of degree five or higher cannot in general be solved by exact formulas for the roots [5]. So there is no guarantee that our effort will lead us to the answer.
Even if it could be done, the relationship between x and y would be too complicated to tell us anything useful about the shape of the curve. So the method of elimination fails here, and we would like a new way to study these curves.

## Second Method: Experiment and Observation

As shown in the previous discussion, the attempt to directly solve Lissajous Curves fails when we try to deal with large angular frequencies, so we have to find another way to study them. One such way is through experiment and observation. That is, we can use computer software to draw some Lissajous Curves with different parameters, and see how the parameters affect the appearance of the curves.

When we study something with multiple variable parameters, it's much easier to study the parameters separately, rather than together. There are three variable parameters, a, b, and φ, in Lissajous Curves. So in the rest of this section we will first fix the phase difference φ to study the angular frequencies a and b, and then fix the angular frequencies to study the phase difference.

### Study a and b with φ fixed

The following table shows the Lissajous Curves:

$\left.\begin{array}{rcl} x & \mbox{=} & \sin(a \cdot t + \phi) \\ y & \mbox{=} & \sin(b \cdot t) \end{array}\right.$

with angular frequencies a and b varying from 1 to 5, and phase difference φ fixed at 0:

Figure 4-a A table of Lissajous Curves with different angular frequency ratios

There are many interesting properties associated with this table:

1. All Lissajous Curves in the table are confined in a 2 × 2 square box. The curves can touch, but cannot go beyond, the lines x = 1, x = –1, y = 1, and y = –1, because the amplitudes of both horizontal and vertical vibrations are set to 1.

2. The Lissajous Curve with a = b = 1 is identical to the curves with a = b = 2, a = b = 3, and so on. Similarly, the Lissajous Curve with a = 1, b = 2 is identical to the curve with a = 2, b = 4, as shown in Figure 4-b. In other words, the only thing that matters is the ratio between a and b. It can be shown that Lissajous Curves with the same angular frequency ratio must have the same appearance. For example, if we do the substitution t = 2u in the Lissajous Curve with a = 1 and b = 2:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t) \\ y & \mbox{=} & \sin(2t) \end{array}\right.$

we will get:

$\left.\begin{array}{rcl} x & \mbox{=} & \sin(2u) \\ y & \mbox{=} & \sin(4u) \end{array}\right.$

which is the same as the Lissajous Curve with a = 2 and b = 4, because whether we use the symbol t or u doesn't matter here. This analysis can be generalized to all Lissajous Curves with rational frequency ratios.

Figure 4-b Property #2

3. The Lissajous Curve with a = 1 and b = 2 is the reflection of the Lissajous Curve with a = 2 and b = 1 about the line y = x, as shown in Figure 4-c. In fact, if we exchange the values of a and b in any Lissajous Curve, the result will be the original curve "flipped" about the line y = x. To prove this, consider the Lissajous Curve:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(at) \\ y & \mbox{=} & \sin(bt) \end{array}\right.$

If we replace a with b, and b with a, we will get the following Lissajous Curve:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(bt) \\ y & \mbox{=} & \sin(at) \end{array}\right.$

However, the same resulting curve could also be achieved by replacing x with y, and y with x, in the original curve. In other words, the exchange of a and b is equivalent to the exchange of x and y.
Moreover, in Cartesian coordinates, exchanging x and y in the equation of a curve is equivalent to flipping the curve about the line y = x. So exchanging a and b is also equivalent to flipping about the line y = x.

Figure 4-c Property #3

From these properties, we can see that many Lissajous Curves with different angular frequencies are actually the same thing, and we do not need to study all of them. In fact, we can use the following family to represent all Lissajous Curves:

$\left.\begin{array}{rcl} x & \mbox{=} & \sin(rt) \\ y & \mbox{=} & \sin(t) \end{array}\right.$

in which r stands for the angular frequency ratio of the two component vibrations. It could be rational or irrational. Here we are only studying the rational case because it's simpler; the irrational version comes in later sections.

The argument for this representation goes as follows. Take any Lissajous Curve,

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(at) \\ y & \mbox{=} & \sin(bt) \end{array}\right.$

in which a and b are integers. We can assume that a ≤ b, since if a > b we can exchange their values, and according to property #3 the curve will only be flipped about the line y = x. This doesn't affect the curve's structure, which is what we really care about. The next step is to divide both angular frequencies by b. According to property #2, the curve will not change, and we will get:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin({a \over b}t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$

The last set of parametric equations belongs to the family we mentioned above. So we only need to study this family of Lissajous Curves, since all others can be reduced to this case. The following animation shows some of the Lissajous Curves in this family, with the frequency ratio a / b varying continuously from 0 to 1:

Figure 4-d Lissajous Curves with varying frequency ratio

Surprisingly, as we can see in the animation, most of these Lissajous Curves are rather convoluted, but there are some simple ones scattered among them. A more careful examination shows that, when these simple patterns occur, the frequency ratio must be equal to a simple fraction. In fact, if we reduce the frequency ratio a / b to lowest terms (that is, a and b have a greatest common divisor of 1), then the larger a and b are, the more complicated our Lissajous Curve is going to be.

This phenomenon is not hard to understand if we look at the generation process of Lissajous Curves once again. Suppose the two component vibrations start at t = t0. As long as the frequency ratio is rational, the moving point will eventually return to its starting place and make a closed Lissajous Curve. Suppose this happens at t = t1, so the time period between t0 and t1 is a complete cycle of this Lissajous Curve. Moreover, since the starting point and ending point overlap, we must have:

$\left. \begin{array}{rcl} x (t_0) & \mbox{=} & x (t_1) \\y(t_0) & \mbox{=} & y(t_1) \end{array}\right.$

Substituting into the parametric equations of a Lissajous Curve with rational frequency ratio, we get:

$\left. \begin{array}{rcl} \sin({a \over b}t_0) & \mbox{=} & \sin({a \over b}t_1) \\ \sin(t_0) & \mbox{=} & \sin(t_1) \end{array}\right.$

which leads to:

${a \over b}(t_1 - t_0) = 2k_1 \pi$ (Eq. 1)

$(t_1 - t_0) = 2k_2 \pi$ (Eq. 2)

in which k1 and k2 are integers. The other possibility, $t_1 + t_0 = (2k + 1)\pi$, is omitted because such solutions represent the self-intersections inside one cycle.
At these intersections, although the positions overlap, the velocities don't, so the Lissajous Curve is not closed at these points.

Substituting Eq. 2 into Eq. 1, we get:

${a \over b} = {k_1 \over k_2}$

Since a / b is assumed to be an irreducible fraction (if it weren't, we could divide a and b by their common factor without changing the Lissajous Curve), the smallest k1 and k2 that satisfy this equation are k1 = a and k2 = b. Substituting back into Eq. 2, we get:

$(t_1 - t_0) = 2b \pi$

So the larger b is, the longer it takes before the Lissajous Curve closes and repeats itself, and the more convoluted it's going to be. For a simple angular frequency ratio like 1/2, the vibrations soon start to repeat, and the Lissajous Curve is simple, as shown in the previous table. However, a ratio like 37/335 will make the curve much more complicated. In the extreme case that the ratio is irrational, both a and b would have to be "infinitely large", and the curve is no longer closed. This special case is treated later in this section.

In conclusion, the angular frequency ratio a / b, reduced to lowest terms, determines the complexity of a Lissajous Curve. Large a and b lead to complicated Lissajous Curves; small a and b give us simple ones. This is why Lissajous Tuning Forks are so suitable for tuning notes. In music theory, most of the important intervals are simple fractions. For example, the interval of a perfect octave is 1:2, a perfect fifth is 2:3, a perfect fourth is 3:4, and so on[6]. These intervals all correspond to simple Lissajous Curves with distinctive features.

### Study φ with a and b fixed

In the previous subsection we figured out how the angular frequencies of a Lissajous Curve affect its appearance. Now we are going to fix the angular frequencies to study the third, and last, variable parameter: the phase difference φ. The following animation shows the Lissajous Curve

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + \phi) \\y & \mbox{=} & \sin(3t) \end{array}\right.$

with φ varying continuously from $0$ to $2\pi$:

Figure 5-a Lissajous Curve with varying phase difference φ

An interesting fact to notice is that the animation above looks more like a rotating 3-D curve than a changing 2-D one. The reason for this illusion is related to another way to define Lissajous Curves. At the beginning of this page, we introduced the following definition:

A Lissajous Curve is the superposition of two harmonic vibrations in perpendicular directions.

However, this is not the only available definition for Lissajous Curves. These curves can also be viewed as the projection of a 3-D harmonic height function over a circular base. The following set of images explains this definition in more detail:

Figure 5-b Circular base of harmonic height function
Figure 5-c Raising process
Figure 5-d Projection onto y-z plane

The first step to generate this harmonic height function is to draw a circular base in the x-y plane, as shown in Figure 5-b. The parametric equation of this circular base is:

$\left. \begin{array}{rcl} x & \mbox{=} & \cos(t + \phi) \\ y & \mbox{=} & \sin(t + \phi) \end{array}\right.$

The variable parameter φ here doesn't change the shape of the circle, as we still have the relationship $x^2 + y^2 = 1$. But if we change the value of φ, then the circle will rotate about the origin O. Of course we can't see the motion here, because O is also the center of the circle.
However, this rotation is going to make a difference later. In the next step, we raise (or lower) each point in this circular base to a certain height. This height is determined by the function:

$z = \sin(3t)$

The raising process is shown in Figure 5-c. Note that if we change φ now, the rotation is visible, since the curve's rotational symmetry is broken in the raising process.

Finally, if we make the projection of that rotating height curve onto the y-z plane, as shown in Figure 5-d, we can see that it's exactly the same as the animation in Figure 5-a. In other words, this Lissajous Curve can be viewed as the projection of this 3-D height function. Changing the value of φ makes the 3-D curve rotate, and in turn changes the 2-D curve. In fact, this is why we had the 3-D illusion in Figure 5-a.

Algebraic analysis agrees with this result. As we have seen, the parametric equations of this harmonic height function are:

$\left.\begin{array}{rcl} x & \mbox{=} & \cos(t + \phi) \\ y & \mbox{=} & \sin(t + \phi) \\ z & \mbox{=} & \sin(3t)\end{array}\right.$

To project it onto the y-z plane, we can set its x component to 0:

$\left.\begin{array}{rcl} x & \mbox{=} & 0 \\ y & \mbox{=} & \sin(t + \phi) \\ z & \mbox{=} & \sin(3t)\end{array}\right.$

Comparing this projection to the Lissajous Curve we had before:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + \phi) \\y & \mbox{=} & \sin(3t) \end{array}\right.$

we can see that they are indeed the same thing. Although we used the special case a = 1, b = 3 in our discussion, the result applies to all Lissajous Curves with rational frequency ratios. The following images show the 3-D height functions of some other Lissajous Curves:

Figure 5-e a = 2, b = 3, φ = 0
Figure 5-f a = 3, b = 5, φ = $\pi$/10

### To put it all together: a Java applet

So far we have talked a lot about the appearance of Lissajous Curves. We know that some simple cases can be solved by direct elimination of t in the parametric equations, that the frequency ratio of a Lissajous Curve determines its complexity, and that the phase difference φ affects a Lissajous Curve by rotating its corresponding 3-D height function. There is an interactive Java applet that puts all of these together: it allows the user to change both angular frequencies from 1 to 9, and to animate the curve by changing φ[7].

### What happens when things get irrational?

We have limited the previous discussions to Lissajous Curves with rational frequency ratios. So one may naturally wonder: what happens to all those with irrational frequency ratios? Well, they have all died painfully because of their irrationality ... Just kidding. They are still there, waiting for us to study. For example, consider the following Lissajous Curve:

$\left. \begin{array}{rcl} x & \mbox{=} & \sin(2t) \\y & \mbox{=} & \sin(\pi t) \end{array}\right.$

It's a known fact that $\pi$ is irrational. So the frequency ratio $2 \over \pi$ here is also irrational, and this curve is going to be radically different from any one we have encountered so far. See the animation below to get a sense of what it looks like:

Figure 6 Lissajous Curve with irrational angular frequency ratio

Figure 6 shows the trace of this Lissajous Curve in accelerating motion. In the beginning, it looks just like an ordinary Lissajous Curve. However, soon we can see the difference: this curve is never closed! It keeps going on and on, and eventually fills the whole 2 × 2 box.
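This behavior is easy to reproduce numerically. The sketch below (an illustrative addition using the same equations as Figure 6) traces the curve over a long time span; the longer the span, the more densely the 2 × 2 box is filled:

```python
import numpy as np
import matplotlib.pyplot as plt

# x = sin(2t), y = sin(pi*t): the frequency ratio 2/pi is irrational
t = np.linspace(0, 400, 400000)
x = np.sin(2 * t)
y = np.sin(np.pi * t)

plt.plot(x, y, linewidth=0.2)
plt.gca().set_aspect("equal")
plt.title("Irrational frequency ratio: the curve never closes")
plt.show()
```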
In fact, it can be shown that no Lissajous Curve with an irrational frequency ratio can close. Without loss of generality, let the parametric equations of such a Lissajous Curve be:

$\left.\begin{array}{rcl} x & \mbox{=} & \sin(rt) \\ y & \mbox{=} & \sin(t) \end{array}\right.$

in which r is an irrational number. Now we suppose that the curve is closed, and try to derive a contradiction. Let the starting time be t0 and the closing time be t1 as before, so we must have:

$\left.\begin{array}{rcl} x(t_0) & \mbox{=} & x(t_1) \\ y(t_0) & \mbox{=} & y(t_1) \end{array}\right.$

which gives us:

$\left.\begin{array}{rcl} \sin(rt_0) & \mbox{=} & \sin(rt_1) \\ \sin(t_0) & \mbox{=} & \sin(t_1) \end{array}\right.$

which leads to:

$r(t_1 - t_0) = 2p \pi$ (Eq. 1)

$(t_1 - t_0) = 2q \pi$ (Eq. 2)

in which p and q are integers. The other possibility, $t_1 + t_0 = (2q + 1)\pi$, is omitted for the same reason discussed before. Substituting Eq. 2 into Eq. 1, we get:

$r = {p \over q}$

However, recall that we assumed r to be irrational, which means it cannot be written as a fraction of integers ${p \over q}$. So the equation above cannot be true, and this Lissajous Curve is never closed.

# Why It's Interesting

As a family of beautiful figures, Lissajous Curves are themselves an interesting subject to study. Moreover, they also have some practical applications, including oscilloscopes and harmonographs.

## Application to Oscilloscopes

Figure 7-a Structure of oscilloscope

An oscilloscope is an electronic instrument that allows observation of constantly varying signal voltages. Figure 7-a shows the simplified structure of a typical Cathode Ray Oscilloscope.

In Figure 7-a, the electron gun at left generates a beam of electrons when heated, which is then directed through a deflecting system. The deflecting system is made of two sets of parallel metal plates, one for deflection in the x-direction, and the other for the y-direction. A signal voltage applied to the X-plates gives them an electric potential difference, generates a uniform electric field between them, and makes the electron beam deflect in the x-direction; the same goes for the Y-plates. The angle of deflection is proportional to the voltage applied. After passing the deflecting system, the electron beam is directed to a screen, which is covered by fluorescent material so that we can see green light at the places hit by electrons. If there is no voltage applied to the deflecting system, then the electron beam hits the screen right at the center. If there is voltage applied, then the electrons will hit somewhere else. So the oscilloscope makes signal voltages visible to us.

Now if we apply a sinusoidal signal to each set of plates, then both the X-plates and the Y-plates will have varying electric fields between them, and the electron beam will oscillate in both directions. As a result, the trace on the screen is the superposition of these two oscillations. As we have discussed before, this is a Lissajous Curve. The following images show some of the Lissajous Curves achieved on oscilloscopes:

Figure 7-b, Figure 7-c, Figure 7-d Lissajous Curves on oscilloscope screens

Similar to Lissajous Tuning Forks, Lissajous figures on the oscilloscope can give us some information about the two component vibrations. For example, just by looking at the Lissajous Curve in Figure 7-b, experienced observers can tell that the frequency ratio between its two component vibrations is 1:3, and the phase difference is $\pi \over 2$.
Engineers and physicists often use this method to analyze signals and waves.

## Application to Harmonographs

A harmonograph is a mechanical apparatus that employs pendulums to create geometric images. The drawings created are typically Lissajous Curves, or related drawings of greater complexity. See the following video to get a sense of how it works[8]:

As we can see in the video, a typical three-pendulum rotary harmonograph consists of a table, a drawing board, a pen, and three pendulums. Two of them are linear pendulums oriented in perpendicular directions, and they control the motion of the pen. The third pendulum is free to swing in both directions, and it's connected to the drawing board.

Harmonographs can be used to draw Lissajous Curves. We only need to fix the pendulum connected to the drawing board, and assume that there is no friction in the other two pendulums. In mechanics, it is a known fact that the motion of a frictionless pendulum can be viewed as simple harmonic motion, provided that the swinging angle is small[9]. So the pen's motion is the superposition of two perpendicular harmonic vibrations, which is a Lissajous Curve by definition.

However, in practice, friction cannot be completely eliminated. So the two linear pendulums actually undergo damped oscillations, rather than simple harmonic motion. Physicists give us the following equation of motion for damped harmonic oscillations[10]:

$x(t) = A e ^{- \gamma t} \sin(\omega t + \phi)$

in which $\gamma$ is called the damping constant. The larger $\gamma$ is, the more heavily the oscillator is damped, and the faster its magnitude decreases. Since both linear pendulums undergo damped harmonic oscillations, the pen has the following equation of motion:

$\left.\begin{array}{rcl} x & \mbox{=} & A e ^{- \gamma _1 t} \sin(\omega _1 t + \phi) \\ y & \mbox{=} & B e ^{- \gamma _2 t} \sin(\omega _2 t) \end{array}\right.$

If $\gamma _1 = \gamma _2$, then the common factor $e ^ {- \gamma t}$ can be extracted from the equations above, and we are left with the parametric equations of a Lissajous Curve scaled by a decaying factor:

$(x(t),y(t)) = e^ {- \gamma t}(A \sin(\omega _1 t + \phi),B \sin(\omega _2 t))$

which gives us a Lissajous Curve with exponentially decreasing magnitude. For example, see the following computer simulations of the harmonograph with A = B = 10, $\omega_1$ = 3, $\omega_2$ = 2, and φ = π/2:

Figure 8-a Lissajous Curve with decreasing magnitudes
Figure 8-b The corresponding frictionless case

Figure 8-a shows the damped case with $\gamma$ = 0.04, and Figure 8-b shows the corresponding frictionless motion with $\gamma$ = 0. One can clearly see that they have similar shapes, except that the magnitude of the curve in Figure 8-a decreases a little after each cycle, which is exactly what we mean by damping.

If $\gamma _1 \neq \gamma _2$, then things get more complicated, because the shape of the curve is distorted during the damping process. For example, see the following images:

Figure 8-c The frictionless case
Figure 8-d $\gamma _1 = \gamma _2 = 0.04$
Figure 8-e $\gamma _1 = 0.01$, $\gamma _2 = 0.04$

The three curves above all have A = B = 10, $\omega_1$ = $\omega_2$ = 1, and φ = π/4. Figure 8-c shows the frictionless case, which is a Lissajous Curve we have discussed before. Figure 8-d shows the damped case with equal damping constants, and one can see that the motion decreases uniformly in both directions. Figure 8-e shows the damped case with unequal damping constants.
Since the motion in the y-direction is damped much more heavily than in the x-direction, the shape of this curve is distorted towards the x-axis during the damping process.

If we add more complexity by releasing the free pendulum connected to the drawing board, then the curve will be the superposition of all these motions. We are not going to study the math behind this, since it's way too complicated, with more than 10 variable parameters. However, as we have seen in the video, more complexity also gives us more beautiful and interesting images. The following images are works created by harmonographs. Some of them are computer simulations; others are real pictures from harmonograph makers.

# References

1. Jules Antoine Lissajous, from Wikipedia. A biography of Jules Lissajous, discoverer of Lissajous Curves.
2. Lissajous tuning forks: the standardization of musical sound, from the Whipple Collections. A brief introduction to Lissajous' tuning forks and his contribution to acoustic science.
3. Trigonometric Delights, by Eli Maor, Princeton University Press. Pg. 145-149: Jules Lissajous and his figures.
4. List of trigonometric identities, from Wikipedia. Lists the trigonometric formulas we used to derive the shapes of Lissajous Curves.
5. Polynomial, from Wikipedia. Briefly explains why we can't find a general solution for equations of degree 5 or higher.
6. Interval (music), from Wikipedia. Explains more about musical notes and their frequency intervals.
7. Animated Lissajous figures. The source of the embedded Java applet.
8. Three Pendulum Harmonograph, from YouTube. The source of the embedded video.
9. Pendulum, from Wikipedia. Explains the physics behind pendulums.
10. Damping, from Wikipedia. Explains how the equation of motion for damped oscillators is derived.
http://mathoverflow.net/questions/117544?sort=votes
## Solution to differential equation

a) How can one solve, or at least prove the existence of a solution to, the following differential equation for given initial conditions $y(s)=y_0>0$ and $y'(s)=y_1$, $s<0$:

$$y''+(2-n)\coth(t) y'=\frac{(n-1)\sinh(2y)}{2}, \quad t<0.$$

Here $n$ is an integer $>2$.

b) Can the previous equation have two different solutions (with different initial conditions) in $(-2,-1)$, such that one is bounded and the other is unbounded?

- @Shahrooz And what is the solution? Can you spell it out? Thanks. – djoke Dec 29 at 20:59
- @Shahrooz Probably you asked Matlab to solve a slightly different equation, with the right-hand side containing sinh(2t)! – djoke Dec 30 at 6:52
- Sorry, you are right. – Shahrooz Dec 30 at 16:19

## 2 Answers

$\coth(t)$ has a singularity at $t=0$, so the hypotheses of the existence and uniqueness theorems are not satisfied there. In fact, if $\lim_{t \to 0} y(t) = y_0$ and $\lim_{t \to 0} y'(t) = y_1$ exist, then $y''(t) \sim (n-2) y_1 t^{-1}$ as $t \to 0$. If $y_1 \ne 0$, this is impossible, as $t^{-1}$ is not integrable at $0$. So there are no solutions with such an initial condition.

EDIT: For some initial conditions at $t=-2$, the solution will "blow up" before $t=-1$. It suffices to prove, e.g., that on any interval $[-2,a)$ where the solution exists we have $\dfrac{dy}{dt} \ge y^2$ for $t \ge -2$ with $y(-2) > 1$, as then

$$a - (-2) \le \int_{y(-2)}^{y(a)} \dfrac{dy}{y^2} < \int_{1}^\infty \dfrac{dy}{y^2} = 1$$

Now note that if $f = \dfrac{dy}{dt} - y^2$ we have

$$\dfrac{df}{dt} = \dfrac{d^2y}{dt^2} - 2 y \dfrac{dy}{dt} = ((n-2) \coth(t) - 2 y)(y^2 + f) + \dfrac{n-1}{2} \sinh(2y)$$

Given $n$, there is some $Y$ such that for all $f \in [0,1]$, $t \in [-2,-1]$ and $y \ge Y$, the right side is positive. So if $y(-2) > Y$ and $y'(-2) > y(-2)^2$, we will have $y' > y^2$ for $t \in [-2,a]$.

- I updated the formulation. The question is: can we find a global solution, for example in (-2,-1)? – djoke Jan 3 at 18:17
- Why can you assume that $f\in[0,1]$? Shouldn't you prove that $f>0$? – djoke Jan 4 at 16:38
- You start with $f > 0$ at $t=-2$. $f$ is continuous. In order for $f$ to get to $0$, it would have to pass through the interval $(0, f(-2))$, which it can't because $df/dt > 0$ whenever $0 \le f \le 1$. – Robert Israel Jan 4 at 19:17
- Thanks! One more question. If the initial speed $y'(-2)=0$, can you have a similar conclusion? – djoke Jan 4 at 21:16
- Presumably. E.g. for $n=3$, numerical methods indicate the solution blows up before $t=-1$ for $y(-2) \ge .902$ approximately. – Robert Israel Jan 6 at 7:10

The result you want is true because of the existence and uniqueness theorem for second-order non-linear ODEs; see, for example, Boyce and DiPrima. Writing $y^{\prime\prime}=f(t,y,y^\prime)$ and verifying that $f$, $f_y$, and $f_{y^\prime}$ are continuous, you can guarantee existence and uniqueness in a small interval around the initial condition. Here you need to see that the functions in the problem ($\coth$ and $\sinh$) are smooth.

- Thanks, but I need to prove more than the existence. – djoke Dec 30 at 13:17
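The blow-up behavior mentioned in the last comment can be explored numerically. Below is an illustrative SciPy sketch (not from the thread; the cutoff value, tolerances, and test values of y(-2) are arbitrary choices) that integrates the equation for n = 3 from t = -2 with y'(-2) = 0 and checks whether the solution reaches t = -1:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3

def rhs(t, u):
    # u = [y, y']; rewrite y'' + (2-n) coth(t) y' = (n-1) sinh(2y)/2
    y, yp = u
    coth = np.cosh(t) / np.sinh(t)   # t in [-2, -1], so sinh(t) != 0
    return [yp, (n - 1) * np.sinh(2 * y) / 2 - (2 - n) * coth * yp]

def blowup(t, u):
    return abs(u[0]) - 50.0          # illustrative cutoff for "blow-up"
blowup.terminal = True

for y0 in (0.5, 0.9, 0.95, 1.0):
    sol = solve_ivp(rhs, [-2, -1], [y0, 0.0], events=blowup,
                    rtol=1e-10, atol=1e-10)
    if sol.status == 1:              # terminal event fired before t = -1
        print(f"y(-2) = {y0}: blow-up indicated near t = {sol.t[-1]:.4f}")
    else:
        print(f"y(-2) = {y0}: reached t = -1 with y = {sol.y[0, -1]:.4f}")
```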
http://en.wikipedia.org/wiki/Discrete_Fourier_transform
# Discrete Fourier transform

Figure: Relationship between the (continuous) Fourier transform and the discrete Fourier transform. Left column: a continuous function (top) and its Fourier transform (bottom). Center-left column: periodic summation of the original function (top); its Fourier transform (bottom) is zero except at discrete points, and the inverse transform is a sum of sinusoids called a Fourier series. Center-right column: the original function is discretized (multiplied by a Dirac comb) (top); its Fourier transform (bottom) is a periodic summation (DTFT) of the original transform. Right column: the DFT (bottom) computes discrete samples of the continuous DTFT; the inverse DFT (top) is a periodic summation of the original samples. The FFT algorithm computes one cycle of the DFT, and its inverse is one cycle of the DFT inverse.

Figure: Illustration of using Dirac comb functions and the convolution theorem to model the effects of sampling and/or periodic summation. At lower left is a DTFT, the spectral result of sampling s(t) at intervals of T. The spectral sequences at (a) upper right and (b) lower right are respectively computed from (a) one cycle of the periodic summation of s(t) and (b) one cycle of the periodic summation of the s(nT) sequence. The respective formulas are (a) the Fourier series integral and (b) the DFT summation. Its similarities to the original transform, S(f), and its relative computational ease are often the motivation for computing a DFT sequence.

In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered by their frequencies, that has those same sample values. It can be said to convert the sampled function from its original domain (often time or position along a line) to the frequency domain.

The input samples are complex numbers (in practice, usually real numbers), and the output coefficients are complex as well. The frequencies of the output sinusoids are integer multiples of a fundamental frequency, whose corresponding period is the length of the sampling interval. The combination of sinusoids obtained through the DFT is therefore periodic with that same period. The DFT differs from the discrete-time Fourier transform (DTFT) in that its input and output sequences are both finite; it is therefore said to be the Fourier analysis of finite-domain (or periodic) discrete-time functions.

The DFT is the most important discrete transform, used to perform Fourier analysis in many practical applications. In digital signal processing, the function is any quantity or signal that varies over time, such as the pressure of a sound wave, a radio signal, or daily temperature readings, sampled over a finite time interval (often defined by a window function). In image processing, the samples can be the values of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers.

Since it deals with a finite amount of data, the DFT can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms;[1] so much so that the terms "FFT" and "DFT" are often used interchangeably.
The terminology is further blurred by the (now rare) synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" but has the same initialism.

## Definition

The sequence of N complex numbers x0, ..., xN−1 is transformed into an N-periodic sequence of complex numbers according to the DFT formula:

$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i 2 \pi k n / N}.$ (Eq.1)

The transform is sometimes denoted by the symbol $\mathcal{F}$, as in $\mathbf{X} = \mathcal{F} \left \{ \mathbf{x} \right \}$ or $\mathcal{F} \left ( \mathbf{x} \right )$ or $\mathcal{F} \mathbf{x}$.

Eq.1 can be interpreted or derived in various ways, for example:

• It completely describes the discrete-time Fourier transform (DTFT) of an N-periodic sequence, which comprises only discrete frequency components (see Discrete-time Fourier transform § Periodic data).
• It can also provide uniformly spaced samples of the continuous DTFT of a finite-length sequence (see Sampling the DTFT).
• It is the cross-correlation of the input sequence, xn, and a complex sinusoid at frequency k/N. Thus it acts like a matched filter for that frequency.
• It is the discrete analogue of the formula for the coefficients of a Fourier series:

$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cdot e^{i 2 \pi k n / N},$ (Eq.2)

which is the inverse DFT (IDFT).

Each $X_k$ is a complex number that encodes both the amplitude and phase of a sinusoidal component of the function $x_n$. The sinusoid's frequency is k/N cycles per sample. Its amplitude and phase are:

$|X_k|/N = \sqrt{\operatorname{Re}(X_k)^2 + \operatorname{Im}(X_k)^2}/N$

$\arg(X_k) = \operatorname{atan2}\big( \operatorname{Im}(X_k), \operatorname{Re}(X_k) \big),$

where atan2 is the two-argument form of the arctan function.

The normalization factor multiplying the DFT and IDFT (here 1 and 1/N) and the signs of the exponents are merely conventions, and differ in some treatments. The only requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be 1/N. A normalization of $\scriptstyle \sqrt{1/N}$ for both the DFT and IDFT, for instance, makes the transforms unitary.

In the following discussion the terms "sequence" and "vector" will be considered interchangeable.
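As a concrete illustration, Eq.1 and Eq.2 can be transcribed directly into NumPy. This is a sketch for exposition only; in practice one would call an FFT routine such as numpy.fft.fft, which computes the same quantity far more efficiently:

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of Eq.1."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.sum(x * np.exp(-2j * np.pi * k * n / N), axis=1)

def idft(X):
    """Direct O(N^2) evaluation of Eq.2."""
    N = len(X)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.sum(X * np.exp(2j * np.pi * k * n / N), axis=1) / N

x = np.random.randn(8) + 1j * np.random.randn(8)
X = dft(x)
print(np.allclose(X, np.fft.fft(x)))   # True: matches the library routine
print(np.allclose(idft(X), x))         # True: the IDFT recovers the input
```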
### The Plancherel theorem and Parseval's theorem

If Xk and Yk are the DFTs of xn and yn respectively, then the Plancherel theorem states:

$\sum_{n=0}^{N-1} x_n y^*_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k Y^*_k$

where the star denotes complex conjugation. Parseval's theorem is a special case of the Plancherel theorem and states:

$\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2.$

These theorems are also equivalent to the unitary condition below.

### Periodicity

If the expression that defines the DFT is evaluated for all integers k instead of just for $k = 0, \dots, N-1$, then the resulting infinite sequence is a periodic extension of the DFT, periodic with period N. The periodicity can be shown directly from the definition:

$X_{k+N} \ \stackrel{\mathrm{def}}{=} \ \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} (k+N) n} = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} k n} \underbrace{e^{-2 \pi i n}}_{1} = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} k n} = X_k.$

Similarly, it can be shown that the IDFT formula leads to a periodic extension.

### Shift theorem

Multiplying $x_n$ by a linear phase $e^{\frac{2\pi i}{N}n m}$ for some integer m corresponds to a circular shift of the output $X_k$: $X_k$ is replaced by $X_{k-m}$, where the subscript is interpreted modulo N (i.e., periodically). Similarly, a circular shift of the input $x_n$ corresponds to multiplying the output $X_k$ by a linear phase. Mathematically, if $\{x_n\}$ represents the vector x, then

if $\mathcal{F}(\{x_n\})_k=X_k$

then $\mathcal{F}(\{ x_n\cdot e^{\frac{2\pi i}{N}n m} \})_k=X_{k-m}$

and $\mathcal{F}(\{x_{n-m}\})_k=X_k\cdot e^{-\frac{2\pi i}{N}k m}$

### Circular convolution theorem and cross-correlation theorem

The convolution theorem for the discrete-time Fourier transform indicates that a convolution of two infinite sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when the sequences are of finite length, N. In terms of the DFT and inverse DFT, it can be written as follows:

$\mathcal{F}^{-1} \left \{ \mathbf{X\cdot Y} \right \}_n \ = \sum_{l=0}^{N-1}x_l \cdot (y_N)_{n-l} \ \ \stackrel{\mathrm{def}}{=} \ \ (\mathbf{x * y_N})_n\ ,$

which is the convolution of the $\mathbf{x}$ sequence with a $\mathbf{y}$ sequence extended by periodic summation:

$(\mathbf{y_N})_n \ \stackrel{\mathrm{def}}{=} \ \sum_{p=-\infty}^{\infty} y_{(n-pN)} = y_{n \bmod N}. \,$

Similarly, the cross-correlation of $\mathbf{x}$ and $\mathbf{y_N}$ is given by:

$\mathcal{F}^{-1} \left \{ \mathbf{X^* \cdot Y} \right \}_n = \sum_{l=0}^{N-1}x_l^* \cdot (y_N)_{n+l} \ \ \stackrel{\mathrm{def}}{=} \ \ (\mathbf{x \star y_N})_n\ .$

A direct evaluation of either summation (above) requires $\scriptstyle O(N^2)$ operations for an output sequence of length N. An indirect method, using transforms, can take advantage of the $\scriptstyle O(N\log N)$ efficiency of the fast Fourier transform (FFT) to achieve much better performance. Furthermore, convolutions can be used to efficiently compute DFTs via Rader's FFT algorithm and Bluestein's FFT algorithm. Methods have also been developed to use circular convolution as part of an efficient process that achieves normal (non-circular) convolution with an $\mathbf{x}$ or $\mathbf{y}$ sequence potentially much longer than the practical transform size (N). Two such methods are called overlap-save and overlap-add.[2]
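The circular convolution theorem is easy to verify numerically. The following illustrative sketch compares a direct evaluation of the summation above with the transform route:

```python
import numpy as np

N = 16
x = np.random.randn(N)
y = np.random.randn(N)

# Circular convolution evaluated directly from the definition
direct = np.array([sum(x[l] * y[(n - l) % N] for l in range(N))
                   for n in range(N)])

# The same result via DFTs: inverse transform of the pointwise product
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(direct, via_dft))   # True
```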
### Convolution theorem duality

It can also be shown that:

$\mathcal{F} \left \{ \mathbf{x\cdot y} \right \}_k \ \stackrel{\mathrm{def}}{=} \sum_{n=0}^{N-1} x_n \cdot y_n \cdot e^{-\frac{2\pi i}{N} k n} =\frac{1}{N} (\mathbf{X * Y_N})_k, \,$

which is the circular convolution of $\mathbf{X}$ and $\mathbf{Y}$.

### Trigonometric interpolation polynomial

The trigonometric interpolation polynomial

$p(t) = \frac{1}{N} \left[ X_0 + X_1 e^{it} + \cdots + X_{N/2-1} e^{(N/2-1)it} + X_{N/2} \cos(Nt/2) + X_{N/2+1} e^{(-N/2+1)it} + \cdots + X_{N-1} e^{-it} \right]$ for N even,

$p(t) = \frac{1}{N} \left[ X_0 + X_1 e^{it} + \cdots + X_{\lfloor N/2 \rfloor} e^{\lfloor N/2 \rfloor it} + X_{\lfloor N/2 \rfloor+1} e^{-\lfloor N/2 \rfloor it} + \cdots + X_{N-1} e^{-it} \right]$ for N odd,

where the coefficients Xk are given by the DFT of xn above, satisfies the interpolation property $p(2\pi n/N) = x_n$ for $n=0,\ldots,N-1$. For even N, notice that the Nyquist component $\frac{X_{N/2}}{N} \cos(Nt/2)$ is handled specially.

This interpolation is not unique: aliasing implies that one could add N to any of the complex-sinusoid frequencies (e.g. changing $e^{-it}$ to $e^{i(N-1)t}$) without changing the interpolation property, but giving different values in between the $x_n$ points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the $x_n$ are real numbers, then $p(t)$ is real as well.

In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to $N-1$ (instead of roughly $-N/2$ to $+N/2$ as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real $x_n$; its use is a common mistake.

### The unitary DFT

Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as a Vandermonde matrix:

$\mathbf{F} = \begin{bmatrix} \omega_N^{0 \cdot 0} & \omega_N^{0 \cdot 1} & \ldots & \omega_N^{0 \cdot (N-1)} \\ \omega_N^{1 \cdot 0} & \omega_N^{1 \cdot 1} & \ldots & \omega_N^{1 \cdot (N-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_N^{(N-1) \cdot 0} & \omega_N^{(N-1) \cdot 1} & \ldots & \omega_N^{(N-1) \cdot (N-1)} \\ \end{bmatrix}$

where $\omega_N = e^{-2 \pi i/N}\,$ is a primitive Nth root of unity. The inverse transform is then given by the inverse of the above matrix:

$\mathbf{F}^{-1}=\frac{1}{N}\mathbf{F}^*$

With unitary normalization constants $1/\sqrt{N}$, the DFT becomes a unitary transformation, defined by a unitary matrix:

$\mathbf{U}=\mathbf{F}/\sqrt{N}$

$\mathbf{U}^{-1}=\mathbf{U}^*$

$|\det(\mathbf{U})|=1$

where det() is the determinant function. The determinant is the product of the eigenvalues, which are always $\pm 1$ or $\pm i$ as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.
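These matrix identities can be checked numerically. The sketch below (illustrative) builds the Vandermonde matrix, scales it, and verifies both the unitarity of $\mathbf{U}$ and the inversion formula $\mathbf{F}^{-1}=\frac{1}{N}\mathbf{F}^*$:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # Vandermonde DFT matrix
U = F / np.sqrt(N)

print(np.allclose(U @ U.conj().T, np.eye(N)))        # U is unitary
print(np.allclose(np.linalg.inv(F), F.conj() / N))   # F^{-1} = F*/N
print(abs(np.linalg.det(U)))                         # magnitude 1
```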
The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):

$\sum_{m=0}^{N-1}U_{km}U_{mn}^*=\delta_{kn}$

If $\mathbf{X}$ is defined as the unitary DFT of the vector $\mathbf{x}$, then

$X_k=\sum_{n=0}^{N-1} U_{kn}x_n$

and the Plancherel theorem is expressed as:

$\sum_{n=0}^{N-1}x_n y_n^* = \sum_{k=0}^{N-1}X_k Y_k^*$

If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case $\mathbf{x} = \mathbf{y}$, this implies that the length of a vector is preserved as well; this is just Parseval's theorem:

$\sum_{n=0}^{N-1}|x_n|^2 = \sum_{k=0}^{N-1}|X_k|^2$

### Expressing the inverse DFT in terms of the DFT

A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)

First, we can compute the inverse DFT by reversing the inputs:

$\mathcal{F}^{-1}(\{x_n\}) = \mathcal{F}(\{x_{N - n}\}) / N$

(As usual, the subscripts are interpreted modulo N; thus, for $n=0$, we have $x_{N-0}=x_0$.)

Second, one can also conjugate the inputs and outputs:

$\mathcal{F}^{-1}(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*)^* / N$

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define swap($x_n$) as $x_n$ with its real and imaginary parts swapped; that is, if $x_n = a + b i$ then swap($x_n$) is $b + a i$. Equivalently, swap($x_n$) equals $i x_n^*$. Then

$\mathcal{F}^{-1}(\mathbf{x}) = \textrm{swap}(\mathcal{F}(\textrm{swap}(\mathbf{x}))) / N$

That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).

The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory, i.e. which is its own inverse. In particular, $T(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*) / \sqrt{N}$ is clearly its own inverse: $T(T(\mathbf{x})) = \mathbf{x}$. A closely related involutory transformation (by a factor of $(1+i)/\sqrt{2}$) is $H(\mathbf{x}) = \mathcal{F}((1+i) \mathbf{x}^*) / \sqrt{2N}$, since the $(1+i)$ factors in $H(H(\mathbf{x}))$ cancel the 2. For real inputs $\mathbf{x}$, the real part of $H(\mathbf{x})$ is none other than the discrete Hartley transform, which is also involutory.

### Eigenvalues and eigenvectors

The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research.
Consider the unitary form $\mathbf{U}$ defined above for the DFT of length N, where

$\mathbf{U}_{m,n} = \frac1{\sqrt{N}}\omega_N^{(m-1)(n-1)} = \frac1{\sqrt{N}}e^{-\frac{2\pi i}N (m-1)(n-1)}.$

This matrix satisfies the matrix polynomial equation:

$\mathbf{U}^4 = \mathbf{I}.$

This can be seen from the inverse properties above: operating $\mathbf{U}$ twice gives the original data in reverse order, so operating $\mathbf{U}$ four times gives back the original data and is thus the identity matrix. This means that the eigenvalues $\lambda$ satisfy the equation:

$\lambda^4 = 1.$

Therefore, the eigenvalues of $\mathbf{U}$ are the fourth roots of unity: $\lambda$ is +1, −1, +i, or −i.

Since there are only four distinct eigenvalues for this $N\times N$ matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (Note that there are N independent eigenvectors; a unitary matrix is never defective.)

The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of N modulo 4, and is given by the following table of multiplicities of the eigenvalues λ of the unitary DFT matrix U as a function of the transform size N (in terms of an integer m):

| size N | λ = +1 | λ = −1 | λ = −i | λ = +i |
| --- | --- | --- | --- | --- |
| 4m | m + 1 | m | m | m − 1 |
| 4m + 1 | m + 1 | m | m | m |
| 4m + 2 | m + 1 | m + 1 | m | m |
| 4m + 3 | m + 1 | m + 1 | m + 1 | m |

Otherwise stated, the characteristic polynomial of $\mathbf{U}$ is:

$\det (\lambda I - \mathbf{U})= (\lambda-1)^{\left\lfloor \tfrac {N+4}{4}\right\rfloor} (\lambda+1)^{\left\lfloor \tfrac {N+2}{4}\right\rfloor} (\lambda+i)^{\left\lfloor \tfrac {N+1}{4}\right\rfloor} (\lambda-i)^{\left\lfloor \tfrac {N-1}{4}\right\rfloor}.$

No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan et al., 2000; Hanna et al., 2004; Gurevich and Hadani, 2008).

A straightforward approach is to discretize the eigenfunction of the continuous Fourier transform, namely the Gaussian function. Since periodic summation of the function means discretizing its frequency spectrum and discretization means periodic summation of the spectrum, the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform:

• $F(m) = \sum_{k\in\mathbb{Z}} \exp\left(-\frac{\pi\cdot(m+N\cdot k)^2}{N}\right)$

A closed-form expression for the series is not known, but it converges rapidly.
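The multiplicity table can be spot-checked numerically. Here is an illustrative sketch that counts how many eigenvalues of U land near each fourth root of unity (U is normal, so its eigenvalues are well-conditioned):

```python
import numpy as np

for N in (8, 9, 10, 11):   # N = 4m, 4m+1, 4m+2, 4m+3 with m = 2
    n = np.arange(N)
    U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    lam = np.linalg.eigvals(U)
    counts = [int(np.sum(np.abs(lam - r) < 1e-8)) for r in (1, -1, -1j, 1j)]
    print(f"N = {N}: multiplicities of (+1, -1, -i, +i) are {counts}")
```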
Two other simple closed-form analytical eigenvectors for special DFT period N were found (Kong, 2008):

For DFT period N = 2L + 1 = 4K + 1, where K is an integer, the following is an eigenvector of the DFT:

• $F(m)=\prod_{s=K+1}^L\left[\cos\left(\frac{2\pi}{N}m\right)- \cos\left(\frac{2\pi}{N}s\right)\right]$

For DFT period N = 2L = 4K, where K is an integer, the following is an eigenvector of the DFT:

• $F(m)=\sin\left(\frac{2\pi}{N}m\right)\prod_{s=K+1}^{L-1}\left[\cos\left(\frac{2\pi}{N}m\right)- \cos\left(\frac{2\pi}{N}s\right)\right]$

The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform: the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.

### Uncertainty principle

If the random variable $X_k$ is constrained by:

$\sum_{n=0}^{N-1}|X_n|^2=1$

then $P_n=|X_n|^2$ may be considered to represent a discrete probability mass function of n, with an associated probability mass function constructed from the transformed variable:

$Q_m=N|x_m|^2$

For the case of continuous functions P(x) and Q(k), the Heisenberg uncertainty principle states that:

$D_0(X)D_0(x)\ge\frac{1}{16\pi^2}$

where $D_0(X)$ and $D_0(x)$ are the variances of $|X|^2$ and $|x|^2$ respectively, with the equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Nevertheless, a meaningful uncertainty principle has been introduced by Massar and Spindel.[3]

However, the Hirschman uncertainty does have a useful analog for the case of the DFT.[4] The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions. In the discrete case, the Shannon entropies are defined as:

$H(X)=-\sum_{n=0}^{N-1} P_n\ln P_n$

and

$H(x)=-\sum_{m=0}^{N-1} Q_m\ln Q_m$

and the Hirschman uncertainty principle becomes:[4]

$H(X)+H(x) \ge \ln(N)$

The equality is obtained for $P_n$ equal to translations and modulations of a suitably normalized Kronecker comb of period A, where A is any exact integer divisor of N. The probability mass function $Q_m$ will then be proportional to a suitably translated Kronecker comb of period B=N/A.[4]

### The real-input DFT

If $x_0, \ldots, x_{N-1}$ are real numbers, as they often are in practical applications, then the DFT obeys the symmetry:

$X_{N-k} \equiv X_{-k} = X_k^*,$

where $X^*\,$ denotes complex conjugation. It follows that X0 is real-valued and, for even N, so is XN/2; the remainder of the DFT is then completely specified by just N/2 − 1 complex numbers.

## Generalized DFT (shifted and non-linear phase)

It is possible to shift the transform sampling in the time and/or frequency domain by some real shifts a and b, respectively.
This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT:

$X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2 \pi i}{N} (k+b) (n+a)} \quad \quad k = 0, \dots, N-1.$

Most often, shifts of $1/2$ (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, $a=1/2$ produces a signal that is anti-periodic in the frequency domain ($X_{k+N} = - X_k$) and vice versa for $b=1/2$. Thus, the specific case of $a = b = 1/2$ is known as an odd-time odd-frequency discrete Fourier transform (or O2 DFT). Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.

Another interesting choice is $a=b=-(N-1)/2$, which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when N is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005).[5]

The term GDFT is also used for the non-linear phase extensions of the DFT. Hence, the GDFT method provides a generalization for constant-amplitude orthogonal block transforms, including linear and non-linear phase types. The GDFT is a framework for improving the time and frequency domain properties of the traditional DFT, e.g. auto/cross-correlations, by adding a properly designed phase shaping function (non-linear, in general) to the original linear phase functions (Akansu and Agirman-Tosun, 2010).[6]

The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to complex shifts a and b above.

## Multidimensional DFT

The ordinary DFT transforms a one-dimensional sequence or array $x_n$ that is a function of exactly one discrete variable n. The multidimensional DFT of a multidimensional array $x_{n_1, n_2, \dots, n_d}$ that is a function of d discrete variables $n_\ell = 0, 1, \dots, N_\ell-1$ for $\ell$ in $1, 2, \dots, d$ is defined by:

$X_{k_1, k_2, \dots, k_d} = \sum_{n_1=0}^{N_1-1} \left(\omega_{N_1}^{~k_1 n_1} \sum_{n_2=0}^{N_2-1} \left( \omega_{N_2}^{~k_2 n_2} \cdots \sum_{n_d=0}^{N_d-1} \omega_{N_d}^{~k_d n_d}\cdot x_{n_1, n_2, \dots, n_d} \right) \right) \, ,$

where $\omega_{N_\ell} = \exp(-2\pi i/N_\ell)$ as above and the d output indices run from $k_\ell = 0, 1, \dots, N_\ell-1$. This is more compactly expressed in vector notation, where we define $\mathbf{n} = (n_1, n_2, \dots, n_d)$ and $\mathbf{k} = (k_1, k_2, \dots, k_d)$ as d-dimensional vectors of indices from 0 to $\mathbf{N} - 1$, which we define as $\mathbf{N} - 1 = (N_1 - 1, N_2 - 1, \dots, N_d - 1)$:

$X_\mathbf{k} = \sum_{\mathbf{n}=0}^{\mathbf{N}-1} e^{-2\pi i \mathbf{k} \cdot (\mathbf{n} / \mathbf{N})} x_\mathbf{n} \, ,$

where the division $\mathbf{n} / \mathbf{N}$ is defined as $\mathbf{n} / \mathbf{N} = (n_1/N_1, \dots, n_d/N_d)$ to be performed element-wise, and the sum denotes the set of nested summations above.
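Because the exponential factors separate, the nested sums can be evaluated one dimension at a time (the row-column idea discussed next). The following illustrative sketch evaluates the two-dimensional definition by brute force and compares it with nested one-dimensional FFTs and with numpy.fft.fft2:

```python
import numpy as np

x = np.random.randn(4, 6)
N1, N2 = x.shape

# Brute-force evaluation of the 2-D definition
brute = np.zeros((N1, N2), dtype=complex)
for k1 in range(N1):
    for k2 in range(N2):
        for n1 in range(N1):
            for n2 in range(N2):
                brute[k1, k2] += x[n1, n2] * np.exp(
                    -2j * np.pi * (k1 * n1 / N1 + k2 * n2 / N2))

# Row-column approach: 1-D DFTs along one axis, then the other
row_col = np.fft.fft(np.fft.fft(x, axis=1), axis=0)

print(np.allclose(brute, row_col))          # True
print(np.allclose(brute, np.fft.fft2(x)))   # True
```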
The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by: $x_\mathbf{n} = \frac{1}{\prod_{\ell=1}^d N_\ell} \sum_{\mathbf{k}=0}^{\mathbf{N}-1} e^{2\pi i \mathbf{n} \cdot (\mathbf{k} / \mathbf{N})} X_\mathbf{k} \, .$ As the one-dimensional DFT expresses the input $x_n$ as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids. The direction of oscillation in space is $\mathbf{k} / \mathbf{N}$. The amplitudes are $X_\mathbf{k}$. This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations. The solution is broken up into plane waves. The multidimensional DFT can be computed by the composition of a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case $x_{n_1,n_2}$ the $N_1$ independent DFTs of the rows (i.e., along $n_2$) are computed first to form a new array $y_{n_1,k_2}$. Then the $N_2$ independent DFTs of y along the columns (along $n_1$) are computed to form the final result $X_{k_1,k_2}$. Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute. An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the row-column algorithm. There are also intrinsically multidimensional FFT algorithms. ### The real-input multidimensional DFT For input data $x_{n_1, n_2, \dots, n_d}$ consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above: $X_{k_1, k_2, \dots, k_d} = X_{N_1 - k_1, N_2 - k_2, \dots, N_d - k_d}^* ,$ where the star again denotes complex conjugation and the $\ell$-th subscript is again interpreted modulo $N_\ell$ (for $\ell = 1,2,\ldots,d$). ## Applications The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform. ### Spectral analysis When the DFT is used for spectral analysis, the $\{x_n\}\,$ sequence usually represents a finite set of uniformly spaced time-samples of some signal $x(t)\,$, where t represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (aka resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. 
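The "multiple DFTs" idea can be sketched in a few lines. The following illustrative example builds a crude spectrogram of a chirp by applying a windowed DFT to overlapping segments (the sample rate, segment length, hop size, and window are all arbitrary choices):

```python
import numpy as np

fs = 1000                                        # assumed sample rate, Hz
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * (50 + 100 * t) * t)     # frequency rises with time

seg, hop = 256, 128
win = np.hanning(seg)
frames = [sig[i:i + seg] * win
          for i in range(0, len(sig) - seg + 1, hop)]
spec = np.abs(np.array([np.fft.rfft(f) for f in frames])) ** 2

print(spec.shape)   # (frames, seg//2 + 1): time-frequency power estimates
```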
If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation.

A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. That procedure is illustrated at Sampling the DTFT.

• The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT.
• As already noted, leakage imposes a limit on the inherent resolution of the DTFT. So there is a practical limit to the benefit that can be obtained from a fine-grained DFT.

### Filter bank

See FFT filter banks and Sampling the DTFT.

### Data compression

The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.) Some relatively recent compression algorithms, however, use wavelet transforms, which give a more uniform compromise between time and frequency domain than obtained by chopping data into segments and transforming each segment. In the case of JPEG2000, this avoids the spurious image features that appear when images are highly compressed with the original JPEG.

### Partial differential equations

Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite $N$). The advantage of this approach is that it expands the signal in complex exponentials $e^{inx}$, which are eigenfunctions of differentiation: $\frac{d}{dx} e^{inx} = in\, e^{inx}$. Thus, in the Fourier representation, differentiation is simple: we just multiply by $in$. (Note, however, that the choice of $n$ is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.

### Polynomial multiplication

Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around."
This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d > deg(a(x)) + deg(b(x)). Then,

$\mathbf{c} = \mathbf{a} * \mathbf{b}$

where $\mathbf{c}$ is the vector of coefficients for c(x), and the convolution operator $*\,$ is defined by

$c_n = \sum_{m=0}^{d-1}a_m b_{n-m\ \mathrm{mod}\ d} \qquad\qquad\qquad n=0,1,\dots,d-1.$

But convolution becomes multiplication under the DFT:

$\mathcal{F}(\mathbf{c}) = \mathcal{F}(\mathbf{a})\mathcal{F}(\mathbf{b})$

Here the vector product is taken elementwise. Thus the coefficients of the product polynomial c(x) are just the terms 0, ..., deg(a(x)) + deg(b(x)) of the coefficient vector

$\mathbf{c} = \mathcal{F}^{-1}(\mathcal{F}(\mathbf{a})\mathcal{F}(\mathbf{b})).$

With a fast Fourier transform, the resulting algorithm takes O(N log N) arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).

#### Multiplication of large integers

The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base. After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication.

#### Convolution

When data is convolved with a function with wide support, such as for downsampling by a large sampling ratio, the convolution theorem and the FFT algorithm can make it faster to transform the data, multiply it pointwise by the transform of the filter, and then inverse-transform it. Alternatively, a good filter is obtained by simply truncating the transformed data and re-transforming the shortened data set.

## Some discrete Fourier transform pairs

| $x_n = \frac{1}{N}\sum_{k=0}^{N-1}X_k e^{i 2 \pi kn/N}$ | $X_k = \sum_{n=0}^{N-1}x_n e^{-i 2 \pi kn/N}$ | Note |
|---|---|---|
| $x_n e^{i 2 \pi n\ell/N} \,$ | $X_{k-\ell}\,$ | Shift theorem |
| $x_{n-\ell}\,$ | $X_k e^{-i 2 \pi k\ell/N} \,$ | Shift theorem |
| $x_n \in \mathbb{R}$ | $X_k=X_{N-k}^*\,$ | Real DFT |
| $a^n\,$ | $\left\{ \begin{matrix} N & \mbox{if } a = e^{i 2 \pi k/N} \\ \frac{1-a^N}{1-a \, e^{-i 2 \pi k/N} } & \mbox{otherwise} \end{matrix} \right.$ | from the geometric progression formula |
| ${N-1 \choose n}\,$ | $\left(1+e^{-i 2 \pi k/N} \right)^{N-1}\,$ | from the binomial theorem |
| $\left\{ \begin{matrix} \frac{1}{W} & \mbox{if } 2n < W \mbox{ or } 2(N-n) < W \\ 0 & \mbox{otherwise} \end{matrix} \right.$ | $\left\{ \begin{matrix} 1 & \mbox{if } k = 0 \\ \frac{\sin\left(\frac{\pi W k}{N}\right)} {W \sin\left(\frac{\pi k}{N}\right)} & \mbox{otherwise} \end{matrix} \right.$ | $x_n$ is a rectangular window function of $W$ points centered on $n=0$, where $W$ is an odd integer, and $X_k$ is a sinc-like function (specifically, $X_k$ is a Dirichlet kernel) |
| $\sum_{j\in\mathbb{Z}} \exp\left(-\frac{\pi}{cN}\cdot(n+N\cdot j)^2\right)$ | $\sqrt{cN} \cdot \sum_{j\in\mathbb{Z}} \exp\left(-\frac{\pi c}{N}\cdot(k+N\cdot j)^2\right)$ | Discretization and periodic summation of the scaled Gaussian functions for $c>0$. |
Since either $c$ or $\frac{1}{c}$ is larger than one and thus warrants fast convergence of one of the two series, for large $c$ you may choose to compute the frequency spectrum and convert to the time domain using the discrete Fourier transform.

## Generalizations

### Representation theory

For more details on this topic, see Representation theory of finite groups#Discrete Fourier transform.

The DFT can be interpreted as the complex-valued representation theory of the finite cyclic group. In other words, a sequence of $n$ complex numbers can be thought of as an element of $n$-dimensional complex space $\mathbb{C}^n$ or equivalently a function $f$ from the finite cyclic group of order $n$ to the complex numbers, $\mathbb{Z}_n \to \mathbb{C}$. So $f$ is a class function on the finite cyclic group, and thus can be expressed as a linear combination of the irreducible characters of this group, which are the roots of unity. From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups. More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers), or the domain (a group other than a finite cyclic group), as detailed in the sequel.

### Other fields

Main articles: Discrete Fourier transform (general) and Number-theoretic transform

Many of the properties of the DFT only depend on the fact that $e^{-\frac{2 \pi i}{N}}$ is a primitive root of unity, sometimes denoted $\omega_N$ or $W_N$ (so that $\omega_N^N = 1$). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called number-theoretic transforms (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general).

### Other finite groups

Main article: Fourier transform on finite groups

The standard DFT acts on a sequence $x_0, x_1, \ldots, x_{N-1}$ of complex numbers, which can be viewed as a function $\{0, 1, \ldots, N - 1\} \to \mathbb{C}$. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions $\{0, 1, \ldots, N_1-1\} \times \cdots \times \{0, 1, \ldots, N_d-1\} \to \mathbb{C}.$ This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions $G \to \mathbb{C}$ where $G$ is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups.

## Alternatives

Main article: Discrete wavelet transform

For more details on this topic, see Discrete wavelet transform#Comparison with Fourier transform.

There are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include location information, only frequency information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform.
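To illustrate the "Other fields" generalization above, here is a minimal number-theoretic transform (a Python sketch; the modulus $p = 17$, length $N = 4$, and root $\omega = 4$ are illustrative choices of this sketch, with $\omega$ a primitive $N$-th root of unity mod $p$ since $4^2 = 16 \equiv -1 \pmod{17}$). The round-trip and convolution checks mirror the corresponding properties of the complex DFT.

````
# A naive O(N^2) number-theoretic transform (NTT) over Z/pZ.
p, N, omega = 17, 4, 4

def ntt(a, w=omega):
    return [sum(a[n] * pow(w, k * n, p) for n in range(N)) % p for k in range(N)]

def intt(A):
    inv_N = pow(N, -1, p)                          # modular inverse of N (Python 3.8+)
    return [x * inv_N % p for x in ntt(A, pow(omega, -1, p))]

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
assert intt(ntt(a)) == a                           # the transform round-trips
# The convolution property also carries over from the complex DFT:
conv = [sum(a[m] * b[(n - m) % N] for m in range(N)) % p for n in range(N)]
assert intt([x * y % p for x, y in zip(ntt(a), ntt(b))]) == conv
````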
## Notes

1. As a linear transformation on a finite-dimensional vector space, the DFT expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the $X_k$ can thus be viewed as coefficients of $x$ in an orthonormal basis.

## Citations

1. Cooley et al., 1969
2. T. G. Stockham, Jr., "High-speed convolution and correlation," in 1966 Proc. AFIPS Spring Joint Computing Conf. Reprinted in Digital Signal Processing, L. R. Rabiner and C. M. Rader, editors, New York: IEEE Press, 1972.
3. Massar, S.; Spindel, P. (2008). "Uncertainty Relation for the Discrete Fourier Transform". Physical Review Letters 100 (19). doi:10.1103/PhysRevLett.100.190401.
4. DeBrunner, Victor; Havlicek, Joseph P.; Przebinda, Tomasz; Özaydin, Murad (2005). "Entropy-Based Uncertainty Measures for $L^2(\mathbb{R}^n),\ell^2(\mathbb{Z})$, and $\ell^2(\mathbb{Z}/N\mathbb{Z})$ With a Hirschman Optimal Transform for $\ell^2(\mathbb{Z}/N\mathbb{Z})$". IEEE Transactions on Signal Processing 53 (8): 2690. Retrieved 2011-06-23.
5. Santhanam, Balu; Santhanam, Thalanayar S. "Discrete Gauss-Hermite functions and eigenvectors of the centered discrete Fourier transform", Proceedings of the 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2007, SPTM-P12.4), vol. III, pp. 1385–1388.
6. Akansu, Ali N.; Agirman-Tosun, Handan. "Generalized Discrete Fourier Transform With Nonlinear Phase", IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4547–4556, Sept. 2010.

## References

• Brigham, E. Oran (1988). The fast Fourier transform and its applications. Englewood Cliffs, N.J.: Prentice Hall. ISBN 0-13-307505-2.
• Oppenheim, Alan V.; Schafer, R. W.; Buck, J. R. (1999). Discrete-time signal processing. Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-754920-2.
• Smith, Steven W. (1999). "Chapter 8: The Discrete Fourier Transform". The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego, Calif.: California Technical Publishing. ISBN 0-9660176-3-3.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 822–848. ISBN 0-262-03293-7. Esp. section 30.2: The DFT and FFT, pp. 830–838.
• P. Duhamel, B. Piron, and J. M. Etcheto (1988). "On computing the inverse DFT". IEEE Trans. Acoust., Speech and Sig. Processing 36 (2): 285–286. doi:10.1109/29.1519.
• J. H. McClellan and T. W. Parks (1972). "Eigenvalues and eigenvectors of the discrete Fourier transformation". IEEE Trans. Audio Electroacoust. 20 (1): 66–74. doi:10.1109/TAU.1972.1162342.
• Bradley W. Dickinson and Kenneth Steiglitz (1982). "Eigenvectors and functions of the discrete Fourier transform". IEEE Trans. Acoust., Speech and Sig. Processing 30 (1): 25–31. doi:10.1109/TASSP.1982.1163843. (Note that this paper has an apparent typo in its table of the eigenvalue multiplicities: the +i/−i columns are interchanged. The correct table can be found in McClellan and Parks, 1972, and is easily confirmed numerically.)
• F. A. Grünbaum (1982). "The eigenvectors of the discrete Fourier transform". J. Math. Anal. Appl. 88 (2): 355–363. doi:10.1016/0022-247X(82)90199-8.
• Natig M. Atakishiyev and Kurt Bernardo Wolf (1997). "Fractional Fourier-Kravchuk transform". J. Opt. Soc. Am. A 14 (7): 1467–1477. doi:10.1364/JOSAA.14.001467.
• C. Candan, M. A. Kutay and H. M. Ozaktas (2000). "The discrete fractional Fourier transform". IEEE Trans.
on Signal Processing 48 (5): 1329–1337. doi:10.1109/78.839980.
• Magdy Tawfik Hanna, Nabila Philip Attalla Seif, and Waleed Abd El Maguid Ahmed (2004). "Hermite-Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices". IEEE Trans. Circ. Syst. I 51 (11): 2245–2254. doi:10.1109/TCSI.2004.836850.
• Shamgar Gurevich and Ronny Hadani (2009). "On the diagonalization of the discrete Fourier transform". Applied and Computational Harmonic Analysis 27 (1): 87–99. arXiv:0808.3281. doi:10.1016/j.acha.2008.11.003.
• Shamgar Gurevich, Ronny Hadani, and Nir Sochen (2008). "The finite harmonic oscillator and its applications to sequences, communication and radar". IEEE Transactions on Information Theory 54 (9): 4239–4253. arXiv:0808.1495. doi:10.1109/TIT.2008.926440.
• Juan G. Vargas-Rubio and Balu Santhanam (2005). "On the multiangle centered discrete fractional Fourier transform". IEEE Sig. Proc. Lett. 12 (4): 273–276. doi:10.1109/LSP.2005.843762.
• J. Cooley, P. Lewis, and P. Welch (1969). "The finite Fourier transform". IEEE Trans. Audio Electroacoustics 17 (2): 77–85. doi:10.1109/TAU.1969.1162036.
• F. N. Kong (2008). "Analytic Expressions of Two Discrete Hermite-Gaussian Signals". IEEE Trans. Circuits and Systems II: Express Briefs 55 (1): 56–60. doi:10.1109/TCSII.2007.909865.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 203, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.874117910861969, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/262415/for-x-mathbbrn-let-bx-r-denote-the-closed-ball-in-mathbbrnwit
# For $x\in\mathbb{R}^n$, let $B(x,r)$ denote the closed ball in $\mathbb{R}^n$ (with Euclidean norm) of radius $r$ centered at $x$

For $x\in\mathbb{R}^n$, let $B(x,r)$ denote the closed ball in $\mathbb{R}^n$ (with the Euclidean norm) of radius $r$ centered at $x$. Write $B=B(0,1)$. If $f,g:B\to\mathbb{R}^n$ are continuous functions such that $f(x)\neq g(x)$ for all $x\in B$, then which of the following are true?

1. $f(B)\cap g(B)=\varnothing$
2. There exists $\epsilon>0$ such that $\|f(x)-g(x)\|> \epsilon$ for all $x\in B$
3. There exists $\epsilon>0$ such that $B(f(x), \epsilon) \cap B(g(x), \epsilon)=\varnothing$ for all $x\in B$
4. ${\rm int}(f(B)) \cap {\rm int}(g(B))=\varnothing$, where ${\rm int}(E)$ denotes the interior of a set $E$

How can I solve this problem? Can anyone help?

- Is it supposed to read $f,g:\Bbb R^n\to\Bbb R^n$? Also, $f(x)$ and $g(x)$ are not sets, as far as I can tell here. What should $1.$ read? – Peter Tamaroff Dec 20 '12 at 2:27 Sorry for my mistakes. Now I have corrected it. – daichi Dec 20 '12 at 2:32

## 2 Answers

The assumption that $f,g$ are defined on $B(0,1)$ is entirely spurious. You can decide on the truth of these statements and get an idea of possible proofs and counterexamples by considering the case $f, g:[-1,1] \to \mathbb{R}$. Using this simplification, draw graphs for each case and decide whether the statements are true or false.

-

Edited to reflect the new question. (2) immediately implies (3). (Can you see why?) (1) is not necessarily true. If $n=1$, $f(x)=x$, $g(x)=x+1$, then $g(x)\neq f(x)$, $B=[-1,1]$ and $g(B)=[0,2]$, $f(B)=[-1,1]$. So the intersection is $[0,1]$. This also shows that (4) is false, since the intersection of the interiors is $(0,1)$. Finally, (2) is true: Consider the function $h:B\to\Bbb R$ given by $$h(x)=\|f(x)-g(x)\|.$$ Thus $h$ is strictly positive everywhere on $B$. $B$ is compact and $h$ is continuous, so $h$ attains its minimum (and its maximum). Pick $\epsilon=\frac{1}{2}\min\{h(x):x\in B\}$, say, so that the inequality in (2) is strict.

- Then all the 4 options are correct. Am I right? – daichi Dec 20 '12 at 3:26 Wait. This was a true/false type of question? If so, you should have mentioned this... I haven't thought about (1) that deeply, so I don't know if (1) is true yet. I felt like I should at least let you do that part (and not confuse you with an answer I would have to come up with on the spot). – proximal Dec 20 '12 at 3:37 Very sorry, I should have mentioned that. Yes, it's a true/false type question. – daichi Dec 20 '12 at 3:42 Please edit your question in that case. – proximal Dec 20 '12 at 3:44 I've merged our answers. Feel free to reverse it. – leo Dec 23 '12 at 21:26
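A quick numerical illustration of the counterexample in the second answer (a Python sketch; the grid resolution is an arbitrary choice): for $f(x)=x$ and $g(x)=x+1$ on $B=[-1,1]$, the minimum of $h$ is $1>0$, so (2) holds, while the ranges $f(B)=[-1,1]$ and $g(B)=[0,2]$ overlap, so (1) fails.

````
import numpy as np

B = np.linspace(-1.0, 1.0, 2001)   # a fine grid on the closed ball B = [-1, 1]
f, g = B, B + 1.0

h = np.abs(f - g)                  # h(x) = |f(x) - g(x)|
print(h.min())                     # 1.0 > 0: statement (2) holds, e.g. with eps = 1/2

# f(B) and g(B) intersect in [0, 1], so statement (1) fails:
print(max(f.min(), g.min()) <= min(f.max(), g.max()))   # True
````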
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503213763237, "perplexity_flag": "head"}
http://mathoverflow.net/questions/94512/understanding-zeta-function-regularization
## Understanding zeta function regularization

I attended a talk this morning on Ray-Singer torsion, in which Rafael Siejakowski introduced zeta function regularization in a compelling way. The goal is to define the determinant of a positive self-adjoint operator $A$ with "pure point spectrum" $0>\lambda_1>\lambda_2>\cdots$. The definition of the determinant is $\exp(-\zeta_A^\prime(0))$ where $\zeta_A$ is the zeta function $\zeta_A(s)=\sum_{i=1}^\infty (-\lambda_i)^{-s}$. This sum diverges in general, but it converges for values of $s$ with large enough real part, and we define it for other values of $s$ (including zero) by analytic continuation.

Why should this be related to the determinant? Well, in the finite dimensional case (the motivating case is when $A$ is the "combinatorial Laplacian"), then $\zeta_A(s)=\sum_{i=1}^N (-\lambda_i)^{-s}$ is a finite sum. In this case: $\zeta^\prime_A(s)=\sum_{i=1}^N -\ln (-\lambda_i)(-\lambda_i)^{-s}$ and $\zeta^\prime_A(0)=-\ln \prod_{i=1}^N(-\lambda_i)=-\ln \det A$.

This looks to me like an ad-hoc trick, indicating that I don't understand what is actually going on. The equation $\det(A)=\exp(-\zeta_A^\prime(0))$ (in the finite dimensional case an equation, in the infinite dimensional case a definition) equates two familiar mathematical quantities:

1. The determinant, which I can think of as a volume, as an action on a highest exterior power, or maybe most evocatively as the signed sum of weights of non-intersecting paths in a graph between "source" vertices $a_1,\ldots,a_n$ and "sink" vertices $b_1,\ldots,b_n$. See this blog post.
2. The Riemann zeta function, which I don't understand conceptually almost as well, but which is heavily studied and so is clearly important and natural.

Question: Is there a conceptual (hand-wavy is fine) explanation for zeta function regularization, and for how this expression in the zeta function is capturing the idea of a "determinant"? How is the derivation which I wrote above more than an ad-hoc trick? Is there a sense in which the derivative of a zeta function at zero heuristically calculates a signed sum of weights of non-intersecting paths, or something like that?

- 3 Where are all those minus signs coming from if $A$ is positive? – Qiaochu Yuan Apr 19 2012 at 14:52

## 4 Answers

Not a complete answer. First, here is an alternate derivation of the result in the finite-dimensional case which might be more enlightening. If $A$ is positive self-adjoint, we can write $A = \exp(L)$ for some self-adjoint $L$. This lets us define $$A^s = \exp(sL)$$ for all real $s$. The trace $$\text{tr}(A^s) = \sum_{i=1}^n \lambda_i^s = \zeta_A(s)$$ is then the zeta function associated to $A$ (I am getting rid of all of the minus signs). Now, for small $\epsilon$ we can write $$A^{s+\epsilon} = A^s A^{\epsilon} = A^s (1 + \epsilon L + O(\epsilon^2))$$ so it follows that $$\zeta_A'(0) = \text{tr}(L).$$ But Jacobi's identity $\det \exp M = \exp \text{tr } M$ gives $$\det A = \det \exp L = \exp \text{tr } L = \exp \zeta_A'(0)$$ and we conclude. So what conceptual significance can we attach to the above? Well, it seems to me like we should think of the map $s \mapsto A^s$ as a representation of the Lie group $\mathbb{R}$, so the zeta function is the character of the corresponding representation.
The derivative of the zeta function at zero gives the trace of the infinitesimal generator of this representation, $L$, which generates the abelian Lie algebra $\mathbb{R}$. And this is connected to the determinant of $A$ by Jacobi's identity. So I think most of what needs explanation is Jacobi's identity. I freely admit that I do not have a good conceptual explanation of Jacobi's identity (beyond the fact that it's obvious for diagonalizable matrices). In these two blog posts I attempted to meander towards a combinatorial proof of Jacobi's identity in the form $$\det (I - At)^{-1} = \exp \text{tr } \log (I - At)^{-1}$$ (where $A$ was the adjacency matrix of a graph) but didn't quite succeed. There is a combinatorial proof of Jacobi's identity in the literature due to Foata but I haven't gone through it in detail.

- I very much like this answer. Thank you! I'm more-or-less happy with Jacobi's identity popping up in this context, because it is how the Alexander polynomial (equivalent to Ray-Singer torsion for a knot complement) indeed arises as a quantum invariant (the Aarhus integral gives rise to exp tr of the equivariant linking matrix, whatever that means), so it fits well with my preconceptions. – Daniel Moskovich Apr 20 2012 at 1:26 @Tom: done. (More characters.) – Qiaochu Yuan Apr 20 2012 at 14:45

This is a variant of Qiaochu Yuan's answer. Suppose $A$ is some sort of differential operator. Then it is hard to think of things like the trace and determinant of $A$. But there is one quantity which is well behaved: the "heat trace" $Tr(e^{-tA})$. Of course the heat trace is connected to the solution of the heat equation $$\left(\frac{\partial}{\partial t} + A\right) u = 0.$$ So how can we get at the eigenvalues of $A$ if we know $Tr(e^{-tA})$? Suppose first that $A$ is finite dimensional. Then, $$Tr(e^{-tA}) = \sum_i e^{-t\lambda_i},$$ but it is somewhat inconvenient to extract $\sum \log \lambda_i$ directly from this. But if you plug in the identity $$\lambda^{-s} = \frac{1}{\Gamma(s)} \int_0^\infty e^{-t \lambda} t^s \frac{dt}{t}$$ you get $$\zeta_A(s) = \frac{1}{\Gamma(s)} \int_0^\infty Tr(e^{-tA}) t^s \frac{dt}{t} \qquad \qquad (*)$$ and then $\log \det A = \sum \log \lambda_i = -\zeta_A'(0)$. So in some sense the zeta function is almost forced on you if you want to extract the determinant from the heat trace. Now if $A$ is a differential operator, then the equation (*) makes sense, but you cannot directly set $s=0$ because the integral diverges at the limit $t=0$. However, using the well known asymptotics of the heat trace as $t \to 0$, one can subtract the leading order terms, and then the rest will converge. This allows one to define the determinant of $A$, and one can write an expression which does not involve the zeta function at all, but only the heat trace. In some sense this is the real mathematical definition of $\det A$. Otherwise how do we even know that the zeta function has an analytic continuation to $s=0$? So perhaps the zeta function regularization is just some shorthand for this argument.

- Here are some suggestions, until somebody more knowledgeable appears. 1.
Why does this definition make sense: Analytic continuation is unique, and hence if there is an analytic continuation to a neighbourhood of $0$, and the formula is valid in finite dimensions, then why not take it. Unfortunately, we have in general that $$\det (AB) \neq \det(A) \det(B).$$ 2. Why does it possibly generalize the theory of the Riemann zeta function: If you take the circle, the Laplace operator $-\frac{\partial^2}{\partial x^2}$ on $L^2[0,1]$ has eigenfunctions $\exp(2 \pi \textup{i} nx)$ for $n \in \mathbb{Z}$, and you actually will get (essentially) the Riemann zeta function as the spectral zeta function: $\zeta_A(s) = 2 (2\pi)^{-2s} \zeta(2s)$. 3. Why does it not: Certain properties are not shared by the general spectral zeta function. There is in general no functional equation relating $\zeta_A(s)$ to $\zeta_A(D-s)$ for some $D$, and there is no Euler product. But here is also a quote from page 20 of "Ten physical applications of spectral zeta functions" by Elizalde:

Actually, a universal definition for the determinant [...] is still missing.

Also, Ray and Singer refer in http://www.sciencedirect.com/science/article/pii/0001870871900454 to Seeley http://www.ams.org/mathscinet/search/publications.html?pg1=IID&s1=157950 , who has proven the regularity at $0$. He has a nice formula there: $$A^s = \frac{i}{2\pi}\int_{\Gamma} \lambda^s (A-\lambda)^{-1}\,d\lambda$$

- 2 It seems that this whole theory (of spectral zeta functions, etc) was developed by Bob Seeley, who is not quite getting all the credit he deserves. – Igor Rivin Apr 19 2012 at 16:31 2 It's not the first time, or is it? – Marc Palm Apr 19 2012 at 17:24 $\det(A B)=\det A \cdot \det B$ in case its commutator is zero, I think: $[A,B]=0$ – Mathman Apr 21 2012 at 13:56 In general, only the quotient of two functional determinants is defined inside zeta regularization, so the infinite constant cancels: $\frac{\det(I-Az)}{\det(I-Bz)}$. See Elizalde's book on zeta regularization theory :) It is the best book about the subject; I learned ZR from it. -
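As a numerical sanity check of Jacobi's identity $\det \exp M = \exp \operatorname{tr} M$, which the answers above lean on (a Python sketch; the use of SciPy's matrix logarithm and the particular random matrix are assumptions of this illustration):

````
import numpy as np
from scipy.linalg import logm

M = np.random.default_rng(0).standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # a positive-definite matrix

# Jacobi's identity: det A = exp(tr log A).
assert np.isclose(np.linalg.det(A), np.exp(np.trace(logm(A))))

# The finite-dimensional zeta function zeta_A(s) = sum lambda_i^s has
# zeta_A'(0) = sum log(lambda_i), so det A = exp(zeta_A'(0)) as in the answer.
lam = np.linalg.eigvalsh(A)
assert np.isclose(np.linalg.det(A), np.exp(np.log(lam).sum()))
````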
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9249244332313538, "perplexity_flag": "head"}
http://mathoverflow.net/questions/60417/will-a-random-walk-on-0-inf-tend-to-infinity/60419
## Will a random walk on [0, inf) tend to infinity? [closed]

Consider a random walk on $[0,\infty)$ where you start at 0. With probability $p = 0.5$, you increase by 1. With probability $(1-p) = 0.5$, you decrease by 1, but not below 0. As time goes to infinity, will your position tend to infinity? If not, to what finite value does it converge?

Edit: To be a bit more precise, what is the limit of the average position as time goes to infinity? If your position tends to infinity for $p = 0.5$, for which other probabilities $p$ is this true? (Clearly $p > 0.5$ will cause you to tend to infinity, so $p < 0.5$ is what I'm after.) What is the probability of being at position $x$ after an arbitrary number of steps?

I made a simple simulation to test the $p = 0.5$ case, and after 500 million iterations, it seems to tend to infinity, but I'd like a more solid explanation. Thanks!

- Random walks with barrier conditions ought to be covered in probability textbooks. My memory is hazy here, but have you tried Feller Volume I, or maybe even Grimmett and Stirzaker's book. – Yemon Choi Apr 3 2011 at 8:54 4 Maybe you should ask this on math.stackexchange.com; there are plenty of people on this website who can give you good answers, but they all seem to be ignoring your question because it's too elementary to belong on this website, and so it turns out that you're not getting that many good answers. To answer one of your questions, $$\mathrm{Pr}(\mathrm{at\ position\ }x) \cdot \sqrt{n} \rightarrow c \quad \text{as } n \rightarrow \infty$$ for some constant $c$, where $n$ is the number of steps taken. – Peter Shor Apr 3 2011 at 13:17

## 4 Answers

The symmetric random walk $(X_k)$ on $\mathbb{Z}$ is recurrent. Therefore, with probability one, you will visit $0$ infinitely often. The same is true for $(|X_k|)$, which is more or less your random walk (the more or less depends on what happens exactly at the origin). In other words, with probability one, the $\liminf$ is $0$. The $\limsup$ behaviour is the subject of the law of the iterated logarithm.

-

As it has been said: 1. if $p=\frac{1}{2}$ this is more or less the same thing as the absolute value of the standard symmetric random walk on $\mathbb{Z}$, which is recurrent. 2. if $p > \frac{1}{2}$, the law of large numbers immediately shows you that it tends to $+\infty$. 3. if $p < \frac{1}{2}$, you can even check the detailed balance equations and find the invariant distribution: the Markov chain is positive recurrent.

-

Let me try again. Let $R_n$ be the random variable denoting the longest contiguous run of heads for $n$ independent $p$-biased coin tosses. It is well known [1] that $E R_n\sim\log_{1/p}((1-p)n)$ plus small correction terms (the variance is $O(1)$). This means that $S_n=\sum_{i=1}^n X_i$ grows at least like $\log n$ for any fixed $p$, which proves almost-sure escape to $\infty$.

- 1 Why "This means that $S_n$"... ? – camomille Apr 3 2011 at 11:33 With high probability, after $n$ steps you'll see $O(\log n)$ contiguous heads, which will advance you $O(\log n)$ steps to the right. – Aryeh Kontorovich Apr 3 2011 at 11:45 Maybe I do not understand what you mean by "escape to $\infty$". If you mean that the walker goes arbitrarily far to the right with probability one, this is obvious.
If you mean that the walker tends to infinity with probability one, this is false (see my answer below). Actually, many things are known about such a walk; this is really very standard (and maybe this should not be discussed here actually). – camomille Apr 3 2011 at 11:49 I wasn't being precise. The random walk visits 0 and every other natural number infinitely many times, as has been said here. – Aryeh Kontorovich Apr 3 2011 at 12:22 All right, but your argument is a sophisticated way to prove that the walker can reach any point (this is just irreducibility). – camomille Apr 3 2011 at 12:41

For any $p>0$ and any finite $N$, with probability one, eventually your random walk will exceed $N$.

- True, but that's not quite what I'm after. – wjomlex Apr 3 2011 at 8:57 1 But that's not the same as saying one "almost surely tends to $+\infty$" ... – Yemon Choi Apr 3 2011 at 8:57 I'll elaborate. You are flipping a biased coin, with probability $p$ of heads. With probability 1, you will at some point observe a run of $N$ consecutive heads (each such run has probability $p^N$), which will necessarily take you up to at least $N$. – Aryeh Kontorovich Apr 3 2011 at 8:59 We understand what you mean, it's just that you're answering a different question :) – wjomlex Apr 3 2011 at 9:00 True, my answer does not imply almost-sure escape to infinity. Must think some more... – Aryeh Kontorovich Apr 3 2011 at 9:00
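For anyone who wants to reproduce the questioner's experiment, here is a minimal simulation sketch (Python; the walk lengths and number of runs are arbitrary choices). For $p = 1/2$ the average position grows on the order of $\sqrt{n}$, which is consistent with the answers above: the walk returns to $0$ infinitely often, yet its typical displacement still becomes arbitrarily large.

````
import random

def reflected_walk(steps, p=0.5):
    w = 0
    for _ in range(steps):
        w = max(w + (1 if random.random() < p else -1), 0)  # floor the walk at 0
    return w

for n in (100, 1000, 10000):
    runs = [reflected_walk(n) for _ in range(2000)]
    print(n, sum(runs) / len(runs))   # grows roughly like sqrt(2n/pi)
````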
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451568722724915, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/65576-help-needed-calculus-questions.html
# Thread:

1. ## Help needed with calculus questions

1. Determine the derivative of $F(x)=(\cos(2x-4))^3$ at $x=\pi/b$.
2. Determine $\frac{d}{dx}\,\frac{3x^4-3x}{3x^4+3x}$.
3. Compute 4x^2x^3+4dx.
4. An open rectangular box with volume 6 m^3 has a square base. Express the surface area of the box as a function $s(x)$ of the length $x$ of a side of the base.
5. A box with an open top is to be constructed from a rectangular piece of cardboard with dimensions $b=5$ in and $a=28$ in by cutting out equal squares of side $x$ at each corner and then folding up the sides. Find the maximum volume.
6. A spherical balloon with radius $r$ inches has volume $\frac{4}{3}\pi r^3$. Find a function that represents the amount of air required to inflate the balloon from a radius of $r$ inches to a radius of $r+1$ inches.
7. $\lim_{t \to 2} \frac{t^2-4}{t^3-8}$
8. Absolute max: $y=36-x$ on $(-6,6)$.
9. How many points of inflection are in the graph of $f(x)=12x^3+14x^2-7x-p$?
10. Estimate the area from 0 to 5 under the graph of $f(x)=81-x^2$ using 5 rectangles with right endpoints.

2. Would you type those up using LaTeX? There is a subsection with a tutorial. I will help more if you can do that, since I don't want to type them up and make sure that we are on the same page. I will tell you that for number nine you find $y''$, set it to zero, and solve to find the inflection points. Edit: I typed this up yesterday and it should help you find 10's answer; just change the interval, $\Delta x$, the summation, and where it says left use right.

a) Sketch the region R. Simply sketch $f(x)$ on the given interval. b) Partition $[0,2]$ into 4 subintervals and show the four rectangles that LRAM uses to approximate the area of R. Compute the LRAM sum without a calculator.

First you need to find the width of the 'subintervals'. We will call the width $\Delta x$. To find $\Delta x$ on the interval $[a,b]$ we use the following equation: $\Delta x = (b-a)/n$, where $n$ is the number of subintervals. You are doing an LRAM, so you use the left point when making your rectangles. To find the height, just insert the $x$ into $f(x)$ where you need to find the height. The area under the curve (what you are finding) is $\text{Area} = A_1 + A_2 + \dots + A_n$. To simplify this we can write it as a summation: $\text{Area} = \Delta x \sum_{j=0}^{3} f(x_j)$, where $x_0,\dots,x_3$ are the left endpoints of the four subintervals.

Let me know if that makes any sense.

3. I don't know how to use LaTeX because I'm new here!

4. Originally Posted by calculus-geeks09: I don't know how to use LaTeX because I'm new here! http://www.mathhelpforum.com/math-he...-tutorial.html
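For problem 10 specifically, here is the right-endpoint version of the rectangle sum described in post 2, as a small sketch (Python is my choice here, not the thread's):

````
def rram(f, a, b, n):
    # Right-endpoint Riemann sum for f on [a, b] with n rectangles.
    dx = (b - a) / n
    return dx * sum(f(a + (j + 1) * dx) for j in range(n))

# Problem 10: f(x) = 81 - x^2 on [0, 5] with 5 rectangles.
print(rram(lambda x: 81 - x**2, 0, 5, 5))   # 80 + 77 + 72 + 65 + 56 = 350
````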
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.864906907081604, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33333/what-do-we-actually-mean-when-we-say-that-matter-is-a-wave/33343
# What do we actually mean when we say that matter is a wave?

• What do we actually mean when we say that matter is a wave?
• What does the wavelength of this matter wave indicate? The idea of a particle behaving like a wave is kinda incomprehensible to me.
• Further, why is the wavelength inversely proportional to the momentum?

-

## 5 Answers

Though @Christoph and @poorsod cover the mathematical concepts, the basic meaning of attributing a wave nature to matter is not emphasized enough. It is not a matter wave in spacetime; it is a probability wave that is described by quantum mechanics. A probability tells me my chances of finding the particle at a particular $(x,y,z,t)$, and nothing more than that. That the probability has a wave solution, due to the nature of the quantum mechanical equations, does not make it into a mystical field or entity. It just says that potentially the behavior of matter in a measurement can have the attributes of a wave. That is the nature of probability functions: when we say that the probability of finding a classical particle with energy E follows a Gaussian distribution about E, we do not mean that the particle is really distributed in increments of E. We just estimate the probability of finding the value E when we measure the energy.

Further, why is the wavelength inversely proportional to the momentum?

Because it started as a conjecture consistent with the Heisenberg uncertainty principle, which is a linchpin of quantum mechanics and arises from its basic equations. There is ample experimental verification of this relation. The answer is: because the statement is consistent with experiments.

-

What do we actually mean when we say that matter is a wave?

We mean that particles like electrons and photons exhibit wave-like phenomena, such as superposition and diffraction. This is more properly known as wave-particle duality. The thought experiment that best illustrates this is the double-slit experiment, in which electrons behave as waves while propagating, and as particles when they arrive at the detector. The explanation is that our ideas of particle and wave are in fact ill-defined - every physical object is both a particle and a wave.

Edit 2 to expand upon Anna V's comment below: This idea is mathematically built-in to the modern formulation of quantum mechanics, which considers the wavefunction $\psi$ to be the fundamental mathematical object describing the particle. The wavefunction interacts with its environment similarly to how a light or water wave interacts with its environment. The wave-particle duality manifests itself when you measure it: the square of the wavefunction $|\psi|^2$ gives you the probability of finding your particle at a given position.

What does the wavelength of this matter wave indicate? The idea of a particle behaving like a wave is kinda incomprehensible to me.

You're in good company - when wave-particle duality was first demonstrated in Planck's calculation of the black-body spectrum, it caused quite a stir, as you can imagine. The wavelength of a particle has a precisely analogous role to the wavelength of the waves you are used to (like sound, light, etc) - for example, it is the characteristic length over which diffraction happens.

Why is the wavelength inversely proportional to the momentum?

This is another idea that is built-in to the mathematical formulation of quantum mechanics. As the uncertainty principle tells us, states in which momentum and position both have definite values don't exist.
However, states in which momentum has a definite value do exist, and as you might expect, in these states position has a totally indefinite value. I won't go into the detail because I guess it's above your level of knowledge, but it turns out that these states of well-defined momentum are plane waves $\psi = e^{i k x}$, where the momentum is $p = \hbar k = 2 \pi \hbar/\lambda$, and $\lambda$ is the wavelength.

Edit: OK, here is (some of) the detail. In the formalism of quantum mechanics, states in which an observable quantity (like momentum) has a well-defined value are the eigenstates of the operator associated with that observable. Then the allowed values of the observable are the eigenvalues of that operator. We're interested in the momentum operator, which is defined as $\hat{p} = - i \hbar \frac{\partial}{\partial x}$. So let's use the eigenvalue equation to find out what the $p$-states are: $$-i \hbar \frac{\partial \psi}{\partial x} = p \psi,$$ where $p$ here is a number (not an operator). This is a differential equation in $x$, which is solved by the plane-wave solution given above (substitute it in and try it!).

-

In quantum field theory, elementary particles are the normal modes of fundamental fields, so according to the standard model, matter can indeed be considered a wave. However, the so-called wave-particle duality is a property of any quantum system, and you can do double-slit experiments not only with photons or electrons, but also with $C_{60}$ buckyball molecules. Should we therefore consider buckyballs as excitations of a buckyball-field filling the whole of spacetime? Probably not. It's just that quantum systems exhibit properties not present in classical mechanics, but similar to ones familiar from classical wave optics. The wave-function is not a physical field like the electromagnetic field, though, but rather an object which lives in abstract phase space and is related to Hamilton's principal function of the Hamilton-Jacobi formalism. Goldstein's book on classical mechanics has a chapter on that (9-8 in the 2nd edition): The time-independent Hamilton-Jacobi equation is formally equivalent to the eikonal equation of geometrical optics, where Hamilton's characteristic function plays the role of phase and the energy the role of frequency. The relationship between momentum and wavelength follows from that. This wavelength is not the length of a physical wave, but that of a wave in phase space, where the wave fronts are surfaces of constant action.

- There's nothing wrong with considering a Buckyball field, except that it ceases to be elementary already at rather low temperatures, when the photons can excite the electrons and turn it into a quantum of an excited buckyball field.
– Ron Maimon Aug 3 '12 at 5:18

Thank you for your excellent answers; they were so helpful. But after posting this question I did a little bit of research on the net and got an excellent article from one of the forums or blogs whose names I don't remember. I thought of posting it here to ensure that I get it right and don't end up having misconceptions. I know I'm answering my own question, but please consider having a look at it.

In QM, a "wave" isn't what we normally imagine: something that moves up and down and moves in one direction, like water. It's just a function that evolves with time and has a (in general) different value at every point in space. The wave does not "exist" per se in physical space. It can be drawn (superimposed) on physical space, but that just means that it has a value at every point there. The wave associated with an electron shows the probability of finding it at a particular point in space. If an electron is moving, it will have a "hump" in its vicinity, which shows its probability at every point in time. This hump will move just like the electron does. When you observe the electron, you collapse the hump to a peak. This peak is still a wave, just narrowly confined so it looks like a particle.

Your issue is that you're trying to look at the "electron" and "wave" simultaneously. This isn't exactly possible. The wave is the particle. You can look at it as if you exploded the electron into millions of fragments and spread it out over the hump. There is a fraction of an electron at every point. The fraction corresponds to the probability of finding it there. At this point, there is no electron-particle. So there's nothing that's "waving". Of course, we never see a fraction of an electron, so these fellows clump together the minute you try to make an observation.

Quantum mechanics has a nice concept called wave-particle duality. Any particle can be expressed as a wave. In fact, both are equivalent. Exactly what sort of wave is this? It's a probability wave. By this, I mean that it tracks probabilities. I'll give an example. Let's say you have a friend, A. Now at this moment, you don't know where A is. He could be at home or at work. Alternatively, he could be somewhere else, but with lesser probability. So, you draw a 3D graph. The x and y axes correspond to location (so you can draw a map on the x-y plane), and the z axis corresponds to probability. Your graph will be a smooth surface that looks sort of like sand dunes in a desert. You'll have "humps" or dunes at A's home and at A's workplace, as there's the maximum probability that he's there. You could have smaller humps at other places he frequents. There will be tiny, but finite, probabilities that he's elsewhere (say, a different country). Now, let's say you call him and ask him where he is. He says that he's on his way home from work. So, your graph will be reconfigured so that it has "ridges" along all the roads he will most probably take. Now, he calls you when he reaches home. Now, since you know exactly where he is, there will be a "peak" with probability 1 at his house (assuming his house is point-size, otherwise there'll be a tall hump). Five minutes later, you decide to redraw the graph. Now you're almost certain that he's at home, but he may have gone out. He can't go far in 5 minutes, so you draw a hump centered at his house, with slopes outside. As time progresses, this hump will gradually flatten. So what have I described here? It's a wavefunction, or the "wave" nature of a particle.
The wavefunction can reconfigure and also "collapse" to a "peak", depending on what data you receive. Now, everything has a wavefunction. You, me, a house, and particles. You and I have a very restricted wavefunction (due to tiny wavelength, but let's not go into that), and we rarely (read: never) have to take wave nature into account at normal scales. But for particles, wave nature becomes an integral part of their behavior.

- 1 It is not a favorite view for me, since this collapse business is disorienting for me and imo misleading. Also it is a gross exaggeration to say that you and me have "a" wavefunction. Only if we assume we are an elementary particle with the mass of a human body, which is nonsense. We are composed of a decohered, i.e. classical for all intents and purposes, conglomerate of atoms and molecules. – anna v Aug 5 '12 at 6:39 I agree with @annav. I think this is a rather sloppy description of the wavefunction, and it does not account for the really important features of QM like interference. The Wikipedia pages I linked to in my answer, particularly the one for the double-slit experiment, are much more illuminating. – poorsod Aug 10 '12 at 22:40

I know you think you've got an answer, but let's give it another try. First of all, you should get familiar with the double-slit experiment. Don't just sort of memorize it, but puzzle over it until you can see the ramifications. What it's saying is that there is something that acts like a probability wave. Normal waves, like in water, work by interfering, that is, by reinforcing and/or canceling each other. We call the height of a wave its amplitude, which can be positive or negative at any given place and time. When two amplitudes hit each other and have the same sign, they add together to make a big wave. When they hit each other and have opposite sign, they subtract and make a small wave, or no wave at all (at that place and time). You knew all that.

A wave also has energy, which is proportional to amplitude squared, and it is always positive, right? If you square a positive number, the result is positive. If you square a negative number, the result is positive. That way, whether a wave at a particular place is "high" or "low", its energy is always positive.

OK, now here's the leap into quantum-land: There is a kind of wave whose energy at a place and time is equal to the probability that a particle exists at that place and time. That is the particle's wave function.

Back to the double-slit. A single particle is shot from the gun. It continually exists at a place and time, which is to say its wave function has a big lump of energy there. But the wave spreads out. Part of it goes through slit A, and part through slit B, and part gets lost. On the other side, the remaining two components of the wave travel forward and interfere, as waves do. So finally, at the screen, there are places where the waves cancel out, so there's no energy there, so there's no probability there, so there's no particle there. There are other places where the waves reinforce, so there's plenty of energy there, so there's plenty of probability there, so the particle is likely to be found there.

That's the key idea. You have waves of a "substance" where the energy in the wave is the same as the probability of something. That should make you think, because it means we don't actually know what's so. We only know possibilities, and those possibilities conspire among themselves.

-
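To see the wave arithmetic of the last answer in action, here is a tiny numerical sketch (Python with NumPy; the wavelength, slit spacing, and screen distance are made-up illustrative values): adding the two slit amplitudes and squaring gives the interference pattern, with zeros where the waves cancel (no probability) and maxima where they reinforce.

````
import numpy as np

wavelength, d, L = 1.0, 5.0, 100.0           # illustrative units
k = 2 * np.pi / wavelength

x = np.linspace(-30, 30, 1201)               # positions on the screen
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)        # path length from slit A
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)        # path length from slit B

psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # superpose the two amplitudes
intensity = np.abs(psi) ** 2                 # 0 where they cancel, up to 4 where they reinforce
````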
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564319849014282, "perplexity_flag": "head"}
http://mathoverflow.net/questions/65070?sort=oldest
## A construction of generators of discrete subgroups of SL(2,R)

I know about the geometrical method of constructing discrete subgroups of $SL(2,\mathbb{R})$ using the Lobachevsky plane (e.g. B.A. Dubrovin, A.T. Fomenko, S.P. Novikov, Modern Geometry --- Methods and Applications, Springer) via a fundamental polygon. Such constructions have many applications, and some relevant themes were already discussed on MO. I do not know if my question is appropriate here, but I would like to know the rather opposite thing: how to construct the matrices themselves explicitly. In the book mentioned above it was demonstrated only for the simple case of a $4g$-gon with angle sum $2\pi$. I mean: are there analytical equations or an algorithm for calculating the parameters of the matrices of discrete subgroups of $SL(2,\mathbb{R})$?

- Alex -- what is known about the subgroup for which you would like to compute a system of generators? – algori May 15 2011 at 21:48 algori: It is the subgroups described by the geometrical construction with a polygon on the Lobachevsky plane that I mentioned. I.e., there is some visual construction, but how to write it as matrices... – Alex 'qubeat' May 16 2011 at 11:45 Alex -- I still don't get it. What exactly is "some visual construction"? There may be several distinct Fuchsian groups with the same fundamental domain: one has to specify how edges are identified. On the other hand, once one has done that, then there are your generators. – algori May 16 2011 at 11:58 @algori: Yes, of course. I mean, there is a geometric construction with a domain and a way to identify edges, described e.g. in the book below. I want to know if there is some clear way to construct some example of matrices for that. – Alex 'qubeat' May 16 2011 at 12:38 Alex -- once one has two oriented segments of the same length in the hyperbolic plane (in this example, edges of the fundamental polygon), there is a unique hyperbolic isometry taking one to the other; one can write a matrix explicitly starting from the coordinates of the points. – algori May 16 2011 at 13:13

## 2 Answers

For a pleasant introduction that includes many beautiful pictures, I suggest the book "Indra's Pearls" by Mumford, Series, and Wright. They also give examples of explicit computation of the relevant matrices. Here's a link to the copy at Google books.

- 2 As an element of $SL_2\mathbb{R}$ is determined by where it sends three points, via the fact that it preserves cross ratios, this is an elementary, but tedious, exercise. The hard part is going to be coordinatizing your $4g$-gon. There is a great passage in Jakob Nielsen's long paper on transformations of surfaces where he gives a ruler and compass construction of the symmetric $4g$-gon. The actual transformations are easy from there. I did it when I was a graduate student in about 1980. There were lots of square roots involved. – Charlie Frohman May 16 2011 at 2:44 A truly great book on $SL(2,\mathbb{R})$ from an elementary perspective is Lester Ford's book "Automorphic Functions". Also Wilhelm Magnus's book "Discrete Groups" is very elementary and constructive. – Charlie Frohman May 16 2011 at 2:47 Finally, there is a theorem of Poincare that says if you have a convex hyperbolic polyhedron and identifications of the edges so that the angles add up to rational multiples of $\pi$ at the identified edges, then the group generated by the congruence transformations will be discrete.
If I am remembering it correctly. I am not sure if Poincare proved it, but John Millson was really interested in it at some point, and might have given a careful statement and proof. – Charlie Frohman May 16 2011 at 2:51

Thanks for the answer and comments. I got "Indra's Pearls" and Ford's book. – Alex 'qubeat' May 16 2011 at 17:38

A fairly detailed construction of arithmetic triangle groups can be found in Takeuchi's paper, Arithmetic triangle groups, J. Math. Soc. Japan, Volume 29, Number 1 (1977), 91-106.

- I took the liberty of replacing your link by a permanent one (the direct link copied from the browser's window can cause trouble and is not persistent - usually journals provide a permanent link somewhere on the article's page) and adding some bibliographical information. – Theo Buehler May 16 2011 at 22:11

Despite the finite number of such groups, it is really instructive. – Alex 'qubeat' May 23 2011 at 12:26
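To make the cross-ratio remark in the comments above concrete, here is the standard recipe, spelled out by this editor rather than taken from the thread: the unique Möbius transformation $T$ with $T(z_i) = w_i$ for given points $z_1, z_2, z_3$ and prescribed images $w_1, w_2, w_3$ is found by solving

$$\frac{(w-w_{1})(w_{2}-w_{3})}{(w-w_{3})(w_{2}-w_{1})} \;=\; \frac{(z-z_{1})(z_{2}-z_{3})}{(z-z_{3})(z_{2}-z_{1})}$$

for $w$ as a function of $z$. If both triples lie on $\mathbb{R}\cup\{\infty\}$ (the boundary of the upper half-plane) and the orientations agree, the resulting coefficients are real, and rescaling to determinant $1$ gives a matrix in $SL(2,\mathbb{R})$, determined up to sign. Turning the edge identifications of a fundamental polygon into such point triples (e.g. via the ideal endpoints of the geodesics carrying the edges) then yields the generator matrices explicitly.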
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301210045814514, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/161052/luhncalc-and-bpay-mod10-version-5
# LuhnCalc and bpay MOD10 version 5 [closed]

OK, so my maths is maybe not so good, but here is my question. I am using the following:

````
<?php
function LuhnCalc($number) {
    // reverse the digits so index 0 is the rightmost digit
    $chars = array_reverse(str_split($number, 1));
    // digits at odd 0-based positions (counting from the right) are kept as-is
    $odd = array_intersect_key($chars, array_fill_keys(range(1, count($chars), 2), null));
    // digits at even 0-based positions are doubled, subtracting 9 when the double exceeds 9
    $even = array_intersect_key($chars, array_fill_keys(range(0, count($chars), 2), null));
    $even = array_map(function($n) { return ($n >= 5) ? 2 * $n - 9 : 2 * $n; }, $even);
    $total = array_sum($odd) + array_sum($even);
    // smallest digit that brings the total up to a multiple of 10
    return ((floor($total / 10) + 1) * 10 - $total) % 10;
}
print LuhnCalc($_GET['num']);
?>
````

However, it seems that BPAY, which uses version 5 of MOD 10 (for which, for the record, I can't find any documentation), is not the same as MOD 10. The following numbers were tested: `2005, 1597, 3651, 0584, 9675`.

bPAY

````
2005 = 20052
1597 = 15976
3651 = 36514
0584 = 05840
9675 = 96752
````

MY CODE

````
2005 = 20057
1597 = 15974
3651 = 36517
0584 = 05843
9675 = 96752
````

As you can see, they do not match the BPAY numbers (apart, coincidentally, from the last one).

- Could you clarify what your question is - are you asking why two sets of numbers don't come out the same? Should we interpret your tables as having inputs in the left column and outputs in the right? What does your function actually do - for those of us who don't know PHP code - and what is it supposed to do (I'm guessing this)? Also, modules are an algebraic structure in mathematics, and are not really related to modular arithmetic despite the name. – anon Jun 21 '12 at 0:42

Yes, it was posted and it was suggested to post it on here; yes, I do need the numbers to match, and I need to know how to make them match. – RussellHarrower Jun 21 '12 at 1:49

If you want to make this a math question and you can generate more checksums yourself, you should generate many more checksums (and if possible related together, e.g. 2004, 2005, 2006, 2014, 2015, 2016, 2105, 3005, 1005, ...). As it is there is definitely not enough data to work with. – Generic Human Jun 23 '12 at 14:48

## closed as off topic by Will Jagy, sdcvvc, t.b., John Wordsworth, William Aug 29 '12 at 21:57

Questions on Mathematics Stack Exchange are expected to relate to math within the scope defined in the FAQ. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope.

## 2 Answers

Your code is a correct, if somewhat cumbersome, implementation of the Luhn check digit algorithm. However, as described by Nick Adams' answer to your cross-post of this question on Stack Overflow, the check digits used by BPay are not calculated using Luhn's method. Instead, what they appear to be using is a weighted sum modulo 10, with the weight of each digit given by its position in the number, starting at 1 for the leftmost digit and counting up. That is, if the digits of the original number are $a_1 a_2 \dots a_n$, the check digit will be $$c = (1 \cdot a_1 + 2 \cdot a_2 + \dotsb + n \cdot a_n) \bmod 10.$$

Here's one way to calculate this check digit in PHP code (which should give the same results as Nick Adams' code; a quick numerical sanity check appears at the end of this thread):

````
function BPayCheckDigit($number) {
    $n = strlen($number);
    $sum = 0;
    for ($i = 0; $i < $n; $i++) {
        // weight 1 for the leftmost digit, 2 for the next, and so on
        $sum += ($i + 1) * $number{$i};
    }
    return $sum % 10;
}
````

By the way, this check digit formula has a number of weaknesses compared to Luhn's method:

• Since the weights are counted from the left, the presence or absence of leading zeros affects the checksum.
Thus, you need to make sure that your numbers are actually stored and manipulated as strings, or pad them to the correct length before the checksum calculation.

• Unlike in the Luhn method, verifying the checksum requires treating the check digit separately from the other digits, rather than just including it in the calculation and checking that the result comes out zero. (That is, unless the length of the numbers just happens to be nine digits including the check digit, in which case the ninth digit is assigned the weight $9 \equiv -1 \bmod 10$ and the match does work out right.)

• For even-numbered digits, this checksum cannot detect the addition or subtraction of 5 from the digit. For digits whose weight is a multiple of 5, the checksum cannot detect the addition or subtraction of any even number. In particular, digits whose weight is a multiple of 10 are not checked at all (although I believe the checksum is not intended for numbers that long anyway)!

-

For me the obvious oddity here is in the two range declarations. It looks like you want to produce one array of odd and one array of even digits to intersect against. So you will want [1,3,5,7,9] and [0,2,4,6,8]. Instead you are producing only [1,3] and [0,2,4], for the range is limited by 4 = `count($chars)` both times. So I suspect what you want is

````
range(1, 9, 2)
````

and

````
range(0, 8, 2)
````

respectively.

EDIT: I did not run the code except in my head, and even there I couldn't come up with your final digits. As was said in the comments, you may need to clarify what you want to do so people can help you.

-
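As a cross-check of the weighted-sum formula in the first answer, here is a minimal re-implementation in Python (an editor's sketch, independent of the PHP above); it reproduces the BPAY column from the question exactly:

```python
def bpay_check_digit(number: str) -> int:
    # weight 1 for the leftmost digit, 2 for the next, and so on
    return sum((i + 1) * int(d) for i, d in enumerate(number)) % 10

# prints 2, 6, 4, 0, 2 -- matching 20052, 15976, 36514, 05840, 96752
for n in ["2005", "1597", "3651", "0584", "9675"]:
    print(n, bpay_check_digit(n))
```

Note that the input is kept as a string, so the leading zero of 0584 keeps the remaining digits in their correct weighted positions, exactly as the leading-zeros caveat in the first answer requires.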
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9129746556282043, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/8453/are-there-examples-in-classical-mechanics-where-dalemberts-principle-fails?answertab=oldest
# Are there examples in classical mechanics where D'Alembert's principle fails?

D'Alembert's principle suggests that the work done by the internal forces for a virtual displacement of a mechanical system in harmony with the constraints is zero. This is obviously true for the constraint of a rigid body, where all the particles maintain a constant distance from one another. It's also true for a constraining force where the virtual displacement is normal to it. Can anyone think of a case where the virtual displacements are in harmony with the constraints of a mechanical system, yet the total work done by the internal forces is non-zero, making D'Alembert's principle false?

-

## 2 Answers

Given a system of $N$ point-particles with positions ${\bf r}_1, \ldots , {\bf r}_N$; with corresponding virtual displacements $\delta{\bf r}_1, \ldots, \delta{\bf r}_N$; with momenta ${\bf p}_1, \ldots , {\bf p}_N$; and with applied forces ${\bf F}_1^{(a)}, \ldots , {\bf F}_N^{(a)}$, D'Alembert's principle states that
$$\tag{1} \sum_{j=1}^N ( {\bf F}_j^{(a)} - \dot{\bf p}_j ) \cdot \delta {\bf r}_j~=~0.$$

The total force
$${\bf F}_j ~=~ {\bf F}_j^{(a)} +{\bf F}^{(ec)}_j+{\bf F}^{(ic)}_j + {\bf F}^{(i)}_j + {\bf F}_j^{(o)}$$
on the $j$'th particle can be divided into five types:

1. applied forces ${\bf F}_j^{(a)}$ (that we keep track of and that are not constraint forces);
2. an external constraint force ${\bf F}^{(ec)}_j$ from the environment;
3. an internal constraint force ${\bf F}^{(ic)}_j$ from the $N-1$ other particles;
4. an internal force ${\bf F}^{(i)}_j$ (that is not an applied or a constraint force of type 1 or 3, respectively) from the $N-1$ other particles;
5. other forces ${\bf F}_j^{(o)}$ not already included in types 1, 2, 3 and 4.

Because of Newton's 2nd law ${\bf F}_j= \dot{\bf p}_j$, D'Alembert's principle (1) is equivalent to$^1$
$$\tag{2} \sum_{j=1}^N ( {\bf F}^{(ec)}_j+{\bf F}^{(ic)}_j+{\bf F}^{(i)}_j+{\bf F}_j^{(o)}) \cdot \delta {\bf r}_j~=~0.$$

So OP's question can essentially be rephrased as

Are there examples in classical mechanics where eq. (2) fails?

Eq. (2) could trivially fail if we have forces ${\bf F}_j^{(o)}$ of type 5, e.g. sliding friction, that we (for some reason) don't count as applied forces of type 1.

For a rigid body, to exclude pairwise contributions of type 3, one needs the strong Newton's 3rd law, cf. this Phys.SE answer. So if these forces fail to be collinear, this could lead to violation of eq. (2).

For internal forces of type 4, there is in general no reason that they should respect eq. (2). Example: Consider a system of two point-masses connected by an ideal spring. This system has no constraints, so there are no restrictions on the class of virtual displacements. It is easy to violate eq. (2) if we count the spring force as a type 4 force (a worked version of this example appears at the end of the thread below).

Reference: H. Goldstein, Classical Mechanics, Chapter 1.

--

$^1$It is tempting to call eq. (2) the Principle of virtual work, but strictly speaking, the principle of virtual work is just D'Alembert's principle (1) for a static system.

-

You can have instances where there is no local extremum of the action--for instance, take the lagrangian $L=m\left(\dot x ^{2}+\dot y^{2}\right)$ over the space defined by a crescent embedded in $\mathbb{R}^2$--then, even though the tips of the crescent are both perfectly good starting and ending points in your domain, there is no extremal path connecting them--it would have to be the straight line that leaves the domain of your configuration space.
But this is admittedly a contrived example.

-

I was asking about cases where D'Alembert's principle fails, not where the principle of least action fails - which is the application of D'Alembert's principle for holonomic constraints. – Larry Harson May 13 '11 at 16:26

@user2146: and thus, a subclass of D'Alembert's principle. – Jerry Schirmer May 13 '11 at 16:31
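To make the spring example from the first answer explicit (an elaboration of that answer, not part of the original thread): put the two masses on the $x$-axis at $x_1 < x_2$, let $\ell$ be the spring's natural length and $e = x_2 - x_1 - \ell$ its extension, so the internal spring forces are ${\bf F}^{(i)}_1 = +ke\,\hat{\bf x}$ and ${\bf F}^{(i)}_2 = -ke\,\hat{\bf x}$. Since the system is unconstrained, $\delta x_1 = -\delta$, $\delta x_2 = +\delta$ is an admissible virtual displacement, and

$$\sum_{j=1}^{2} {\bf F}^{(i)}_j \cdot \delta {\bf r}_j \;=\; (ke)(-\delta) + (-ke)(\delta) \;=\; -2ke\,\delta \;\neq\; 0 \qquad \text{whenever } e \neq 0,$$

so eq. (2) indeed fails once the spring force is counted as a type 4 force.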
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9124764204025269, "perplexity_flag": "head"}
http://amathew.wordpress.com/2011/03/10/the-site-of-g-sets-and-the-associated-category-of-sheaves/
# Climbing Mount Bourbaki Thoughts on mathematics March 10, 2011 ## The site of G-sets and the associated category of sheaves Posted by Akhil Mathew under algebraic geometry, category theory | Tags: Grothendieck topologies, representable functors, sheaves, sites | 1 Comment In the past, I said a few words about Grothendieck topologies and fpqc descent. Well, strictly speaking, I didn’t get very far into the descent bit. I described a topology on the category of schemes (the fpqc topology) and showed that it was a subcanonical topology, that is, any representable presheaf was a sheaf in this topology. This amounted to saying that if ${X' \rightarrow X}$ was a fpqc morphism of schemes, then to hom out of ${X}$ was the same thing as homming out of ${X'}$ such that the two pull-backs to ${X' \times_X X'}$ were the same. If I had gotten further, I would have shown that to give a quasi-coherent sheaf on ${X}$ (among other things) is the same as giving “descent data” of a quasi-coherent sheaf on ${X'}$ together with an isomorphism between the two pull-backs to ${X' \times_X X'}$ satisfying the cocycle condition. Maybe I’ll do that later. But there is a more basic “toy” example that I now want to describe of a site (that is, category with a Grothendieck topology) and the associated category of sheaves on it. 1. ${G}$-sets Our category ${\mathcal{C}}$ is going to be the category of left ${G}$-sets for a fixed group ${G}$; morphisms will be equivariant morphisms of ${G}$-sets. We are now going to define a Grothendieck topology on this category. For this, we need to axiomatize the notion of “cover.” We can do this very simply: a collection of maps ${\left\{U_i \rightarrow U\right\}}$ is called a cover if the images cover ${U}$. Now, fiber products of ${G}$-sets are calculated in the category of sets, or in other words the forgetful functor $\displaystyle G-\mathbf{set} \rightarrow \mathbf{Sets}$ commutes with limits (as it has an adjoint, the functor ${S \mapsto G \times S}$). Thus, taking pull-backs preserve the notion of covering, and it is easy to see the other axioms are satisfied too: if we have a cover of each of the ${U_i}$ (which cover ${U}$), then collecting them gives a cover of ${U}$. Similarly, an isomorphism is a cover. This is obvious from the definitions. 2. Representable presheaves So we indeed do have a perfectly good site. Now, we want a characterization of all the sheaves of sets on it. To start with, let us show that any representable functor forms a sheaf; that is, the topology is subcanonical. (In fact, this topology is the canonical topology, in that it is the finest possible that makes representable functors into sheaves.) Proposition 1 Any representable functor on ${G-\mathbf{set}}$ is a sheaf in the above topology. To see this, we have to show the following fact (in view of the fact that . If ${X' \rightarrow X}$ is a surjection of ${G}$-sets and we have a map ${X' \rightarrow R}$ (for some other ${G}$-set ${R}$) such that the two pull-backs $\displaystyle X' \times_X X' \rightarrow X'$ become equal, then we get a map ${X \rightarrow R}$. But if we are thinking of maps of sets, then any two points in ${X' }$ that map to the same thing in ${X}$ clearly go to the same place in ${R}$ by the above condition. So we can get a sequence $\displaystyle X' \twoheadrightarrow X \rightarrow R$ where the composite is the given map ${X' \rightarrow R}$. We only need to see that ${X \rightarrow R}$ is a homomorphism. 
But this is clear since ${X' \rightarrow R}$ is and ${X' \rightarrow X}$ is a surjective homomorphism. 3. The classification of sheaves Now that we have shown that representable presheaves on ${G-\mathbf{set}}$ are sheaves, we want to get the converse direction. We want to show that every sheaf is representable. In fact, we will show that there is a correspondence between sheaves and ${G}$-sets. Theorem 2 There is an equivalence of categories$\displaystyle \left\{\text{sheaves on } G-\mathbf{set}\right\} \simeq G-set$ that assigns to each ${G}$-set ${R}$ the sheaf ${X \mapsto \hom_G(X, R)}$. In fact, we are going to define the inverse functor. Given a sheaf ${\mathcal{F}}$, we consider ${\mathcal{F}(G)}$. Since ${G}$ acts on ${G}$ on the right by morphisms of left ${G}$-sets, ${\mathcal{F}(G)}$ is naturally a (left!) ${G}$-set. We are going to show that these two functors are inverse to each other. One direction is straightforward. If ${R}$ is a ${G}$-set, then the ${G}$-set of maps of left-${G}$ sets ${G \rightarrow R}$ is canonically isomorphic to ${R}$; if we have a map ${\phi: G \rightarrow R}$, we send it to ${\phi(1)}$. It is easy to check that for each element of ${R}$, we do indeed get such a map ${\phi}$, and that the left-${G}$-structure on ${\hom_G(G, R)}$ is the same as that on ${R}$. For the other, let ${\mathcal{F}}$ be a sheaf on ${G-\mathbf{set}}$. We want to construct a natural isomorphism $\displaystyle \mathcal{F}(X) \simeq \hom_G(X, \mathcal{F}(G)).$ It is clear that we have a map of ${G}$-sets from ${G \times |X|}$ (where ${G \times |X|}$ denotes the ${G}$-set ${G \times X}$ but with ${G}$ only acting on the first factor) into ${X}$. In fact, we have a surjection of ${G}$-sets $\displaystyle G \times |X| \twoheadrightarrow X.$ There is thus an exact (equalizer) sequence by sheafiness $\displaystyle \mathcal{F}(X) \rightarrow \prod_X \mathcal{F}(G) \rightrightarrows \prod_{T} \mathcal{F}( G)$ where ${T}$ is some set. We have used the fact that the fibered product of two copies of ${G}$ is always ${G}$ or ${\emptyset}$. From this, it is clear that there is a natural injective map from ${\mathcal{F}(X)}$ into the set of functions ${X \rightarrow \mathcal{F}(G)}$. These functions ${X \rightarrow \mathcal{F}(G)}$ are in fact ${G}$-equivariant because of the way ${G \times |X| \rightarrow X}$ was defined. Now we want to check that the map is surjective. For this, let us use a version of the “finite presentation trick.” We have seen that that for any ${G}$-set ${X}$, there is a coequalizer diagram $\displaystyle F' \rightrightarrows F \twoheadrightarrow X$ where ${F', F}$ are free ${G}$-sets. Applying ${\mathcal{F}}$ and ${\hom_G(-, \mathcal{F}(G))}$ turns this into two equalizer diagrams of sets with a morphism To see that both are equalizer diagrams, we use the fact that ${\mathcal{F}}$ is a sheaf and representable functors are sheaves. It is clear that the two rightmost vertical arrows are isomorphisms, so the leftmost one is an isomorphism too. This finishes the proof. From this, we see that giving a sheaf on ${G}$-sets equates to giving a ${G}$-set. Giving a sheaf of abelian groups on this site thus equates to giving an abelian group with a compatible action of ${G}$ on it, i.e. a ${G}$-module. With this in mind, we are going to get a toy example of sheaf cohomology on a site—namely, group cohomology. I learned this from the book by Tamme on Etale cohomology. 
I'm reading it, and I'm repeatedly struck by how clear and accessible the exposition is there (especially compared with other treatments like SGA, which I find somewhat intimidating).
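As a preview of that connection (my gloss, following Tamme, rather than part of the original post): if ${M}$ is a ${G}$-module and ${\mathcal{F}_M = \hom_G(-, M)}$ is the corresponding abelian sheaf, the one-morphism cover ${G \rightarrow \ast}$ of the terminal ${G}$-set has Čech complex

$\displaystyle \mathcal{F}_M(G) \rightarrow \mathcal{F}_M(G \times G) \rightarrow \mathcal{F}_M(G \times G \times G) \rightarrow \cdots,$

and since each ${G^{\times (n+1)}}$ (with the diagonal action) is a free ${G}$-set, ${\hom_G(G^{\times(n+1)}, M)}$ is exactly the group of homogeneous ${n}$-cochains on ${G}$ with values in ${M}$. The Čech complex of this cover is therefore the standard homogeneous complex computing the group cohomology ${H^n(G, M)}$.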
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 96, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553235769271851, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/284835/f-is-continuous-at-c-implies-f-has-a-limit-at-c-true?answertab=votes
# $f$ is continuous at $c$ $\implies$ $f$ has a limit at $c$. True?

Further to Another simple/conceptual limit question, where I was questioning David Brannan's assertion in his A First Course in Mathematical Analysis that $f(x)=\sqrt x,x\geq 0$ has no limit at $0$ (Example 2c if you type in page 184 in the box on http://www.scribd.com/doc/74564079/Mathematical-Analysis), I just noticed that in an earlier section he asserted that the same function is continuous on its domain, i.e. including at $0$ (Example 3 if you type in page 148 in the box).

Does this not violate the well-known theorem (Thm 2 if you type in page 185 in the box) implying that if $f$ is continuous at $c$, then $f$ has a limit at $c$? Is David Brannan contradicting himself (by setting up his definition of limits badly)?

EDIT: Thanks, Wisefool. It turns out that there is no contradiction in Brannan's assertions (that the square root function is continuous at 0 yet has no limit at 0) after all, but only because of his peculiar definition of "limit" and his analogously peculiar statement of the theorem relating continuity at a point to the limit at that point.

- I fail to see the problem. The function you mention is continuous on its domain and it does have a limit at x=0. – Ittay Weiss Jan 23 at 7:22

@IttayWeiss: As I understand it, the problem here is that the author of the book claims that there is no limit for $\sqrt{x}$ at $0$, while elsewhere he claims that for a continuous function the limit exists at every point of its domain. – Thomas E. Jan 23 at 7:25

Yes, Thomas, that's exactly the issue - thanks. – Ryan Jan 23 at 7:32

IMO this is exactly the kind of trouble you get into when you try to define functions like $\sqrt{x}$ over all of $\mathbb{R}$ and start talking about "defined" and "undefined" points, instead of making full use of topology and subspaces. – user7530 Jan 23 at 7:50

@user7530 It's simply David Brannan being clumsy in dumbing down the definition of limit (not letting the limit exist at $c$ when $c$ is an end-point of $f$'s domain) to suit whatever purpose he has. But I agree that the trouble run into is the same. – Ryan Jan 23 at 8:46

## 1 Answer

It is a simple matter of terminology: in that example (or in that section/chapter/book) a limit is understood to be a two-sided limit. $f$ has limit $l$ at $x_0$ if:

1. $f$ is defined in a (punctured) neighborhood of $x_0$;
2. for every $\epsilon>0$ there exists $\delta>0$ such that if $|x-x_0|<\delta$ then $|f(x)-l|<\epsilon$.

In that example, it is the first condition which fails. This isn't exactly standard, and personally I would have said that the function had a limit for $x\to 0$, but I can see the book's point...

EDIT: moreover, Thm 2 on page 185 speaks about a point $c$ contained in an open interval $I$ where the function is defined. There is no open interval of the real line containing $0$ where the function $\sqrt{x}$ is defined.

- Thanks for pointing out that Thm 2 mandates that the point in question isn't an end-point. I had missed that. – Ryan Jan 23 at 7:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507042169570923, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Data_assimilation
# Data assimilation

Data assimilation is the process by which observations are incorporated into a computer model of a real system. Applications of data assimilation arise in many fields of geosciences, perhaps most importantly in weather forecasting and hydrology. Data assimilation proceeds by analysis cycles. In each analysis cycle, observations of the current (and possibly past) state of a system are combined with the results from a numerical weather prediction model (the forecast) to produce an analysis, which is considered 'the best' estimate of the current state of the system. This is called the analysis step. Essentially, the analysis step tries to balance the uncertainty in the data and in the forecast. The model is then advanced in time and its result becomes the forecast in the next analysis cycle.

## Data assimilation as statistical estimation

In data assimilation applications, the analysis and forecasts are best thought of as probability distributions. The analysis step is an application of Bayes' theorem, and the overall assimilation procedure is an example of recursive Bayesian estimation. However, the probabilistic analysis is usually simplified to a computationally feasible form. Advancing the probability distribution in time would be done exactly in the general case by the Fokker-Planck equation, but that is unrealistically expensive, so various approximations operating on simplified representations of the probability distributions are used instead. If the probability distributions are normal, they can be represented by their mean and covariance, which gives rise to the Kalman filter. However, it is not feasible to maintain the full covariance because of the large number of degrees of freedom in the state, so various approximations are used instead.

Many methods represent the probability distributions only by the mean and impute some covariance instead. In the basic form, such an analysis step is known as optimal statistical interpolation. Adjusting the initial value of the mathematical model instead of changing the state directly at the analysis time is the essence of the variational methods, 3DVAR and 4DVAR. Nudging, also known as Newtonian relaxation or 4DDA, is essentially the same as proceeding in continuous time rather than in discrete analysis cycles (the Kalman-Bucy filter), again with an imputed simplified covariance. Ensemble Kalman filters represent the probability distribution by an ensemble of simulations, and the covariance is approximated by the sample covariance.

In addition to weather forecasting, other uses of DA include trajectory estimation for the Apollo program, GPS, and atmospheric chemistry.

## Weather forecasting applications

Data assimilation is a concept encompassing any method for combining observations of variables like temperature and atmospheric pressure into numerical models such as the ones used to predict weather. In weather forecasting there are two main types of data assimilation: three-dimensional (3DDA) and four-dimensional (4DDA). In 3DDA only those observations available at the time of analysis are used. In 4DDA, observations spread over a time window are included as well (thus, a time dimension is added).

### History of data assimilation in weather forecasting

The first data assimilation methods were called "objective analyses" (e.g., the Cressman algorithm).
This was in contrast to "subjective analyses", in which (in past practice) numerical weather prediction (NWP) forecasts were adjusted by meteorologists using their operational expertise. The objective methods used simple interpolation approaches, and thus were 3DDA methods.

Similar 4DDA methods, called "nudging", also exist (e.g. in the MM5 NWP model). They are based on the simple idea of Newtonian relaxation (Newton's second axiom). The idea is to add, to the right-hand side of the dynamical equations of the model, a term that is proportional to the difference between the calculated meteorological variable and the observed value. This term, which has a negative sign, "keeps" the calculated state vector closer to the observations. Nudging can be interpreted as a variant of the Kalman-Bucy filter (a continuous-time version of the Kalman filter) with the gain matrix prescribed rather than obtained from covariances.

The breakthrough in the field of data assimilation was achieved by L. Gandin (1963), who introduced the "statistical interpolation" (or "optimal interpolation") method. His work developed the previous ideas of Kolmogorov. That method is a 3DDA method and is a type of regression analysis which utilizes information about the spatial distributions of the covariance functions of the errors of the "first guess" field (previous forecast) and the "true" field. These functions are never known; however, different approximations were assumed. In fact, the optimal interpolation algorithm is a reduced version of the Kalman filtering (KF) algorithm, in which the covariance matrices are not calculated from the dynamical equations but are pre-determined in advance.

Attempts to introduce KF algorithms as a 4DDA tool for NWP models came later. However, this was (and remains) a very difficult task, since the full version of the KF algorithm requires the solution of an enormous number of additional equations (of order $N^2 \approx 10^{12}$, where $N = N_x N_y N_z$ is the size of the state vector, with $N_x \approx N_y \approx N_z \approx 100$ the dimensions of the computational grid). To overcome this difficulty, special kinds of KF algorithms (approximate or suboptimal KFs) for NWP models were developed. These include, e.g., the ensemble Kalman filter and the reduced-rank Kalman filters (RRSQRT) (e.g., Todling and Cohn, 1994).

Another significant advance in the development of 4DDA methods was the use of optimal control theory (the variational approach) in the works of Le Dimet and Talagrand (1986), based on the previous works of G. Marchuk, who was the first to apply that theory in environmental modeling. The significant advantage of the variational approaches is that the meteorological fields satisfy the dynamical equations of the NWP model and at the same time minimize the functional characterizing their difference from observations; thus, a problem of constrained minimization is solved. The 3DDA variational methods were developed for the first time by Sasaki (1958).

As was shown by Lorenc (1986), all the above-mentioned 4DDA methods are in some limit equivalent, i.e. under some assumptions they minimize the same cost function. However, in practical applications these assumptions are never fulfilled, the different methods perform differently, and generally it is not clear which approach (Kalman filtering or variational) is better. Fundamental questions also arise in the application of advanced DA techniques, such as convergence of the computational method to the global minimum of the functional to be minimised.
For instance, the cost function, or the set in which the solution is sought, may fail to be convex. The 4DDA method which is currently most successful[1][2] is hybrid incremental 4D-Var, where an ensemble is used to augment the climatological background error covariances at the start of the data assimilation time window, but the background error covariances are evolved during the time window by a simplified version of the NWP forecast model. This data assimilation method is used operationally at forecast centres such as the Met Office.[3][4]

### Future development in NWP

The rapid development of the various data assimilation methods for NWP models is connected with two main points in the field of numerical weather prediction:

1. Utilizing the observations currently seems to be the most promising chance to improve the quality of the forecasts at different spatial scales (from the planetary scale to the local city, or even street, scale) and time scales.
2. The number of different kinds of available observations (sodars, radars, satellites) is rapidly growing.

The question is: can the principal limit of the predictability of weather forecast models be overcome (and to what extent) with the help of data assimilation?

### Cost function

The process of creating the analysis in data assimilation often involves minimization of a "cost function". A typical cost function would be the sum of the squared deviations of the analysis values from the observations weighted by the accuracy of the observations, plus the sum of the squared deviations of the forecast fields and the analyzed fields weighted by the accuracy of the forecast. This has the effect of making sure that the analysis does not drift too far away from observations and forecasts that are known to usually be reliable.

1. 3D-Var

$$J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}_{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_{b}) + (\mathbf{y}-\mathit{H}[\mathbf{x}])^{\mathrm{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathit{H}[\mathbf{x}]),$$

where $\mathbf{B}$ denotes the background error covariance and $\mathbf{R}$ the observational error covariance. For linear $\mathit{H}$, the gradient is

$$\nabla J(\mathbf{x}) = 2\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_{b}) - 2\mathit{H}^{\mathrm{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathit{H}[\mathbf{x}])$$

2. 4D-Var

$$J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}_{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_{b}) + \sum_{i=0}^{n}(\mathbf{y}_{i}-\mathit{H}_{i}[\mathbf{x}_{i}])^{\mathrm{T}}\mathbf{R}_{i}^{-1}(\mathbf{y}_{i}-\mathit{H}_{i}[\mathbf{x}_{i}])$$

provided that $\mathit{H}$ is a linear operator (matrix).
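To make the 3D-Var cost function concrete, here is a minimal numerical sketch (an illustration with made-up toy numbers, not an operational setup): for linear $\mathit{H}$, setting the gradient above to zero gives the classic analysis update $\mathbf{x}_a = \mathbf{x}_b + \mathbf{B}\mathit{H}^{\mathrm{T}}(\mathit{H}\mathbf{B}\mathit{H}^{\mathrm{T}}+\mathbf{R})^{-1}(\mathbf{y} - \mathit{H}\mathbf{x}_b)$.

```python
import numpy as np

# Toy 3D-Var analysis step (illustrative values only).
x_b = np.array([1.0, 2.0])                 # background (forecast) state
B   = np.array([[0.5, 0.1],
                [0.1, 0.4]])               # background error covariance
H   = np.array([[1.0, 0.0]])               # observe only the first state component
R   = np.array([[0.2]])                    # observation error covariance
y   = np.array([1.8])                      # observation

# Minimizer of J(x) = (x-x_b)^T B^-1 (x-x_b) + (y-Hx)^T R^-1 (y-Hx):
K   = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                    # analysis state

# Sanity check: the gradient 2 B^-1 (x_a-x_b) - 2 H^T R^-1 (y-H x_a) vanishes.
grad = 2 * np.linalg.inv(B) @ (x_a - x_b) - 2 * H.T @ np.linalg.inv(R) @ (y - H @ x_a)
assert np.allclose(grad, 0.0)
print(x_a)
```

The analysis lands between the background and the observation, weighted by the two covariances; operational systems minimize the same functional iteratively rather than by this closed-form solve, since $\mathbf{B}$ is far too large to invert.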
## Other applications of data assimilation

Data assimilation methods are currently also used in other environmental forecasting problems, e.g. in hydrological forecasting. Basically, the same types of data assimilation methods as those described above are in use there. An example of chemical data assimilation using Autochem can be found at CDACentral.

Given the abundance of spacecraft data for other planets in the Solar System, data assimilation is now also applied beyond the Earth to obtain re-analyses of the atmospheric state of extraterrestrial planets. Mars is the first extraterrestrial planet to which data assimilation has been applied so far. Available spacecraft data include, in particular, retrievals of temperature and dust/water-ice optical thicknesses from the Thermal Emission Spectrometer onboard NASA's Mars Global Surveyor and the Mars Climate Sounder onboard NASA's Mars Reconnaissance Orbiter. Two methods of data assimilation have been applied to these datasets: an Analysis Correction scheme[5] and two Ensemble Kalman Filter schemes,[6][7] both using a global circulation model of the Martian atmosphere as the forward model. The Mars Analysis Correction Data Assimilation (MACDA) dataset is publicly available from the British Atmospheric Data Centre.[8]

Data assimilation is a part of the challenge for every forecasting problem. Dealing with biased data is a serious challenge in data assimilation; further development of methods to deal with biases will be of particular use. If there are several instruments observing the same variable, then intercomparing them using probability distribution functions can be instructive. Such an analysis is available online at PDFCentral, designed for the validation of observations from the NASA Aura satellite.

## References

• R. Daley, Atmospheric Data Analysis, Cambridge University Press, 1991.
• MM5 community model homepage
• ECMWF Data Assimilation Lecture Notes
• Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc (1997), Unified Notation for Data Assimilation: Operational, Sequential and Variational, Journal of the Meteorological Society of Japan, vol. 75, no. 1B, pp. 181-189.
• COMET module "Understanding Data Assimilation"
• Geir Evensen, Data Assimilation: The Ensemble Kalman Filter, Springer, 2007.
• John M. Lewis, S. Lakshmivarahan, and Sudarshan Dhall, Dynamic Data Assimilation: A Least Squares Approach, Encyclopedia of Mathematics and its Applications 104, Cambridge University Press, 2006 (ISBN 978-0-521-85155-8, hardback).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214504957199097, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2997/chaum-undeniable-signature-justification-for-probability-of-misleading-the-ver
# Chaum undeniable signature - justification for probability of misleading the verifier?

Can anyone explain to me in detail why the following statement holds true in the Chaum and van Antwerpen scheme for undeniable signatures?

The probability that a dishonest signer is able to successfully mislead the verifier in either verification or disavowal is 1/q, where q is the prime number in the signer's private key.

Chaum and van Antwerpen scheme:

1) Initialization

• Select two big primes $p$ and $q$ such that $p - 1 = 2q$
• Select a generator $g$ of the cyclic subgroup $G$ of order $q$ inside $Z_p^*$, i.e. an element $g$ such that $1 = g^q \bmod p$ and $1 \ne g^a \bmod p$ for all $0 < a < q$
• Choose a private key $k \in \{1, 2, \dotsc, q-1\}$
• Send to the Trusted Authority (PKI) the public scheme key $(y, g, p)$, where $y = g^k \bmod p$

2) Signature of a message $m \in G$

• Compute the signature $s = m^k \bmod p$

3) Verification of a signature $s$

The verification phase is characterised by a challenge/response procedure between the verifier (Victor) and the author of the signature (Sally):

Challenge:

• Victor chooses two random numbers $a,b \in \{1, 2, \dotsc, q-1\}$
• Victor computes the quantity $c = s^a y^b = s^a g^{kb} \bmod p$ and sends it to Sally

Response:

• Sally computes the response $r = c^{k^{-1}}$, where $k^{-1}$ is the number such that $kk^{-1} = 1 \bmod q$, and sends it back to Victor

Final verification:

• Victor checks whether $r = m^a g^b \bmod p$ holds; if it does, then the signature is authentic; otherwise either the signature was forged, or Sally is trying to deny her signature. Which of the two possibilities holds may be checked with a disavowal procedure that I'm not explaining, because it should not be of interest for this question.

For any further details about the scheme, read the relevant chapter of the Handbook of Applied Cryptography at page 476.

- @Illmari Karonen - Thanks for correcting my mistakes. I just figured out that I had been a bit confused about the group $G$, which is not $Z_q^*$ as I originally thought. – Matteo Jun 21 '12 at 13:55

@IlmariKaronen - that's exactly the point. The proofs of both Theorems 1 and 2 are really short, and I don't feel that they convince me completely! – Matteo Jun 21 '12 at 14:37

## 1 Answer

OK, let me try to expand on Chaum and van Antwerpen's proofs a bit. Please let me know if there's still something that doesn't convince you. I'll start by rephrasing Chaum and van Antwerpen's Theorem 1 using your notation:

Theorem 1: Even with infinite computing power, the probability that Sally can verify an invalid signature $s \ne m^k \bmod p$ is at most $1/q$.

To verify $s$, Victor chooses two random numbers $a$ and $b$ from $\mathbb Z_q = \{0, 1, \dotsc, q-1\}$ (see note below). However, the important bit is that Sally doesn't know $a$ and $b$; she only sees $c = s^a y^b \in G$. Now, it's not hard to see that every value of $c$ seen by Sally corresponds to $q$ possible $(a,b)$ pairs: in particular, for any $a$ and $c$, we may solve for $b$ as follows:
$$\begin{aligned} s^a y^b &= c \\ y^b &= s^{-a} c \\ b &= \operatorname{dlog}_y(s^{-a} c)\\ \end{aligned}$$
(This works because, in a cyclic group of prime order, the discrete logarithm is unique for any base not equal to 1; equivalently, $b \mapsto y^b$ is a bijection, and in fact a group homomorphism, from $\mathbb Z_q^+$ to $G$ for any $y \in G$, $y \ne 1$. The case $y = 1$ cannot occur, since $k \ne 0$ and $g \ne 1$ by construction.)
Now, assume that $s \in G$ and $m \ne 1$ (which Chaum and van Antwerpen's paper mandates). Then there exists some $x \in \mathbb Z_q$ such that $s = m^x$. If $x = k$, then the signature is valid, so an invalid signature must have $x \ne k$. I'll now show that, if $x \ne k$, then for each of the $q$ different $(a,b)$ pairs corresponding to the $c$ sent by Victor, Sally would have to reply with a different $r$ to have it accepted.

Like Chaum and van Antwerpen, I will do so using proof by contradiction. Specifically, assume that there exist two pairs $(a,b)$ and $(a^*,b^*)$ corresponding to the same $c$ and the same $r$. Then:
$$\begin{aligned} m^{xa} g^{kb} =& c = m^{xa^*} g^{kb^*} &&& m^{a} g^{b} =& r = m^{a^*} g^{b^*} \\ m^{xa} m^{-xa^*} =& g^{kb^*} g^{-kb} & \text{and} && m^{a} m^{-a^*} =& g^{b^*} g^{-b} \\ m^{x(a-a^*)} =& g^{k(b^*-b)} &&& m^{a-a^*} =& g^{b^*-b} \\ \end{aligned}$$
Taking the base-$g$ discrete logarithm and letting $\mu = \operatorname{dlog}_g(m)$, we get $\mu x(a-a^*) = k(b^*-b)$ and $\mu (a-a^*) = b^*-b$. Substituting the latter into the former, we get $\mu x(a-a^*) = k \mu (a-a^*)$, and thus $x = k$. But this is only possible if the signature is, in fact, valid!

Thus, we've shown that, if $s \ne m^k$, any value of $r$ that Sally returns can only be valid for one $(a,b)$ pair. But since there are $q$ possible such pairs, each of which is equally likely, Sally has only a $1/q$ chance of guessing the correct one.

Note: This proof involves a few corner cases in which the version of the Chaum–van Antwerpen scheme given in the Handbook of Applied Cryptography differs from that in Chaum and van Antwerpen's original paper. First, Chaum and van Antwerpen explicitly require that $m \ne 1$. This makes sense, since if $m = 1$, then $s = m^k = 1$ regardless of $k$! I assume this is most likely just an oversight in the Handbook, but it may be important if the message $m$ might be chosen by an adversary.

Second, the Handbook says that $a$ and $b$ should be chosen at random from $\{1,2, \dotsc, q-1\}$, whereas Chaum and van Antwerpen also allow $a = 0$ and $b = 0$. Admittedly, these are somewhat peculiar cases — in particular, $a = 0$ implies that $c$ (and thus $r$, if Sally is honest) is independent of $s$ (and thus $m$)! Even so, these cases don't actually present a weakness, since they each occur only with probability $1/q$, and, importantly, Sally can't tell when they occur. In fact, excluding $a = 0$ and $b = 0$ means that, at least in principle, Sally can exclude two of the possible $(a,b)$ pairs (or one, if $c=1$). Thus, the maximum probability of verifying an invalid signature may actually be as high as $1/(q-2)$. Of course, for practical values of $q$, that makes no real difference.

Also note that, in practice, one would usually use the compact representation recommended by Chaum and van Antwerpen, in which $G$ is mapped to $\{1, 2, \dotsc, q\} \subset \mathbb Z_p^*$ by the map $f(x) = \min(x, p-x)$, and the map $f$ is reapplied after any sequence of arithmetic operations in $\mathbb Z_p^*$. (This works because $p-1 \equiv -1 \bmod p$ is the generator of the order-2 subgroup of $\mathbb Z_p^*$.) Not only is this representation more compact than naively storing elements of $G$, but it makes it easy to map arbitrary messages to elements of $G \setminus \{1\}$ and avoids issues with bogus signatures outside $G$.

-
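To see a whole protocol run end to end, here is a toy Python sketch (an editor's illustration, not from the paper or the Handbook; the tiny parameters $p = 23$, $q = 11$, $g = 4$ are chosen only so the numbers are easy to follow, and `pow(k, -1, q)` needs Python 3.8+):

```python
import random

p, q = 23, 11              # toy primes with p - 1 = 2q; real use needs large primes
g = 4                      # generator of the order-q subgroup G of Z_p^* (the squares mod 23)

k = 7                      # Sally's private key, 1 <= k <= q - 1
y = pow(g, k, p)           # public key y = g^k mod p

m = 9                      # message, an element of G other than 1 (9 = 3^2 mod 23)
s = pow(m, k, p)           # Sally's signature s = m^k mod p

# Victor's challenge
a = random.randrange(1, q)
b = random.randrange(1, q)
c = (pow(s, a, p) * pow(y, b, p)) % p

# Sally's response r = c^(k^-1 mod q) mod p
r = pow(c, pow(k, -1, q), p)

# Final verification: a valid signature always passes
assert r == (pow(m, a, p) * pow(g, b, p)) % p
```

For an invalid $s$, Sally would instead have to guess which of the $q$ possible $(a,b)$ pairs Victor used in order to choose a passing $r$, which is exactly the $1/q$ bound of Theorem 1 above.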
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 106, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9253055453300476, "perplexity_flag": "head"}
http://mathoverflow.net/questions/16829/what-are-your-favorite-instructional-counterexamples/71726
## What are your favorite instructional counterexamples?

Related: question #879, Most interesting mathematics mistake. But the intent of this question is more pedagogical.

In many branches of mathematics, it seems to me that a good counterexample can be worth just as much as a good theorem or lemma. The only branch where I think this is explicitly recognized in the literature is topology, where for example Munkres is careful to point out and discuss his favorite counterexamples in his book, and Counterexamples in Topology is quite famous. The art of coming up with counterexamples, especially minimal counterexamples, is in my mind an important one to cultivate, and perhaps it is not emphasized enough these days.

So: what are your favorite examples of counterexamples that really illuminate something about some aspect of a subject? Bonus points if the counterexample is minimal in some sense, bonus points if you can make this sense rigorous, and extra bonus points if the counterexample was important enough to impact yours or someone else's research, especially if it was simple enough to present in an undergraduate textbook.

As usual, please limit yourself to one counterexample per answer.

- @Regenbogen - I am familiar with the proof that Selmer's curve has points everywhere locally but not globally. But that counterexample led many people to study the manner in which the Hasse Principle could fail. For example, there is the Brauer-Manin obstruction. However, Skorobogatov has found examples of curves with trivial Manin obstruction and everywhere local points but no global points, so the story is not finished... In my comment I was suggesting that someone more familiar with the current work might use this example. – Ben Linowitz Mar 2 2010 at 17:23

## 52 Answers

I like the Sorgenfrey line. It's finer than the metric topology on $\mathbb{R}$, and hereditarily Lindelöf, hereditarily separable, first countable, but not second countable. It's non-orderable, but generalised orderable, etc. It's a popular example for metrisation theorems, for instance. All its compact subsets are at most countable.

-
2) Let $R$ be an integral domain (also commutative with $1$), then for every multiplicative closed set of $R$, $S^{-1}R$ is an integral domain, hence for every $R_p.$ Does the converse hold? By the above example, it doesn't, since $(P(\mathbb{N}),+,\times)$ is not an integral domain. - 1 It may be worth noticing that this ring $R$ is nothing but $(\mathbb Z/2)^{\mathbb N}$ in disguise. Also, I am surprised with your statement that localizations $R_p$ are all isomorphic to $\mathbb Z/2$. – ACL Mar 17 2011 at 9:10 3 The prime ideals in this ring are the complements of the ultrafilters on `$\mathbb N$`, so the spectrum is the Stone-Cech compactification of the discrete space `$\mathbb N$`. – Andreas Blass Mar 17 2011 at 13:53 show 1 more comment Small's Example from noncommutative algebra... The triangular ring $T = \pmatrix{\mathbb{Z} & \mathbb{Q} \\ 0 & \mathbb{Q}}$ has the following properties: • It's right noetherian but not left noetherian • It's right hereditary but not left hereditary • The right global dimension is 1 but the left global dimension is 2 • This generalizes to give an example of a ring with right global dimension $n$ and left global dimension $n+1$ by replacing $\mathbb{Z}$ by $R$, a commutative noetherian domain of global dimension $n$, then replacing $\mathbb{Q}$ by $K = Frac(R)$ • A similar example gives a ring which is noetherian but neither left nor right Ore. Just take $R = \pmatrix{S & 0 \\ S & I}$ where $S = \pmatrix{\mathbb{Z} & 0 \\ \mathbb{Z}_p & \mathbb{Z}_p}$ and $I = \pmatrix{\mathbb{Z} & 0 \\ 0 & \mathbb{Z}_p}$ is an $S$-ideal. Having been trained to think in a commutative world, I found the existence of an example for any one of these to be surprising. The fact that they were all (basically) the same example is even more amazing. - As a counter-example for Fatou's lemma in measure theory: strict inequality can occure! Just take the measure space $\mathbb{N}$ with the counting measure and consider the functions \begin{equation} f_n(k) = \delta_{nk} \end{equation} Then the sum of $f_n$ is always $1$ while the pointwise limit of the $f_n$ will be the zero function having zero integral. If you have this counter-example then you do not need fancy measures and integrals at al to produce examples that in Fatou's lemma strict inequality may happen... - I occasionally use the following "counterexample" to unique factorization in Z in an introduction to math course: (1003)(1007)=(901)(1121). Once the students figure out what's going on, I think they learn something from it. - A standard result in introductory calculus classes is that, if a function has positive derivative on an open interval, then it's increasing there. Based on this, students tend to think that, if $f'(a)>0$, then $f$ must be increasing "near $a$." However, the example $f(x) = 2x^2\sin(1/x)+x$ (set $f(0)=0$) shows that this is quite false! - "Every finitely-branching tree with infinitely many nodes has an infinite branch" is constructively false, as witnessed by the following counterexample: http://math.andrej.com/wp-content/uploads/2006/05/kleene-tree.pdf Andrej Bauer's exposition (above) is especially nice; most textbooks take a far less direct route to the result, which makes it harder to see what's really going on past the level of "yeah, the proof is correct step-by-step." 
- Here is some simple counterexample in commutative algebra, which I found really cute when I first meet it: Let $k$ be a field, $A = k[X_{1},X_{2},X_{3}\ldots],$ $I = (X_{1}, X_{2}^{2}, X_{3}^{3},\ldots)$ and $R = A/I.$ Then $\text{Spec}(R)$ consists of one point (because $\text{rad}(I)$ is maximal ideal of $A$); in particular $\text{Spec}(R)$ is a noetherian space, and $\dim R = 0$; although $R$ is not noetherian ring (since $\text{nil}(R)^{n}\neq 0$ for every $n$). - 1 Nevertheless, I think that it's not obvious that there exist commutative rings with only one prime (not only with one maximal) ideal that are not noetherian. – ifk Apr 4 2010 at 21:35 2 @ifk: There are simpler examples of that: Consider the direct sum $R=k\oplus V$ of a field $k$ and an infinite dimensional vector space $V$, made into a ring so that $V$ is an ideal which squares to zero, $k$ and $V$ multiply as you expect, and $k$ is a subring (this is called a trivial extension, in some contexts) Then $R$ is commutative, has only one prime, and it is not noetherian. – Mariano Suárez-Alvarez Apr 5 2010 at 6:05 show 2 more comments The 5-cycle $C_5$ is a great counterexample. It's the smallest imperfect graph, it's self-complementary, it has chromatic number $>\Delta$, it has no stable set meeting every maximum clique and yet satisfies $\omega = \frac{2}{3}(\Delta+1)$, it has chromatic number $> \frac 1 2 (\Delta+\omega+1)$, meaning that Reed's $\chi, \omega, \Delta$ conjecture is somehow tight. And when you blow up each vertex into a clique or stable set of size $k$, the fun continues. For $k=3$ this gives you Catlin's counterexample to Hajos' Conjecture. - I'm shocked that noone has mentioned the Quaternion group! This thing is a counterexample to lots of basic questions you'd come up with while learning (finite) group theory. For example (although not really a counterexample to a specific question), if you know the semidirect product construction and Sylow theorems and are trying to classify groups of low order, the quaternion group is the first group you can't construct as a semidirect product of cyclic groups. This can be an entry point for the extension problem for groups and cohomology of groups. - The elliptic curve 960d1 in Cremona's tables is the smallest conductor example of an optimal elliptic curve with nontrivial Shafarevich-Tate group which is isogenous to an elliptic curve with trivial Shafarevich-Tate group. - Any classical counter-example to inversion of a limit and an integral, $f_n:[0,1[\to\mathbb{R} ; x\mapsto n^2 x^n$ say. Basic, but important to motivate the dominated convergence theorem. - $\textbf{Algebra.}$ • The symmetric group $S_{3}$ is the first $\text{non-abelian}$ group and also this group has a fascinating property that $S_{3} \cong \mathscr{I}(S_{3})$ where $\mathscr{I}$ denotes the $\text{Inner - Automorphism}$ group. • Example of a group which is $\textbf{isomorphic}$ to it's proper subgroup. $\mathsf{Answer:}$ Take $G=(\mathbb{Z},+)$ and take $H= 2\mathbb{Z}$. Then $G \cong H$. • Example of a free module in which a linearly independent subset cannot be extended to a basis. $\textbf{Answer.}$ As a $\mathbb{Z}$ module $\mathbb{Z}$ is free with basis $\{1\}$ and $\{-1\}$. Now $\{2\}$ is linearly independent over $\mathbb{Z}$. Note that $2$ cannot generate $\mathbb{Z}$ over $\mathbb{Z}$. If at all there is a basis $\mathscr{B}$ containing $2$, $\mathscr{B}$ should have atleast one more element, say $b$. 
We then have $b\cdot 2 - 2\cdot b =0$, i.e $\{2,b\}$ is linearly dependent subset of $\mathscr{B}$ which is absurd. $\textbf{Analysis.}$ • The function defined by $f(x) = x^{2} \cdot \sin\frac{1}{x}$ for $x \neq 0$ and $f(x) =0$ for $x=0$. This is example of a function whose derivatives are not continuous. • Set that is not Lebesgue measurable. Example given by Vitali. - This is an easy one, but one I've found useful in the past to keep in mind, and which I've passed on to many younger students who are new to homological algebra. These students sometimes struggle with the idea of a non-free projective module because if you're new to modules and you still think of them via analogy to vector spaces then it's natural to think direct summands of free modules should be free. A nice counter-example to keep in mind is the ring $\mathbb{Z}/6\mathbb{Z}$ and the projective but not free module $\mathbb{Z}/3\mathbb{Z}$ (projective because $\mathbb{Z}/6\mathbb{Z} \cong \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$) - A nice counterexample to the statement "$L^p$ convergence to $0$ implies pointwise a.e. convergence to $0$" is obtained by taking characteristic functions of length $\frac{1}{n}$ wrapping around the interval $[0,1]$. These integrate to $\frac{1}{n}$, but converge nowhere to $0$ because the harmonic series diverges. A counterexample to the converse is easier: just take $f_n = n(n+1)\chi_{[\frac{1}{n+1},\frac{1}{n}]}$. These integrate to $1$ and converge everywhere to $0$. - 1 $\chi_{[0,1/2]}, \chi_{[1/2,5/6]}, \chi_{[5/6,1] \cup [0,1/12]}, \chi_{[1/12,17/60]}...$. – Douglas Zare Jul 10 at 4:58 show 3 more comments From an earlier post: "The 8-element quaternion group. It can't be reconstructed from its character table (D_4 has the same one), and every subgroup is normal but it's not abelian." Although the character tables for the dihedral group D of order 8 and the quaternion group Q of order 8 may seem the same, they are not. Using Adams operations on the representation rings for D and Q, it is possible to show that these representation rings are different as rings with operations (either lambda or Adams operations). These Adams operations are defined in a paper by Aityah and Tall, where it is shown how to calculate them directly from character tables. - 3 It can't be possible to define lambda operations directly from the character table; you need to know some of the multiplication table as well. – Qiaochu Yuan Jul 31 2011 at 18:07 Assume given three projective systems `$\{A_n,\alpha_{nm}\}_{n\in\mathbb{N}}$`, `$\{B_n,\beta_{nm}\}_{n\in\mathbb{N}}$` and `$\{C_n,\kappa_{nm}\}_{n\in\mathbb{N}}$` of abelian groups (modules over some ring would equally do), endowed with arrrows $$0\rightarrow A_n\xrightarrow{f_n}B_n\xrightarrow{g_n}C_n\rightarrow 0$$ making the above sequences exact for every $n$ and satisfying the commutativity conditions $\beta_{nm}\circ f_n=f_m\circ\alpha_{nm}$ and $\kappa_{nm}\circ f_n=f_m\circ\beta_{nm}$. Then one can form the projective limits of the system to find a sequence $$0\rightarrow \varprojlim A_n\xrightarrow{f}\varprojlim B_n \xrightarrow{g}\varprojlim C_n$$ and a classical result says that, in order for this sequence to be right-exact, one needs the system $A_n$ to be stationary - meaning that $\alpha_{nm}(A_n)=\alpha_{n'm}(A_{n'})\subseteq A_m$ for all $n,n'\gg m$. 
Assume given three projective systems `$\{A_n,\alpha_{nm}\}_{n\in\mathbb{N}}$`, `$\{B_n,\beta_{nm}\}_{n\in\mathbb{N}}$` and `$\{C_n,\kappa_{nm}\}_{n\in\mathbb{N}}$` of abelian groups (modules over some ring would do equally well), endowed with arrows $$0\rightarrow A_n\xrightarrow{f_n}B_n\xrightarrow{g_n}C_n\rightarrow 0$$ making the above sequences exact for every $n$ and satisfying the commutativity conditions $\beta_{nm}\circ f_n=f_m\circ\alpha_{nm}$ and $\kappa_{nm}\circ g_n=g_m\circ\beta_{nm}$. Then one can form the projective limits of the systems to get a sequence $$0\rightarrow \varprojlim A_n\xrightarrow{f}\varprojlim B_n \xrightarrow{g}\varprojlim C_n$$ and a classical result says that this sequence is right exact provided the system $A_n$ is stationary - meaning that $\alpha_{nm}(A_n)=\alpha_{n'm}(A_{n'})\subseteq A_m$ for all $n,n'\gg m$. A classical counterexample showing that this condition cannot simply be dropped is to take `$A_n=p^n\mathbb{Z}$` with $\alpha_{nm}$ given by the inclusions, $B_n=\mathbb{Z}$ for all $n$ with identity maps $\beta_{nm}=\mathrm{id}$, and $C_n=\mathbb{Z}/p^n\mathbb{Z}$ with the obvious maps. The system $A_n$ is non-stationary because the image of $A_n$ in $A_m$ is `$p^n\mathbb{Z}\subseteq p^m\mathbb{Z}$`, which becomes smaller and smaller as $n\rightarrow \infty$; the corresponding sequence of projective limits is $$0\rightarrow 0\rightarrow \mathbb{Z}\rightarrow\mathbb{Z}_p$$ which is clearly not right exact. [Later remark]: After typing all this down, I noticed that everything can be found on Wikipedia at http://en.wikipedia.org/wiki/Inverse_limit Moreover, the stationarity condition quoted above, usually referred to as the Mittag-Leffler condition, is enough to prove right-exactness of $\varprojlim$ in Ab, but there is a counterexample due to Deligne and Neeman showing that in other categories this is not enough; see http://www.springerlink.com/content/aeem2yx884nnufxn/ -

Rotations $\rho_\alpha$ of the unit circle by an angle $2\pi\alpha$ are nice examples in the theory of discrete dynamical systems. If $\alpha=m/n$ is rational (in lowest terms, with $n>1$), then every point on the circle is periodic of prime period $n$ for $\rho_\alpha$, but $\rho_\alpha$ has no fixed points. This shows that Sharkovskii's theorem does not hold in general for continuous functions $f\colon X\to X$ if $X$ is not the real line or an interval of the real line. If $\alpha$ is irrational, then the orbit under $\rho_\alpha$ of every point of the circle is dense, but $\rho_\alpha$ has no sensitive dependence on initial conditions, and in particular is not chaotic. (A small numerical sketch of both behaviours appears below.) -

The Warsaw circle $W$ http://en.wikipedia.org/wiki/Continuum_%28topology%29 is a counterexample to quite a number of too-naive statements. Some observations: $W$ is weakly contractible (because a map from a locally path connected space cannot "go over the bad point"). There is a projection map $g:W \to S^1$ onto the usual circle. The point-preimages of $g$ are either points or, for a single point on $S^1$, a closed interval. Thus the assumptions of the Vietoris-Begle mapping theorem hold for $g$, proving that $g$ induces an isomorphism in Čech cohomology. Thus the Čech cohomology of $W$ is that of $S^1$, but $W$ has the singular homology of a point, by Hurewicz. These observations imply: 1. A map with contractible point-inverses need not be a weak homotopy equivalence, even if both source and target are compact metric spaces. Assuming that the base and the preimages are finite CW complexes does not help. 2. The Vietoris-Begle theorem is false for singular cohomology (in particular, the Wikipedia version of that theorem is not quite correct). 3. $W$ does not have the homotopy type of a CW complex (since it is weakly contractible but not contractible). 4. Even though the map $g$ is trivial on fundamental groups, it does not lift to the universal cover $p: \mathbb{R} \to S^1$, because $g$ cannot be nullhomotopic. Thus the assumption of local path connectivity in the lifting theorem is necessary. -
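Here is a quick numerical illustration of the two rotation behaviours (my own sketch; the circle is modelled as $\mathbb{R}/\mathbb{Z}$, so rotation by the angle $2\pi\alpha$ becomes $x \mapsto x + \alpha \bmod 1$):

```python
import numpy as np

def orbit(alpha, n_points, x0=0.0):
    # First n_points of the orbit of x0 under rotation by alpha on R/Z.
    return (x0 + alpha * np.arange(n_points)) % 1.0

# Rational angle 2/5: every orbit has least period 5, and there are no fixed points.
print(np.round(orbit(2 / 5, 7), 3))        # [0.  0.4 0.8 0.2 0.6 0.  0.4]

# Irrational angle: the orbit of 0 comes within any eps of any target point.
target, eps = 0.123, 1e-3
pts = orbit(np.sqrt(2) % 1.0, 100_000)
print((np.abs(pts - target) < eps).any())  # True: the orbit is dense
```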
My favorite counterexample is given in the short paper "Almost Commuting Unitaries" by R. Exel and T. Loring. Here is a little background: two $n \times n$ matrices $A$ and $B$ are said to be "almost commuting" if their commutator $[A, B]$ is small in some matrix norm. In the paper, the authors exhibit a family of unitary matrices $U_n$ and $V_n$ that almost commute, in the sense that given $\epsilon > 0$ there exists an $N \in \mathbb{N}$ with $\| [U_n, V_n] \| < \epsilon$ for all $n \geq N$, and yet there is an absolute constant $C > 0$ such that for any commuting $n \times n$ matrices $X, Y$ (i.e. $XY = YX$) we have $\max(\|X - U_n\|, \|Y - V_n\|) > C$. This was one of the first counterexamples in a research paper that I understood, because the authors' method of proof is very elementary: the most technical fact used is that the winding number of a closed curve around the origin is a homotopy invariant. (A numerical look at such a pair of matrices appears below.) -

In topology, the comb space is an example of a path connected space which is not locally path connected; see http://en.wikipedia.org/wiki/Comb_space. -

The example which shows that $e^{zw}$ need not equal $(e^{z})^{w}$ for complex exponents. Another one: a continuous function of a complex variable need not have a primitive in a region; the example is $f(z) = |z|^{2}$. -
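For the Exel-Loring answer, the unitaries in question are (up to normalization conventions, and assuming my identification is right) the classical "clock and shift" matrices going back to Voiculescu. The sketch below only illustrates the almost-commutation numerically, not the paper's winding-number obstruction; the commutator norm works out to $|\omega - 1| = 2\sin(\pi/n) \to 0$ for $\omega = e^{2\pi i/n}$.

```python
import numpy as np

def clock_and_shift(n):
    # Clock-and-shift unitaries of size n.
    omega = np.exp(2j * np.pi / n)
    U = np.diag(omega ** np.arange(n))  # "clock": diagonal of n-th roots of unity
    V = np.roll(np.eye(n), 1, axis=0)   # "shift": cyclic permutation matrix
    return U, V

for n in (5, 50, 500):
    U, V = clock_and_shift(n)
    comm = U @ V - V @ U
    # Spectral norm of the commutator equals |omega - 1| = 2*sin(pi/n).
    print(n, np.linalg.norm(comm, 2))
```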
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 191, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277982115745544, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/117341/list
I am not aware of a single Fenchel-Nielsen type parameterization as you ask for, and I'm not sure there can be one, because even on the boundary sphere the manner in which Fenchel-Nielsen coordinates are "re-adapted" does not produce a single coordinate system. The way that proof gets coordinates for the boundary sphere does start with a pants decomposition of the surface, as do Fenchel-Nielsen coordinates. And one does get a single coordinate system for the open subset of the boundary sphere consisting of measured foliations which have nontrivial intersection number with each pants curve. But then one has to patch in additional coordinate charts to cover the closed subset of measured foliations that have zero intersection number with one or more pants curves. The proof does demonstrate that these coordinates can be patched together in such an explicit way that one can see the homeomorphism to a sphere, but nonetheless one is still patching things up. Edit: I also recall that in one of his very earliest writings on this topic, Thurston gave a different proof, certainly not explicit, that the boundary is a sphere. Namely, from the existence of a pseudo-Anosov homeomorphism $\phi$, which acts with attractor-repeller dynamics, one gets a covering by two open sets homeomorphic to Euclidean space: for any neighborhood $U_+$ of the attracting fixed point and any neighborhood $U_-$ of the repelling fixed point there exists $n>0$ such that $\phi^n(U_-)$ and $U_+$ cover the boundary. It follows that the boundary is homeomorphic to a sphere. I posted this question to verify that the same is true for manifolds with boundary, so the same proof works for compactified Teichmüller space, as Thurston undoubtedly knew: the action of a pseudo-Anosov homeomorphism on compactified Teichmüller space also has attractor-repeller dynamics, so it is covered by two manifold-with-boundary coordinate charts and is therefore homeomorphic to a closed ball.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948649525642395, "perplexity_flag": "head"}