url (stringlengths 17–172) | text (stringlengths 44–1.14M) | metadata (stringlengths 820–832) |
---|---|---|
http://nrich.maths.org/192
|
### Homes
There are to be 6 homes built on a new development site. They could be semi-detached, detached or terraced houses. How many different combinations of these can you find?
### Number Squares
Start with four numbers at the corners of a square and put the total of two corners in the middle of that side. Keep going... Can you estimate what the size of the last four numbers will be?
### I'm Eight
Find a great variety of ways of asking questions which make 8.
# One Big Triangle
##### Stage: 1 Challenge Level:
Drag and drop these triangles so that the numbers that touch add up to $10$
Try to use all the small triangles and move them to fill in all the spaces in the big triangle.
You can also print out these triangles, cut them out and rearrange them to make the big triangle.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168350696563721, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/173571/compute-integral-int-66-frac4e2x-22e2x-mathrmd-x/173608
|
# Compute integral $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$
I want to solve $\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x$ but I get the wrong results:
$$\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x = \int_{-6}^6 \! \frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} \, \mathrm{d} x$$
$$= \left[ \frac{(4e^{4x} + 8e^{2x} + 4x)2}{e^{2x}} \right]_{-6}^6 = \left[ \frac{8e^{4x} + 16e^{2x} + 8x}{e^{2x}} \right]_{-6}^6$$
$$= (\frac{8e^{24} + 16e^{12} + 48}{e^{12}}) - (\frac{8e^{-24} + 16e^{-12} - 48}{e^{-12}})$$
$$= e^{-12}(8e^{24} + 16e^{12} + 48) - e^{12}(8e^{-24} + 16e^{-12} - 48)$$
$$= 8e^{12} + 16 + 48e^{-12} - (8e^{-12} + 16 - 48e^{12})$$
$$= 8e^{12} + 16 + 48e^{-12} - 8e^{-12} - 16 + 48e^{12}$$
$$= 56e^{12} + 56e^{-12}$$
Where am I going wrong?
What is the $f$, and where did it go? It seems to have magically disappeared in the middle of your working. – user22805 Jul 21 '12 at 11:06
It will be highly unlikely anyone will understand your question, let alone solve it, without you telling us what is that "f" appearing there... – DonAntonio Jul 21 '12 at 11:56
In the title, there's a 2 in the denominator; in the first line of the body, it's $e^{2x}$; in the second line, it's $e^{2x}$ on the left, and 2 on the right. I suggest you heed the comments, and figure out exactly what it is that you want to ask, and then edit your question accordingly. – Gerry Myerson Jul 21 '12 at 12:29
And if it's $e^{2x}$ in the denominator, then you have made the mistake of doing $\int(g/h)=(\int g)/(\int h)$. – Gerry Myerson Jul 21 '12 at 12:32
Both the "f" and the "2" (in the title) were typos, thanks for letting me know. – Quispiam Jul 21 '12 at 13:06
## 4 Answers
$$\int_{-6}^6 \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x= \int_{-6}^6 \frac{16e^{4x} + 16e^{2x}+ 4}{e^{2x}} \, \mathrm{d} x= \int_{-6}^6 \left(16e^{2x} + 16+ 4e^{-2x}\right) \mathrm{d} x$$ $$= \left[ 8e^{2x} + 16x-2e^{-2x} \right]_{-6}^6= 8(e^{12}-e^{-12}) + 16\cdot 12 -2(e^{-12}-e^{12})= 192+ 10 e^{12}-10 e^{-12}$$
You can check both indefinite and definite integral at WolframAlpha.
I am not sure where the mistake in your solution is (since I do not understand exactly what you have done), but most probably you have used $\int \frac{f(x)}{g(x)} \, \mathrm{d} x = \frac{\int f(x) \, \mathrm{d} x}{\int g(x)\, \mathrm{d} x}$, as suggested by Gerry's comment. This formula is incorrect.
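Either way, the closed form above is easy to cross-check by machine; a minimal SymPy sketch (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (4*sp.exp(2*x) + 2)**2 / sp.exp(2*x)

I = sp.integrate(integrand, (x, -6, 6))              # definite integral over [-6, 6]
closed_form = 192 + 10*sp.exp(12) - 10*sp.exp(-12)

print(sp.simplify(I - closed_form))   # 0, so the closed form matches
print(sp.N(I))                        # approximately 1.62774e6
```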
+1 Certainly the most direct approach! – alex.jordan Jul 21 '12 at 18:20
$$I:=\int_{-6}^6 \frac{(4e^{2x} + 2)^2}{e^{2x}}\ dx$$
Let $u=e^x, du = e^x \ dx$, leaving us with:
$$\int_{e^{-6}}^{e^{6}} \frac{\left( 4u^2 + 2 \right)^2}{u^3} \ du$$
Expand the numerator to get
$$\int_{e^{-6}}^{e^{6}} \frac{16u^4 + 16u^2 + 4}{u^3} \ du$$
Since the highest power in the numerator is greater than the highest power in the denominator, we have to do some long division. Upon dividing, you get:
$$\int_{e^{-6}}^{e^{6}} \frac{4}{u^3} + 16u + \frac{16}{u} \ du$$
Integrate to get:
$$8u^2-\frac{2}{u^2} + 16 \ln |u|$$
Back-substitute $u=e^x$ to get
$$8e^{2x} - 2e^{-2x} + 16 \ln|e^{x}|$$
Since $e^x$ is strictly increasing, we can drop the absolute value. Also, recall that $\ln{e^x} = x$, so you can simplify a bit.
$$8e^{2x} + 16x - 2e^{-2x}$$
Now, simply evaluate at your endpoints to find that
$$I \approx 1.628\times10^6$$
You forgot to change the limits of the integral to $e^{\pm 6}$ – Belgi Jul 21 '12 at 13:48
After doing the change of variables, shouldn't it be $u^2$ instead of $e^{2u}$ in the numerator? – Javier Badia Jul 21 '12 at 15:46
Yes, you are right. I made quite a few mistakes when I wrote this up right after I got out of bed, sorry. – Joe Jul 21 '12 at 16:08
+1. An observation: subbing $u=e^{2x}$ leaves a "tidier" rational function that is merely quadratic over quadratic. – alex.jordan Jul 21 '12 at 18:17
Yup. Turns out there are many ways to attack this integral, which is pretty common for textbook exercises (which I imagine is where this came from). – Joe Jul 21 '12 at 19:05
You had these steps ok: $$\int_{-6}^6 \! \frac{(4e^{2x} + 2)^2}{e^{2x}} \, \mathrm{d} x = \int_{-6}^6 \! \frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} \, \mathrm{d} x$$
After that, there are a number of choices. It looks like you forgot to integrate the solution.
You could do this: $$\int_{-6}^6 {\frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} dx}$$ $$= \int_{-6}^6 { \left( 16e^{2x} + 16 + 4e^{-2x} \right) dx}$$ $$= \left[ { 8e^{2x} + 16x - 2e^{-2x} } \right]_{-6}^6$$
The integration is directly above. Plugging in the values then gives:
$$\left( 8e^{2(6)} + 16(6) - 2e^{-2(6)} \right) - \left( 8e^{2(-6)} + 16(-6) - 2e^{-2(-6)} \right)$$ $$= \left( 8e^{12} + 96 - 2e^{-12} \right) - \left( 8e^{-12} -96 - 2e^{12} \right)$$ $$= 10e^{12} + 192 - 10e^{-12}$$
$$\approx 1.62774*10^6$$
To get the hyperbolic sine ($\sinh$), note that $$\sinh(x) = \frac{ e^{x} - e^{-x} } {2}$$ $$\sinh(12) = \frac{ e^{12} - e^{-12} } {2}$$ $$20\sinh(12) = 10 \left( e^{12} - e^{-12} \right)$$
So we have $$10e^{12} - 10e^{-12} + 192$$ $$= 20\sinh(12) + 192$$ $$= 4 \left( 5 \sinh(12) + 48 \right)$$
Note that this answer is also equal to $4 (48 + 5 \sinh(12))$ – Matt Groff Jul 21 '12 at 17:30
As for the error in your work, I see a problem in the following step:
$$\int_{-6}^6 \! \frac{16e^{4x} + 16e^{2x} + 4}{e^{2x}} \, \mathrm{d} x = \left[ \frac{(4e^{4x} + 8e^{2x} + 4x)2}{e^{2x}} \right]_{-6}^6$$
The denominator is not a constant, so you cannot do the integration like this. I would suggest dividing the numerator by the denominator. This amounts to the substitution which Joe suggests, but seems less complicated in my opinion.
Also, the 2 outside the parentheses in the numerator is incorrect.
The 2 outside the parenthesis came from integrating the denominator. As I wrote in a comment, the "rule" $\int(f/g)=(\int f)/(\int g)$ is being used. – Gerry Myerson Jul 22 '12 at 7:07
@GerryMyerson Ahhh...that makes sense, even if it is incorrect. – Code-Guru Jul 22 '12 at 17:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 31, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431596994400024, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/68470-independent-random-variables.html
|
# Thread:
1. ## Independent Random Variables
Question 1)
Note: iid = independent and identically distributed
I don't get the part underlined in red. Why are they independent? How can we prove that, or at least intuitively make sense of it?
Question 2) If X1,X2,...,X6 are independent random variables, then my textbook has a theorem saying that g1(X1), g2(X2), ..., g6(X6) are also independent, where the gi's are functions of a single random variable.
But how about f1(X1,X2,...,X5), f2(X6)? If X1,X2,...,X6 are independent, are any function of X1,...,X5 and any function of X6 independent?
Any help is greatly appreciated!
2. Originally Posted by kingwinner
Question 1)
Note: iid = independent and identically distributed
I don't get the part underlined in red. Why are they independent? How can we prove that, or at least intuitively make sense of it?
This is because for iid $\mathcal{N}(0,1)$ random variables $Y_1,\ldots,Y_n$, the random variables $\overline{Y}=\frac{1}{n}\sum_{i=1}^n Y_i$ and $U=\sum_{i=1}^n (Y_i- \overline{Y})^2$ are independent ("the empirical mean and variance of independent gaussian r.v. are independent").
I only rewrote what you said, but the point is that this is very specific to Gaussian random variables, and you probably have it in your course notes.
I don't have a simple intuitive way to think about it (this can be understood geometrically thanks to Cochran's theorem using orthogonal projections of Gaussian random vectors, but this doesn't make it much more intuitive; anyway I can develop if you wish). As for a proof, it requires knowledge about Gaussian random vectors; the idea is that the covariance of $\overline{Y}$ and $U$ is found to be zero, and that for Gaussian vectors this implies the independence; again I can develop if you're really interested, but you may have seen it in your course.
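As a quick empirical illustration of that fact (a simulation sketch, not a proof; zero sample correlation is only a weak sanity check for independence):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 6, 200_000

Y = rng.standard_normal((reps, n))            # each row: Y_1, ..., Y_n iid N(0, 1)
Ybar = Y.mean(axis=1)                         # empirical mean
U = ((Y - Ybar[:, None]) ** 2).sum(axis=1)    # sum of squared deviations

print(np.corrcoef(Ybar, U)[0, 1])             # close to 0
print(np.corrcoef(Ybar ** 2, U)[0, 1])        # also close to 0
```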
Question 2) If X1,X2,...,X6 are independent random variables, then my textbook has a theorem saying that g1(X1), g2(X2), ..., g6(X6) are also independent, where the gi's are functions of a single random variable.
But how about f1(X1,X2,...,X5), f2(X6)? If X1,X2,...,X6 are independent, are any function of X1,...,X5 and any function of X6 independent?
The answer is yes. It is probably in your textbook as well, but I don't know the English name, it is "indépendance par paquets" in French, like "independence by packets" or "packets independence" perhaps?
3. 1) This is my first statistics course in university and I haven't done anything on random vectors yet, so I won't be likely to understand the proof.
But there is a theorem in my textbook saying that for iid normal random variables, the "sample mean" and "sample variance" are independent random variables.
So assuming this fact, how can we prove the independence in the red underlined part? There we have (Ybar)^2 + (Y6)^2
2) Let X1,X2,...,X6 be independent random variables.
Then for example X1+(X2)^2 and X6 would be independent, and something like X1+X2+(X3)^4 and (X4)^3 + X5 would also be independent, am I right?
Thanks for helping!
4. Originally Posted by kingwinner
1) This is my first statistics course in university and I haven't done anything on random vectors yet, so I won't be likely to understand the proof.
But there is a theorem in my textbook saying that for iid normal random variables, the "sample mean" and "sample variance" are independent random variables.
So assuming this fact, how can we prove the independence in the red underlined part? There we have (Ybar)^2 + (Y6)^2
This is the right theorem. You get that $U$ and $\overline{Y}$ are independent. On the other hand, both $U$ and $\overline{Y}$ depend only on $Y_1,\ldots,Y_5$, so that they are independent of $Y_6$ (using your second question!).
More precisely, by this argument, the couple $(\overline{Y},U)$ is independent of $Y_6$.
This proves that the three r.v. $\overline{Y}$, $U$ and $Y_6$ are independent.
In particular, $\overline{Y}^2+(Y_6)^2$, as a function of two of them, is independent of the third one, namely $U$. This is again related to your second question.
Note: there's a slight subtlety in the argument. I said the couple $(\overline{Y},U)$ is independent of $Y_6$, because it is not sufficient to say that $X,Y,Z$ are pairwise independent (i.e. $X$ independent of $Y$ and $Z$, and $Y$ independent of $Z$) in order to conclude that $X,Y,Z$ are independent. But if $(X,Y)$ is independent of $Z$ and $X$ is independent of $Y$, then $X,Y,Z$ are independent... If this is your first course in probability, don't worry too much about this anyway.
2) Let X1,X2,...,X6 be independent random variables.
Then for example X1+(X2)^2 and X6 would be independent, and something like X1+X2+(X3)^4 and (X4)^3 + X5 would also be independent, am I right?
You are! More generally, if you gather independent random variables in several groups, then the functions of random variables in different groups are independent.
5. So we have:
$\overline{Y}$ and U independent (by theorem)
$\overline{Y}$ and Y6 independent (f(Y1,Y2,...Y5) and g(Y6) are independent)
U and Y6 independent (h(Y1,Y2,...Y5) and k(Y6) are independent)
Does this imply $\overline{Y}$, U, and Y6 are independent?
6. Originally Posted by kingwinner
So we have:
$\overline{Y}$ and U independent (by theorem)
$\overline{Y}$ and Y6 independent (f(Y1,Y2,...Y5) and g(Y6) are independent)
U and Y6 independent (h(Y1,Y2,...Y5) and k(Y6) are independent)
Does this imply $\overline{Y}$, U, and Y6 are independent?
No, confer to my note. This proves that $\overline{Y},U,Y_6$ are pairwise independent, but not that they are independent.
However, the couple $(\overline{Y},U)$ is a function of $Y_1,\ldots,Y_5$, so that it is independent of $Y_6$. In addition, $\overline{Y}$ and $U$ are independent. From there, it follows rigorously that the three r.v. are independent. I show you why: Suppose $X,Y,Z$ are real r.v. such that $(X,Y)$ is independent of $Z$ and $X$ is independent of $Y$. Then for any (measurable) subsets $A,B,C$ of $\mathbb{R}$, $P(X\in A,Y\in B,Z\in C)=P((X,Y)\in A\times B, Z\in C)$ $=P((X,Y)\in A\times B)P(Z\in C)=P(X\in A)P(Y\in B)P(Z\in C)$.
To see why "pairwise independence" and "independence" are not equivalent: consider two independent dices, $X$ is the parity of the first dice (0 if even, 1 if odd), [tex]Y[/Math] is the parity of the second dice, and $Z$ is the parity of the sum of the results. After a thought, you'll understand why $X$ and $Z$ are independent. In the same way, $Y$ and $Z$ are independent. At last, it is straightforward that $X$ and $Y$ are independent. But $X,Y,Z$ can't be independent: $Z$ is a function of $X$ and $Y$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 66, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313773512840271, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/18510/list
|
Revision 2 (added 230 characters in body)
This problem seems pretty hard. Let $G$ be a bipartite graph with bipartition $(A,B)$. One easy case to consider is if each vertex in $A$ has degree $k$ and we seek a maximum $(1, k)$ matching. For each $a \in A$, if we let $N(a)$ be the set of neighbours of $a$, then we seek a maximum size subfamily $\mathcal{S}$ of
{ $N(a) : a \in A$ },
such that any two members of $\mathcal{S}$ are disjoint. I believe this problem is polynomially equivalent to maximum-clique, so even in this easy case your problem is still hard. However, maximum-clique is polynomially solvable for perfect graphs, so perhaps this is a partial answer to your question.
Another way to go is to look for good approximation algorithms, say by adapting approximation algorithms for maximum-clique.
Edit. A general comment is that if we restrict to a class of graphs with bounded tree-width, then your problem is indeed polynomial. This works for many NP-hard problems.
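To make the "easy case" reformulation concrete, here is a tiny brute-force sketch (hypothetical neighbourhood sets, exponential time, only to illustrate the combinatorial object):

```python
from itertools import combinations

# Hypothetical neighbourhoods N(a) for the degree-k vertices a in A (here k = 3).
neighbourhoods = [{1, 2, 3}, {4, 5, 6}, {3, 4, 7}, {7, 8, 9}]

def max_disjoint_subfamily(sets):
    """Largest subfamily of pairwise-disjoint sets, by brute force."""
    for r in range(len(sets), 0, -1):
        for combo in combinations(sets, r):
            if all(a.isdisjoint(b) for a, b in combinations(combo, 2)):
                return list(combo)
    return []

print(max_disjoint_subfamily(neighbourhoods))
# three pairwise-disjoint neighbourhoods, i.e. a maximum (1, 3)-matching of size 3
```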
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453223943710327, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/293828/what-group-does-mathbbg-m-denote
|
# What group does $\mathbb{G}_m$ denote?
What group does $\mathbb{G}_m$ denote? I saw it used here.
## 1 Answer
$\mathbb G_m$ is an algebraic group: the multiplicative group (of a field). That is: for any field $F$, you obtain a group $\mathbb G_m(F)$, where the elements are tuples of elements of $F$ that satisfy certain polynomial conditions (with polynomials not depending on $F$) and the group operation is given by polynomials (that do not depend on $F$). In fact, you can take $\mathbb G_m(F)=\{(x,y)\in F^2\mid x y-1=0\}$ and define $(x,y)\cdot (x',y')=(xx',yy')$. It seems that the author prefers to describe the group rather as a subgroup of $GL_2$ (or in fact $SL_2$).
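A toy illustration of this presentation (a sketch over $\mathbb Q$, using Python's `fractions`; any commutative ring would do):

```python
from fractions import Fraction

# G_m(Q) modeled as pairs (x, y) with x*y = 1, multiplied componentwise.
def in_Gm(p):
    return p[0] * p[1] == 1

def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

g = (Fraction(3), Fraction(1, 3))
h = (Fraction(-2), Fraction(-1, 2))

print(in_Gm(g), in_Gm(h), in_Gm(mul(g, h)))   # True True True
print(mul(g, h))                              # (Fraction(-6, 1), Fraction(-1, 6))
```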
In fact, $F$ need not even be a field but just any commutative ring... – Zhen Lin Feb 3 at 18:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930009663105011, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-algebra/88950-quick-basis-question.html
|
# Thread:
1. ## Quick basis question
Suppose V is a vector space of dimension n, that T is a linear transformation on V and that there exists $v \in V$ such that $\{v, Tv, ..., T^{n-1}(v)\}$ is a basis of V.
Then clearly $T^{n}(v)$ can be written as a linear combination of the basis elements, but can we find this linear combination explicitly? I suspect not...
2. Originally Posted by Amanda1990
Suppose V is a vector space of dimension n, that T is a linear transformation on V and that there exists $v \in V$ such that $\{v, Tv, ..., T^{n-1}(v)\}$ is a basis of V.
So that T has rank n and is an invertible matrix.
Then clearly $T^{n}(v)$ can be written as a linear combination of the basis elements, but can we find this linear combination explicitly? I suspect not...
This would depend on the "characteristic equation" of T, $|T- \lambda I|= 0$. Since every linear transformation satisfies its own characteristic equation, that gives an $n^{th}$ degree polynomial for T and that can be solved for $T^n$. That's not quite what you are asking but it's the best I can think of.
3. Originally Posted by Amanda1990
Suppose V is a vector space of dimension n, that T is a linear transformation on V and that there exists $v \in V$ such that $\{v, Tv, ..., T^{n-1}(v)\}$ is a basis of V.
Then clearly $T^{n}(v)$ can be written as a linear combination of the basis elements, but can we find this linear combination explicitly? I suspect not...
Here's a simple example, with n=3. Let T be the linear transformation of $\mathbb{R}^3$ given by the matrix $\begin{bmatrix}0&0&\alpha\\ 1&0&\beta\\ 0&1&\gamma\end{bmatrix}$. Let $e_1,\;e_2,\;e_3$ be the vectors $\begin{bmatrix}1\\0\\0\end{bmatrix}$, $\begin{bmatrix}0\\1\\0\end{bmatrix}$, $\begin{bmatrix}0\\0\\1\end{bmatrix}$ in the standard basis. If $v = e_1$ then $Tv=e_2$ and $T^2v=e_3$. So $\{v, Tv,T^2(v)\}$ is a basis. But $T^3v = Te_3 = \begin{bmatrix}\alpha\\\beta\\\gamma\end{bmatrix}$, which is an arbitrary vector. So you are right to think that there is no explicit way to determine it.
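A quick numerical check of this example (a NumPy sketch with arbitrarily chosen $\alpha,\beta,\gamma$):

```python
import numpy as np

alpha, beta, gamma = 2.0, -1.0, 5.0
T = np.array([[0, 0, alpha],
              [1, 0, beta],
              [0, 1, gamma]])

v = np.array([1.0, 0.0, 0.0])                # v = e_1
basis = np.column_stack([v, T @ v, T @ T @ v])
print(np.linalg.matrix_rank(basis))          # 3: {v, Tv, T^2 v} is a basis

print(np.linalg.matrix_power(T, 3) @ v)      # [ 2. -1.  5.], i.e. (alpha, beta, gamma)
```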
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519606232643127, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/138924/how-to-motivate-the-axioms-for-the-inner-product/139222
|
# How to motivate the axioms for the inner product
Typically, one doesn't just write down lists of axioms and then sees if there are enough interesting examples that satisfy them; they evolve over time, usually from a couple of very important/interesting examples that are then generalized.
For the vector space axioms, for example, it is pretty easy to motivate them because they crop up everywhere and are easily spotted, so it is "natural" (as natural as mathematics can be...) to write them down in an abstract fashion and say "now we are just going study just what follows from these axioms".
But for the axioms of a certain class of maps from a pair of vector spaces to (to make it simple) $\mathbb{R}$, namely the inner product, I don't find their motivation satisfying at all. From what I read in some books and Wikipedia everything boils down to saying: 1) It is a geometric fact that in - for simplicity - $\mathbb{R}^{2}$ the equation $$\left\langle x,y\right\rangle =\left\Vert x\right\Vert \left\Vert y\right\Vert \cos\theta\quad\quad\left(1\right),$$
holds, where $\theta$ is the angle between $x$ and $y$, and $\left\langle \cdot,\cdot\right\rangle$ is defined as the dot product.
2) $\left\langle \cdot,\cdot\right\rangle$ has the properties of being symmetric, linear in each argument and positive definite.
3) Conclusion: We should abstractly study symmetric, bilinear and positive definite maps $V\times V\rightarrow\mathbb{R}$, where $V$ is a vector space.
For me, 1) and 2) are far from enough to justify 3), since
$\bullet$ for other important examples of maps (and vector spaces $V$), the relation $\left(1\right)$, which motivated the abstract definition of the inner product, isn't applicable at all: it isn't intuitively clear what $\left\langle \cdot,\cdot\right\rangle$ and $\theta$ should be for these examples, so we cannot verify $\left(1\right)$ for them and observe that in all these examples the LHS has the properties listed in 2), which would consolidate our belief that we have truly carved out an important class of mappings worth studying in the abstract. Consider e.g. $V=C\left[a,b\right]$ and $$\left(x,y\right)\mapsto\int_{a}^{b}x\left(t\right)y\left(t\right)dt.$$
$\bullet$ there are a ton of other properties the dot product $\left\langle \cdot,\cdot\right\rangle$ has. Why not study maps that satisfy some other geometrically intuitive properties besides the ones in 2)?
So what I think I'm searching for is a better motivation of the axioms of the inner product, or more examples (that are qualitatively different from one another) satisfying $\left(1\right)$.
Nota bene: Trying to motivate the axioms of the inner product by their history didn't bring me much clarity: all I could find after some googling was that the definition of the dot product came from the definition of the quaternions (see History of dot product and cosine), but going from there to defining inner products abstractly seems a bit of a stretch to me.
There's a standard exercise which shows that you can recover an inner product $\langle x, y \rangle$ from the norm $\langle x, x \rangle$ it defines. Norms abstract Euclidean distance, so I think this is a reasonably good motivation because inner products are essentially linear-algebraic in nature whereas quadratic forms are a priori more complicated. – Qiaochu Yuan Apr 30 '12 at 15:40
The standard positive definite bilinear form is the one you want to study for the geometry of Euclidean space, but several of the ones that are not positive definite do correspond to non-Euclidean geometries (e.g. Minkowski space-time), so they are worth studying also. Loosely (and also vaguely and inaccurately) speaking, a bilinear form determines "angles" and also which version of the Pythagorean theorem is going to hold in the geometry. – rschwieb Apr 30 '12 at 15:44
@QiaochuYuan Ok, norms are easy to motivate. But I can't see how that, together with the fact that under certain conditions (parallelogram law etc.) I can recover an inner product from the norm, motivates its axioms. I also didn't understand the last part of your second sentence. Could you please expand? – temo Apr 30 '12 at 17:20
@rschwieb I couldn't really follow you, I think, but isn't talking about inner products in some abstract vector space a lot more general than talking about non-Euclidean geometries? So shouldn't there also be a less "Euclidean/non-Euclidean geometry"-style motivation for it? (Sorry if my level of knowledge is too low to make more of the comment.) – temo Apr 30 '12 at 17:23
@temo I was only thinking about non-Euclidean geometries on $\mathbb{R}^n$ at the time, and not geometries in general. Bilinear forms will give rise to geometries on the vector space. – rschwieb Apr 30 '12 at 17:49
## 1 Answer
Symmetry, bilinearity, and positive definiteness are exactly the properties used to prove the Cauchy-Schwarz inequality. Well, there are a zillion proofs of the Cauchy-Schwarz inequality; I mean the one that proceeds by observing $0\le \|x-ty\|^2$ for all $t\in\mathbb R$, expanding to obtain a quadratic in $t$, and concluding that the discriminant of that quadratic is nonpositive (and then you fiddle with definiteness to get the equality case).
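Written out, that argument runs as follows (for a real inner product space and $y\ne 0$):

$$0\le \|x-ty\|^2=\langle x-ty,\,x-ty\rangle=\|x\|^2-2t\langle x,y\rangle+t^2\|y\|^2\quad\text{for all }t\in\mathbb R,$$
$$\text{so the discriminant of this quadratic in }t\text{ satisfies }\;4\langle x,y\rangle^2-4\|x\|^2\|y\|^2\le 0,\;\text{ i.e. }\;|\langle x,y\rangle|\le\|x\|\,\|y\|.$$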
In other words, an inner product is just a map for which that proof is correct.
We want to obtain the Cauchy-Schwarz inequality in other spaces because it's a cornerstone of the linear-algebraic treatment of Euclidean geometry — you use it to prove the triangle inequality, to show that orthogonal projections are metric projections (which gets you everything you want to know about tangent planes to spheres), etc. (The equation (1) is part of all that: in this treatment, it's essentially the definition of angle. You need Cauchy-Schwarz to show that it's well-defined.)
Ok, but I think there are proofs of the CS inequality that don't use the positive definiteness (or some other property) of the inner product. How come the inner product is defined as the class of maps for which your mentioned proof works, and not as the maps for which these other proofs work? – temo May 1 '12 at 11:07
My answer is intended to explain why someone would abstract these three properties and consider them worthy of further study; I thought this is what you were asking for. My answer isn't intended to show that this is the only good way to axiomatize the dot product. You want to know why we don't try other treatments? No reason; go ahead and try. Maybe there's interesting things down those paths too. – Steven Taschuk May 1 '12 at 16:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9555025100708008, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/54251/could-quantum-mechanics-work-without-the-born-rule?answertab=oldest
|
# Could quantum mechanics work without the Born rule?
Slightly inspired by this question about the historical origins of the Born rule, I wondered whether quantum mechanics could still work without the Born rule. I realize it's one of the most fundamental concepts in QM as we understand it (in the Copenhagen interpretation) and I know why it was adopted as a calculated and extremely successful guess, really. That's not what my question is about.
I do suspect my question is probably part of an entire field of active research, despite the fact that the theory seems to work just fine as it is. So have there been any (perhaps even seemingly promising) results with other interpretations/calculations of probability in QM? And if so, where and why do they fail? I've gained some insight on the Wikipages of the probability amplitude and the Born rule itself, but there is no mention of other possibilities that may have been explored.
The many worlds interpretation (MWI), Bohmian mechanics, and dynamical collapse theories all dispense with the Born rule as a postulate. In all three theories the subjective appearance of the Born rule is explained as a consequence of other postulates. – Dan Stahlke Feb 18 at 15:12
## 3 Answers
There is a paper called Ruling Out Multi-Order Interference in Quantum Mechanics that, I think, answers your question in the negative (within a certain bound anyway). The authors show that the Born rule implies quantum interference comes only in pairs of possibilities (second order interference), and that by relaxing the Born rule one would expect higher order interference terms in probability calculations.
The authors conduct a three-slit photon experiment and find that the magnitude of the third order interference is less than $10^{-2}$ of the expected second order interference.
+1, that's a very interesting paper. A question though: would any relaxation of the Born rule imply higher order interference terms? Or could there potentially be some other non-obvious definition of probability that yields very similar behaviour to that of the Born rule? – Wouter Feb 18 at 7:55
In the final 2 paragraphs they discuss nonlinear extensions of QM and consequences for any generalization beyond the Born rule, and I think that the message is yes, you could potentially come up with some new non-obvious way of doing probability that naturally suppresses higher order interference without explicitly using the Born rule, but there could be deep consequences like the need to describe quantum states differently and/or modify Schrodinger's equation, all while maintaining agreement with established experimental results. Have fun rebuilding modern physics :) – cb3 Feb 18 at 15:30
Rebuilding modern physics wasn't what I had in mind with this question, but I couldn't desist from asking it :) After all, it was only a calculated guess. That makes it even more impressive but it also can't really sit well with a critical scientist. Hence the attempts to derive the Born rule from higher principles, I suppose :) – Wouter Feb 18 at 16:56
The irreducible empirical core of quantum mechanics is a probability calculus. It correlates the outcomes of measurements, so that one measurement (usually called the system's preparation) can be used to calculate the probabilities of the possible outcomes of another measurement. At the center of this probability calculus is the Trace Rule (a special case of which is the Born Rule). If you take away this Rule, you reduce quantum mechanics to pure fiction, since you have lost your only link between the mathematical formalism and what happens in the actual world.
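For concreteness, the trace rule for a single qubit looks like this in a minimal NumPy sketch (arbitrarily chosen state; the standard textbook formula $p_i = \mathrm{Tr}(\rho \Pi_i)$):

```python
import numpy as np

# Trace rule: p(i) = Tr(rho @ P_i) for a state rho and projectors P_i.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)     # an arbitrary pure qubit state
rho = np.outer(psi, psi.conj())              # its density matrix

P0 = np.diag([1.0, 0.0])                     # projector onto |0>
P1 = np.diag([0.0, 1.0])                     # projector onto |1>

p0 = np.trace(rho @ P0).real
p1 = np.trace(rho @ P1).real
print(p0, p1, p0 + p1)                       # roughly 0.5 0.5 1.0
```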
Scott Aaronson (researcher and well-known blogger on topics related to quantum computing) has some lecture notes where he discusses this in a kinda conversational format. Here's the link to the relevant lecture : http://www.scottaaronson.com/democritus/lec9.html
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445866942405701, "perplexity_flag": "middle"}
|
http://nrich.maths.org/6382&part=
|
### Shape and Territory
If for any triangle ABC tan(A - B) + tan(B - C) + tan(C - A) = 0 what can you say about the triangle?
### Napoleon's Hat
Three equilateral triangles ABC, AYX and XZB are drawn with the point X a moveable point on AB. The points P, Q and R are the centres of the three triangles. What can you say about triangle PQR?
### The Root Cause
Prove that if a is a natural number and the square root of a is rational, then it is a square number (an integer n^2 for some integer n.)
# Mind Your Ps and Qs
##### Stage: 5 Short Challenge Level:
Here are 16 propositions involving a real number $x$:
| | | | |
|--------------------------|------------------------|-------------|-------------------------|
| $x\int^x_0 ydy < 0$ | $x> 1$ | $0< x< 1$ | $x^2+4x+4 =0$ |
| $x=0$ | $\cos(x/2)> \sin(x/2)$ | $x> 2$ | $x=1$ |
| $2\int^{x^2}_0ydy> x^2$ | $x< 0$ | $x^2+x-2=0$ | $x=-2$ |
| $x^3> 1$ | $|x|> 1$ | $x> 4$ | $\int^x_0 \cos y dy =0$ |
[Note: the trig functions are measured in radians]
By choosing $p$ and $q$ from this list, how many correct mathematical statements of the form $p\Rightarrow q$ or $p\Leftrightarrow q$ can you make?
It is possible to rearrange the statements into four statements $p\Rightarrow q$ and four statements $p\Leftrightarrow q$. Can you work out how to do this?
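A rough way to screen candidate implications before trying to prove them is a numerical sweep over sample values of $x$; the sketch below can only falsify an implication, never establish it, and it treats the equality-type propositions crudely with a floating-point tolerance on a bounded grid, so surviving pairs still need a proper argument.

```python
import numpy as np

xs = np.linspace(-10, 10, 20001)   # sample points (step 0.001); the grid is a crude proxy

props = {
    "x*int_0^x y dy < 0":        lambda x: x**3 / 2 < 0,
    "x > 1":                     lambda x: x > 1,
    "0 < x < 1":                 lambda x: (0 < x) & (x < 1),
    "x^2 + 4x + 4 = 0":          lambda x: np.isclose(x**2 + 4*x + 4, 0),
    "x = 0":                     lambda x: np.isclose(x, 0),
    "cos(x/2) > sin(x/2)":       lambda x: np.cos(x/2) > np.sin(x/2),
    "x > 2":                     lambda x: x > 2,
    "x = 1":                     lambda x: np.isclose(x, 1),
    "2*int_0^{x^2} y dy > x^2":  lambda x: x**4 > x**2,
    "x < 0":                     lambda x: x < 0,
    "x^2 + x - 2 = 0":           lambda x: np.isclose(x**2 + x - 2, 0),
    "x = -2":                    lambda x: np.isclose(x, -2),
    "x^3 > 1":                   lambda x: x**3 > 1,
    "|x| > 1":                   lambda x: np.abs(x) > 1,
    "x > 4":                     lambda x: x > 4,
    "int_0^x cos y dy = 0":      lambda x: np.isclose(np.sin(x), 0),
}

def no_counterexample(p, q):
    P, Q = p(xs), q(xs)
    return not np.any(P & ~Q)          # "p => q" survives the sweep

candidates = [(a, b) for a in props for b in props
              if a != b and no_counterexample(props[a], props[b])]
print(len(candidates))
for a, b in candidates:
    print(f"{a}  =>  {b}")
```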
NOTES AND BACKGROUND
Logical thinking is at the heart of higher mathematics: In order to construct clear, correct arguments in ever more complicated situations mathematicians rely on clarity of language and logic. Logic is also at the heart of computer programming and circuitry. To find out more, look at the ideas surrounding the Adding Machine problem and related set of activities.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002217054367065, "perplexity_flag": "middle"}
|
http://theoryclass.wordpress.com/2011/03/29/expert-testing-part-3/
|
# Expert testing part 3
March 29, 2011 in Uncategorized | Tags: axiom of choice, Blackwell, expert testing, infinite games, zero-sum games
Department of self-promotion: sequential tests, Blackwell games and the axiom of determinacy.
Links to part 1 and part 2
Recap: At every day ${n}$ an outcome ${s_n\in \{0,1\}}$ is realized (`${1}$‘ means a rainy day, `${0}$‘ a non-rainy day). An expert claims to know the distribution of the stochastic process that generates the sequence ${s_0,s_1,\dots}$ of outcomes. A sequential test is given by a Borel set ${R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}}$ of infinite sequences of predictions and outcomes. An expert who delivers forecast ${f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}$ fails on a realization ${x=(s_0,s_1,\dots)\in X=\{0,1\}^{\mathbb N}}$ if ${\left(p_0,s_0,p_1,s_1,\dots\right)\in R}$ where ${p_k=f(s_0,\dots,s_{k-1})}$ is the prediction made by ${f}$ about ${s_k}$ given the previous outcomes ${s_0,\dots,s_{k-1}}$.
Let me remind you what I mean by `a test that does not reject the truth with probability ${1-\epsilon}$‘. I am going to write an equivalent definition to the one I used up to now, this time using the language of stochastic processes.
Definition 1 The test does not reject the truth with probability ${1-\epsilon}$ if for every ${\{0,1\}}$-valued stochastic process ${S_0,S_1,\dots}$ one has
$\displaystyle \mathop{\mathbb P}\bigl(\left(P_0,S_0,P_1,S_1,\dots\right)\in R\bigr)<\epsilon\ \ \ \ \ (1)$
where
$\displaystyle P_n=\mathop{\mathbb P}\left(S_n=1\left|S_0,\dots,S_{n-1}\right.\right)\ \ \ \ \ (2)$
We are going to prove the following theorem:
Theorem 2 Consider a sequential test that does not reject the true expert with probability ${1-\epsilon}$. Then for every ${\delta>0}$ there exists a ${\zeta\in\Delta({\mathcal F})}$ such that
$\displaystyle \zeta\bigl(\left\{f\,|\,f\text{ fails on }x\right\}\bigr)<\epsilon+\delta\ \ \ \ \ (3)$
for every ${x\in X}$.
So a charlatan, who doesn’t know anything about the true distribution of the process, can randomize a forecast according to ${\zeta}$ and pass the test with high probability regardless of the actual distribution.
— Discussion —
Before we delve into the proof of the theorem, a couple of words about where we are. Recall that a forecast ${f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}$ specifies the Expert’s prediction about rain tomorrow after every possible history ${\sigma=(s_0,\dots,s_{n-1})}$. We denote by ${{\mathcal F}}$ the set of all such forecasts. The most general tests are given by a function ${T:{\mathcal F}\rightarrow 2^X}$, and specify for every such ${f}$ the set of realizations ${x\in X=\{0,1\}^{\mathbb N}}$ over which the forecast ${f}$ fails. Since ${X}$ is infinite we know that there exist tests that pass a true expert and are not manipulable by a strategic charlatan.
Sequential tests have the additional property that the test’s verdict depends only on predictions made by ${f}$ along the realized path: When deciding whether a forecast ${f}$ passes or fails when the realization is ${x=(s_0,s_1,\dots)}$ the test only considers the predictions ${p_k=f(s_0,\dots,s_{k-1})}$ made by ${f}$ along x. We also say that the test does not depend on counter-factual predictions, i.e. predictions about the probability of rainy day after histories that never happen. It seems that counter-factual predictions would be irrelevant to testing anyway, but, as the theorem shows, if the test does not use counter-factual prediction then it is manipulable.
One situation in which sequential tests are the only available tests is when, instead of providing his entire forecast ${f}$ before any outcome is realized, at every day ${n}$ the expert only provides his prediction ${p_n}$ about the outcome ${s_n}$ of day ${n}$ just before it is realized. At infinity, all the information available to the tester is the sequence ${(p_0,s_0,p_1,s_1,\dots)}$ of predictions and realized outcomes.
— Sketch of Proof —
We can transform the expert’s story to a two-player normal form zero-sum game as we did before: Nature chooses a realization ${x\in X}$ and Expert chooses a forecast ${f\in{\mathcal F}}$. Then Expert pays Nature ${1}$ if ${f}$ fails on ${x}$ and ${0}$ otherwise. The fact that the test does not reject the true expert translates to the fact that the maximin of the game is small. If we knew that the minimax is also small then an optimal mixed strategy ${\zeta}$ for the Expert will satisfy (3). We only need to prove the existence of value, or as game theorists say, that the game is determined.
Unfortunately, this time we cannot use Fan’s Theorem since we made no topological assumption about the set ${R}$, so there is no hope to get semi-continuity of the payoff function. Indeed, as we shall see in a moment, the Normal form representation misses an important part of the expert’s story. Instead of using a normal form game, we are going to write the game in extensive form. I will call this game ${\Gamma}$.
1. The game is played in stages ${n=0,1,\dots}$.
2. At stage ${n}$ Nature chooses an outcome ${s_n}$ and Expert chooses a prediction ${p_n\in [0,1]}$ simultaneously and independently.
3. Nature does not monitor past actions of Expert.
4. Expert monitors past actions ${s_0,\dots,s_{n-1}}$ of Nature.
5. At infinity, Expert pays Nature ${1}$ if ${\left(p_0,s_0,p_1,s_1,\dots\right)\in R}$ and ${0}$ otherwise.
Now I am going to assume that you are familiar with the concept of strategy in extensive form games, and are aware of Kuhn’s Theorem about the equivalence between behavioral strategies and mixtures of pure strategies (I will make implicit uses of both directions of Kuhn’s Theorem in what follows). We can then look at the normal form representation of this game, in which the players choose pure strategies. A moment’s thought will convince you that this is exactly the game from the previous paragraph: Nature’s set of pure strategies is ${X}$, Expert’s set of pure strategies is ${{\mathcal F}}$ and the payoff for a strategy profile ${(x,f)}$ is ${\mathbf{1}_{T(f)}(x)}$. So far no real gain. Extensive form games in which one of the players doesn’t monitor the opponent’s actions need not be determined. In order to get a game with a value we are going to twist the game ${\Gamma}$, and allow Nature to observe past actions of the Expert player. This makes life more difficult for the Expert. Up to a minor inaccuracy which I will comment on later, the resulting game is what’s called a Blackwell game, and it admits a value by a seminal theorem of Donald Martin.
Here is the game after the twist. I call this game ${\Gamma^\ast}$.
1. The game is played in stages ${n=0,1,\dots}$.
2. At stage ${n}$ Nature chooses an outcome ${s_n}$ and Expert chooses a prediction ${p_n\in [0,1]}$ simultaneously and independently.
3. Each player monitors past actions of the opponent.
4. At infinity, Expert pays Nature ${1}$ if ${\left(p_0,s_0,p_1,s_1,\dots\right)\in R}$ and ${0}$ otherwise.
Now if you internalized the method of proving manipulability that I was advocating in the previous two episodes, you know what’s left to prove: that the maximin of ${\Gamma^\ast}$ is small, i.e. that, for every strategy of Nature, Expert has a response that makes the payoff at most ${\epsilon}$. We know this is true for the game ${\Gamma}$ but in ${\Gamma^\ast}$ Nature is more powerful.
Here is the most important insight of the proof: The fact that an expert who knows the distribution of the process can somehow pass the test ${T}$ implies that the maximin in ${\Gamma}$ is small, but this fact alone doesn’t say anything about the maximin of ${\Gamma^\ast}$. To show that the maximin of ${\Gamma^\ast}$ is also small we will use the fact that the way such an expert passes the test ${T}$ is by providing the correct forecast. Until now the distinction was not really important to us. Now it comes into play.
Let ${g:\left([0,1]\times \{0,1\}\right)^{<{\mathbb N}}\rightarrow [0,1]}$ be a behavioral strategy of Nature in ${\Gamma^\ast}$, i.e. a contingent plan that specifies the probability that Nature plays ${1}$ after every history of predictions and outcomes. Let ${f:\{0,1\}^{<{\mathbb N}}\rightarrow [0,1]}$ be the pure strategy of Expert in ${\Gamma^\ast}$ that is given by
$\displaystyle \begin{array}{rcl} &&f(s_0,s_1,\dots,s_{n-1})=\\&&g\bigl(f(), s_0, f\left(s_0\right), s_1, \dots,f\left(s_0,s_1,\dots,s_{n-2}\right),s_{n-1}\bigr)\end{array}$
So the pure action taken by the Expert player at day ${n}$ is the mixed action that Nature is going to take at day ${n}$ according to her strategy ${g}$. Now assume that Nature follows ${g}$ and Expert follows ${f}$. Let ${P_n}$ and ${S_n}$ be the random variables representing the actions taken by Expert and Nature at day ${n}$. Then the stochastic process ${P_0,S_0,P_1,\dots}$ satisfies (2). Therefore from (1) we get that the expected payoff when Nature plays ${g}$ and Expert plays ${f}$ is indeed smaller than ${\epsilon}$.
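A toy simulation of this mirroring construction may help fix ideas (a sketch with a hypothetical behavioral strategy `g` and a finite horizon; the point is only that the realized prediction at each stage equals the conditional probability with which Nature plays ${1}$):

```python
import random

def g(history):
    """Hypothetical behavioral strategy of Nature: probability of outcome 1,
    as a function of the past (prediction, outcome) pairs."""
    if not history:
        return 0.5
    _, last_outcome = history[-1]
    return 0.7 if last_outcome == 1 else 0.3

def play(horizon, rng):
    """Expert mirrors g: the prediction p_n equals Nature's mixed action,
    hence p_n = P(s_n = 1 | past outcomes)."""
    history = []
    for _ in range(horizon):
        p = g(history)                      # Expert's prediction at stage n
        s = 1 if rng.random() < p else 0    # Nature's realized outcome
        history.append((p, s))
    return history

print(play(10, random.Random(0)))
```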
Now for the minor inaccuracy that I mentioned: For Martin’s Theorem we need the set of actions at every stage to be finite. We can handle this obstacle by restricting Expert’s action at every stage to a grid and applying the coupling argument.
— Non-Borel Tests —
What about pathological sequential tests that are given by a non-Borel set ${R\subseteq \bigl([0,1]\times\{0,1\}\bigr)^{\mathbb N}}$ ? Well, if, like me, you find it preposterously impossible to choose one sock from each of infinitely many pairs of socks, then perhaps you live in the AD paradise. Here every set is measurable, Blackwell Games are determined even when the payoff function is not Borel, and Theorem 2 is true without the assumption that ${R}$ is Borel. See, the AD universe is a paradise for charlatans, since they can do as well as the true experts.
If, on the other hand, you subscribe to the axiom of choice, then you have a non-manipulable test:
Theorem 3 There exists a sequential test with a non-Borel set ${R}$ that does not reject the truth with probability ${1}$ and such that for every ${\zeta\in\Delta({\mathcal F})}$ there exists some realization ${x=(s_0,s_1,\dots)}$ such that
$\displaystyle \zeta\bigl(\left\{f\,|\,f\text{ fails on }x\right\}\bigr)=1.$
— Summary —
If you plan to remember one conclusion from my last three posts, I suggest you pick this: There exist non-manipulable tests, but they must rely on counter-factual predictions, or be extremely pathological.
Muchas gracias to everyone who read to this point. Did I mention that I have a paper about this stuff ?
## 1 comment
Thank you Eran, you have done a really good job!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 108, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933712363243103, "perplexity_flag": "head"}
|
http://en.wikisource.org/wiki/Elements_of_the_Differential_and_Integral_Calculus/Chapter_VIII
|
# Elements of the Differential and Integral Calculus/Chapter VIII
From Wikisource
Elements of the Differential and Integral Calculus — Chapter VIII, § 79-84, by William Anthony Granville
## CHAPTER VIII
MAXIMA AND MINIMA. POINTS OF INFLECTION. CURVE TRACING
79. Introduction. A great many practical problems occur where we have to deal with functions of such a nature that they have a greatest (maximum) value or a least (minimum) value,[1] and it is very important to know what particular value of the variable gives such a value of the function. For instance, suppose that it is required to find the dimensions of the rectangle of greatest area that can be inscribed in a circle of radius 5 inches. Consider the circle in the following figure:
Inscribe any rectangle, as BD.
Let CD = x; then DE = $\sqrt{100 - x^2}$, and the area of the rectangle is evidently
(1) $A = x\sqrt{100 - x^2}$.
That a rectangle of maximum area must exist may be seen as follows: Let the base CD (= x) increase to 10 inches (the diameter); then the altitude $DE = \sqrt{100 - x^2}$ will decrease to zero and the area will become zero. Now let the base decrease to zero; then the altitude will increase to 10 inches and the area will again become zero. It is therefore intuitionally evident that there exists a greatest rectangle. By a careful study of the figure we might suspect that when the rectangle becomes a square its area would be the greatest, but this would at best be mere guesswork. A better way would evidently be to plot the graph of the function (1) and note its behavior. To aid us in drawing the graph of (1), we observe that
(a) from the nature of the problem it is evident that x and A must both be positive; and
(b) the values of x range from zero to 10 inclusive.
Now construct a table of values and draw the graph.
What do we learn from the graph?
| x | A |
|---|---|
| 0 | 0 |
| 1 | 9.9 |
| 2 | 19.6 |
| 3 | 28.6 |
| 4 | 36.6 |
| 5 | 43.0 |
| 6 | 48.0 |
| 7 | 49.7 |
| 8 | 48.0 |
| 9 | 39.6 |
| 10 | 0.0 |
(a) If carefully drawn, we may find quite accurately the area of the rectangle corresponding to any value x by measuring the length of the corresponding ordinate. Thus,
when $x = OM = 3$ inches,
then $A = MP = 28.6$ square inches;
and when $x = ON = 4\frac{1}{2}$ inches,
then $A = NQ =$ about 39.8 sq. in. (found by measurement).
(b) There is one horizontal tangent (RS). The ordinate TH from its point of contact T is greater than any other ordinate. Hence this discovery: One of the inscribed rectangles has evidently a greater area than any of the others. In other words, we may infer from this that the function defined by (1) has a maximum value. We cannot find this value (= HT) exactly by measurement, but it is very easy to find, using Calculus methods. We observed that at T the tangent was horizontal; hence the slope will be zero at that point (Illustrative Example 1, p. 74 [§64]). To find the abscissa of T we then find the first derivative of (1), place it equal to zero, and solve for x. Thus
(1) $A = x\sqrt{100 - x^2}$,
$\frac{dA}{dx} = \frac{100 - 2 x^2}{\sqrt{100 - x^2}}$,
$\frac{100 - 2 x^2}{\sqrt{100 - x^2}} = 0$.
Solving, $x = 5\sqrt{2}$.
Substituting back, we get $DE = \sqrt{100 - x^2} = 5\sqrt{2}$.
Hence the rectangle of maximum area inscribed in the circle is a square of area
$A = CD \times DE = 5\sqrt{2} \times 5\sqrt{2} = 50$ square inches. The length of HT is therefore 50.
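The same critical-point computation can be reproduced with a computer algebra system; a short SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = x * sp.sqrt(100 - x**2)          # area of the inscribed rectangle, 0 < x < 10

critical = sp.solve(sp.Eq(sp.diff(A, x), 0), x)
print(critical)                      # [5*sqrt(2)]
print(A.subs(x, critical[0]))        # 50
```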
Take another example. A wooden box is to be built to contain 108 cu. ft. It is to have an open top and a square base. What must be its dimensions in order that the amount of material required shall be a minimum; that is, what dimensions will make the cost the least?
Let x = length of side of square base in feet,
and y = height of box.
Since the volume of the box is given, however, y may be found in terms of x. Thus
volume $=\ x^2 y\ =\ 108$; ∴ $y\ =\ \frac{108}{x^2}$.
We may now express the number (= M) of square feet of lumber required as a function of x as follows:
area of base = $x^2$ sq. ft.,
and area of four sides $= 4xy = \frac{432}{x}$ sq. ft. Hence
(2) $M = x^2 + \frac{432}{x}$
| x | M |
|---|---|
| 1 | 433 |
| 2 | 220 |
| 3 | 153 |
| 4 | 124 |
| 5 | 111 |
| 6 | 108 |
| 7 | 111 |
| 8 | 118 |
| 9 | 129 |
| 10 | 143 |
is a formula giving the number of square feet required in any such box having a capacity of 108 cu. ft. Draw a graph of (2).
What do we learn from the graph?
(a) If carefully drawn, we may measure the ordinate corresponding to any length (= x) of the side of the square base and so determine the number of square feet of lumber required.
(b) There is one horizontal tangent (RS). The ordinate from its point of contact T is less than any other ordinate. Hence this discovery: One of the boxes evidently takes less lumber than any of the others. In other words, we may infer that the function defined by (2) has a minimum value. Let us find this point on the graph exactly, using our Calculus. Differentiating (2) to get the slope at any point, we have
$\frac{dM}{dx} = 2x - \frac{432}{x^2}$.
At the lowest point T the slope will be zero. Hence
$2x - \frac{432}{x^2} = 0$;
that is, when x = 6 the least amount of lumber will be needed.
Substituting in (2), we see that this is
M = 108 sq. ft.
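Again, the computation is easy to reproduce (a short SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
M = x**2 + 432 / x                   # square feet of lumber, from equation (2)

critical = sp.solve(sp.Eq(sp.diff(M, x), 0), x)
print(critical)                      # [6]
print(M.subs(x, critical[0]))        # 108
```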
The fact that a least value of M exists is also shown by the following reasoning. Let the base increase from a very small square to a very large one. In the former case the height must be very great and therefore the amount of lumber required will be large. In the latter case, while the height is small, the base will take a great deal of lumber. Hence M varies from a large value, grows less, then increases again to another large value. It follows, then, that the graph must have a "lowest" point corresponding to the dimensions which require the least amount of lumber, and therefore would involve the least cost.
We will now proceed to the treatment in detail of the subject of maxima and minima.
80. Increasing and decreasing functions.[2] A function is said to be increasing when it increases as the variable increases and decreases as the variable decreases. A function is said to be decreasing when it decreases as the variable increases and increases as the variable decreases.
The graph of a function indicates plainly whether it is increasing or decreasing. For instance, consider the function $a^x$ whose graph (Fig. a) is the locus of the equation
$y = a^x \qquad (a > 1)$
As we move along the curve from left to right the curve is rising; that is, as x increases the function (= y) always increases. Therefore $a^x$ is an increasing function for all values of x.
Fig. a.
On the other hand, consider the function $(a - x)^3$ whose graph (Fig. b) is the locus of the equation
$y = (a - x)^3$.
Now as we move along the curve from left to right the curve is falling; that is, as x increases, the function (= y) always decreases. Hence $(a - x)^3$ is a decreasing function for all values of x.
That a function may be sometimes increasing and sometimes decreasing is shown by the graph (Fig. c) of
$y = 2x^3 - 9x^2 + 12x - 3$.
As we move along the curve from left to right the curve rises until we reach the point A, then it falls from A to B, and to the right of B it is always rising. Hence
(a) from x = $-\infty$ to x = 1 the function is increasing;
(b) from x = 1 to x = 2 the function is decreasing;
(c) from x = 2 to x = $+\infty$ the function is increasing.
The student should study the curve carefully in order to note the behavior of the function when x = 1 and x = 2. Evidently A and B are turning points. At A the function ceases to increase and commences to decrease; at B, the reverse is true. At A and B the tangent (or curve) is evidently parallel to the axis of X, and therefore the slope is zero.
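These statements about the intervals of increase and decrease are easy to verify by machine. The following short sketch (using the SymPy library purely as an illustration; any computer-algebra system would serve) factors the derivative and solves the sign inequalities:

```python
# Check of the intervals of increase and decrease of y = 2x^3 - 9x^2 + 12x - 3
# (an illustrative sketch; SymPy is an arbitrary choice of tool).
import sympy as sp

x = sp.symbols('x', real=True)
y = 2*x**3 - 9*x**2 + 12*x - 3

dy = sp.factor(sp.diff(y, x))                      # 6*(x - 1)*(x - 2)
print(dy)
print(sp.solve_univariate_inequality(dy > 0, x))   # increasing: x < 1 or x > 2
print(sp.solve_univariate_inequality(dy < 0, x))   # decreasing: 1 < x < 2
```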
81. Tests for determining when a function is increasing and when decreasing. It is evident from Fig. c that at a point, as C, where a function
$y = f(x)$
is increasing, the tangent in general makes an acute angle with the axis of X; hence
slope = tan $\tau = \frac{dy}{dx} = f'(x) =$ a positive number.
Similarly, at a point, as D, where a function is decreasing, the tangent in general makes an obtuse angle with the axis of X; therefore
slope = tan $\tau = \frac{dy}{dx} = f'(x) =$ a negative number.[3]
In order, then, that the function shall change from an increasing to a decreasing function, or vice versa, it is a necessary and sufficient condition that the first derivative shall change sign. But this can only happen for a continuous derivative by passing through the value zero. Thus in Fig. c, p. 107, as we pass along the curve the derivative (= slope) changes sign at A and B where it has the value zero. In general, then, we have at turning points
(18) $\frac{dy}{dx} = f'(x) = 0$.
The derivative is continuous in nearly all our important applications, but it is interesting to note the case when the derivative (= slope) changes sign by passing through $\infty$.[4] This would evidently happen at the points B, E, G in the following figure, where the tangents (and curve) are perpendicular to the axis of X. At such exceptional turning points
$\frac{dy}{dx} = f'(x) = \infty$;
or, what amounts to the same thing,
$\frac{1}{f'(x)} = 0$.
82. Maximum and minimum values of a function. A maximum value of a function is one that is greater than any values immediately preceding or following.
A minimum value of a function is one that is less than any values immediately preceding or following.
For example, in Fig. c, p. 107 [§80], it is clear that the function has a maximum value MA (= y = 2) when x = 1, and a minimum value NB (= y = 1) when x = 2.
The student should observe that a maximum value is not necessarily the greatest possible value of a function nor a minimum value the least. For in Fig. c it is seen that the function (= y) has values to the right of B that are greater than the maximum MA, and values to the left of A that are less than the minimum NB.
A function may have several maximum and minimum values. Suppose that the above figure represents the graph of a function $f(x)$.
At B, D, G, I, K the function is a maximum, and at C, E, H, J a minimum. That some particular minimum value of a function may be greater than some particular maximum value is shown in the figure, the minimum values at C and H being greater than the maximum value at K.
At the ordinary turning points C, D, H, I, J, K the tangent (or curve) is parallel to OX; therefore
slope = $\frac{dy}{dx} = f'(x) = 0$.
At the exceptional turning points B, E, G the tangent (or curve) is perpendicular to OX, giving
slope = $\frac{dy}{dx} = f'(x) = \infty$.
One of these two conditions is then necessary in order that the function shall have a maximum or a minimum value. But such a condition is not sufficient; for at F the slope is zero and at A it is infinite, and yet the function has neither a maximum nor a minimum value at either point. It is necessary for us to know, in addition, how the function behaves in the neighborhood of each point. Thus at the points of maximum value, B, D, G, I, K, the function changes from an increasing to a decreasing function, and at the points of minimum value, C, E, H, J, the function changes from a decreasing to an increasing function. It therefore follows from § 81 that at maximum points
slope = $\frac{dy}{dx} = f'(x)$ must change from + to -,
and at minimum points
slope = $\frac{dy}{dx} = f'(x)$ must change from - to +
when we move along the curve from left to right.
At such points as A and F where the slope is zero or infinite, but which are neither maximum nor minimum points,
slope = $\frac{dy}{dx} = f'(x)$ does not change sign.
We may then state the conditions in general for maximum and minimum values of $f(x)$ for certain values of the variable as follows:
(19) $f(x)$ is a maximum if $f'(x) = 0$, and $f'(x)$ changes from + to -.
(20) $f(x)$ is a minimum if $f'(x) = 0$, and $f'(x)$ changes from - to +.
The values of the variable at the turning points of a function are called critical values; thus x = 1 and x = 2 are the critical values of the variable for the function whose graph is shown in Fig. c, p. 107. The critical values at turning points where the tangent is parallel to OX are evidently found by placing the first derivative equal to zero and solving for real values of x, just as under § 64, p. 73.[5]
To determine the sign of the first derivative at points near a particular turning point, substitute in it, first, a value of the variable just a little less than the corresponding critical value, and then one a little greater.[6] If the first gives + (as at L, Fig. d, p. 109 [§82]) and the second - (as at M), then the function (= y) has a maximum value in that interval (as at I).
If the first gives - (as at P) and the second + (as at N), then the function (= y) has a minimum value in that interval (as at C).
If the sign is the same in both cases (as at Q and R), then the function (= y) has neither a maximum nor a minimum value in that interval (as at F).[7]
We shall now summarize our results into a compact working rule.
83. First method for examining a function for maximum and minimum values. Working rule.
FIRST STEP. Find the first derivative of the function.
SECOND STEP. Set the first derivative equal to zero[8] and solve the resulting equation for real roots in order to find the critical values of the variable.
THIRD STEP. Write the derivative in factor form; if it is algebraic, write it in linear form.
FOURTH STEP. Considering one critical value at a time, test the first derivative, first for a value a trifle less and then for a value a trifle greater than the critical value. If the sign of the derivative is first + and then -, the function has a maximum value for that particular critical value of the variable; but if the reverse is true, then it has a minimum value. If the sign does not change, the function has neither.
In the problem worked out on p. 104 [§79] we showed by means of the graph of the function
$A = x\sqrt{100 - x^2}$
that the rectangle of maximum area inscribed in a circle of radius 5 inches contained 50 square inches. This may now be proved analytically as follows by applying the above rule.
Solution. $\ f(x)$ $=\ x\sqrt{100 - x^2}$
First step. $\ f'(x)$ $=\ \frac{100 - 2x^2}{\sqrt{100 - x^2}}$.
Second step. $\frac{100 - 2x^2}{\sqrt{100 - x^2}}$ $=\ 0$.
$\ x$ $=\ 5\sqrt{2}$
which is the critical value. Only the positive sign of the radical is taken, since, from the nature of the problem, the negative sign has no meaning.
Third step. $\ f'(x)$ $=\ \frac{2(5\sqrt{2} - x)(5\sqrt{2} + x)}{\sqrt{(10 - x)(10 + x)}}$.
Fourth step When $x < 5\sqrt{2}$, $\ f'(x)$ $=\ \frac{2(+)(+)}{\sqrt{(+)(+)}} = +$.
When $x > 5\sqrt{2}$, $\ f'(x)$ $=\ \frac{2(-)(+)}{\sqrt{(+)(+)}} = -$.
Since the sign of the first derivative changes from + to - at $x = 5\sqrt{2}$, the function has a maximum value
$f(5\sqrt{2}) = 5\sqrt{2} \cdot 5\sqrt{2} = 50$. Ans.
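The same steps can be carried out by machine. The sketch below (SymPy is used only as a convenient illustration; the symbols mirror the worked example above) reproduces the critical value, the sign change, and the maximum area:

```python
# First method applied to A = x*sqrt(100 - x^2), as in the worked example above
# (an illustrative sketch only).
import sympy as sp

x = sp.symbols('x', positive=True)
A = x * sp.sqrt(100 - x**2)

dA = sp.diff(A, x)                               # first step
critical = sp.solve(sp.Eq(dA, 0), x)             # second step: [5*sqrt(2)]
print(critical)

x0 = critical[0]
print(dA.subs(x, 7) > 0, dA.subs(x, 7.1) > 0)    # True, False: sign changes from + to -
                                                 # (note 7 < 5*sqrt(2) < 7.1)
print(sp.simplify(A.subs(x, x0)))                # 50, the maximum area
```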
84. Second method for examining a function for maximum and minimum values. From (19), p. 110 [§82], it is clear that in the vicinity of a maximum value of $f(x)$, in passing along the graph from left to right,
$f'(x)$ changes from + to 0 to -.
Hence $f'(x)$ is a decreasing function, and by §81 we know that its derivative, i.e. the second derivative [= $f''(x)$] of the function itself, is negative or zero.
Similarly, we have, from (20), p. 110, that in the vicinity of a minimum value of $f(x)$
$f'(x)$ changes from - to 0 to +.
Hence $f'(x)$ is an increasing function and by §81 it follows that $f''(x)$ is positive or zero.
The student should observe that $f''(x)$ is positive not only at minimum points (as at A) but also at points such as P. For, as a point passes through P in moving from left to right,
slope = $\tan \tau = \frac{dy}{dx} = f'(x)$ is an increasing function.
At such a point the curve is said to be concave upwards.
Similarly, $f''(x)$ is negative not only at maximum points (as at B) but also at points such as Q. For, as a point passes through Q,
slope = $\tan \tau = \frac{dy}{dx} = f'(x)$ is a decreasing function.
At such a point the curve is said to be concave downwards.[9]
We may then state the sufficient conditions for maximum and minimum values of $f(x)$ for certain values of the variable as follows:
(21) $f(x)$ is a maximum if $f'(x) = 0$ and $f''(x)$ = a negative number.
(22) $f(x)$ is a minimum if $f'(x) = 0$ and $f''(x)$ = a positive number.
Following is the corresponding working rule.
FIRST STEP. Find the first derivative of the function.
SECOND STEP. Set the first derivative equal to zero and solve the resulting equation for real roots in order to find the critical values of the variable.
THIRD STEP. Find the second derivative.
FOURTH STEP. Substitute each critical value for the variable in the second derivative. If the result is negative, then the function is a maximum for that critical value; if the result is positive, the function is a minimum.
When $f''(x) = 0$, or does not exist, the above process fails, although there may even then be a maximum or a minimum; in that case the first method given in the last section still holds, being fundamental. Usually this second method does apply, and when the process of finding the second derivative is not too long or tedious, it is generally the shortest method.
Let us now apply the above rule to test analytically the function
$M = x^2 + \frac{432}{x}$
found in the example worked out on p. 105 [§79].
Solution. $\ f(x)$ $=\ x^2 + \frac{432}{x}$.
First step. $\ f'(x)$ $=\ 2x - \frac{432}{x^2}$.
Second step. $\ 2x - \frac{432}{x^2}$ $=\ 0$, whence $x = 6$, the critical value.
Third step. $\ f''(x)$ $=\ 2 + \frac{864}{x^3}$.
Fourth step. $\ f''(6)$ $=\ +$. Hence
$\ f(6)$ $=\ 108$, minimum value.
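The four steps of the second method can likewise be checked mechanically; in the following sketch (again using SymPy purely for illustration) the positive second derivative at the critical value confirms the minimum:

```python
# Second method applied to M = x^2 + 432/x (an illustrative sketch only).
import sympy as sp

x = sp.symbols('x', positive=True)
M = x**2 + 432/x

dM = sp.diff(M, x)                 # first step: 2*x - 432/x**2
crit = sp.solve(sp.Eq(dM, 0), x)   # second step: [6]
d2M = sp.diff(M, x, 2)             # third step: 2 + 864/x**3

x0 = crit[0]
print(x0, d2M.subs(x, x0) > 0)     # 6 True  -> a minimum
print(M.subs(x, x0))               # 108 square feet of lumber
```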
The work of finding maximum and minimum values may frequently be simplified by the aid of the following principles, which follow at once from our discussion of the subject.
(a) The maximum and minimum values of a continuous function must occur alternately.
(b) When c is a positive constant, $c \cdot f(x)$ is a maximum or a minimum for such values of x, and such only, as make $f(x)$ a maximum or a minimum.
Hence, in determining the critical values of x and testing for maxima and minima, any constant factor may be omitted.
When c is negative, $c \cdot f(x)$ is a maximum when $f(x)$ is a minimum, and conversely.
(c) If c is a constant,
$f(x)$ and $c + f(x)$
have maximum and minimum values for the same values of x.
Hence a constant term may be omitted when finding critical values of x and testing.
In general we must first construct, from the conditions given in the problem, the function whose maximum and minimum values are required, as was done in the two examples worked out on pp. 103-106 [§79]. This is sometimes a problem of considerable difficulty. No rule applicable in all cases can be given for constructing the function, but in a large number of problems we may be guided by the following
General directions.
(a) Express the function whose maximum or minimum is involved in the problem.
(b) If the resulting expression contains more than one variable, the conditions of the problem will furnish enough relations between the variables so that all may be expressed in terms of a single one.
(c) To the resulting function of a single variable apply one of our two rules for finding maximum and minimum values.
(d) In practical problems it is usually easy to tell which critical value will give a maximum and which a minimum value, so it is not always necessary to apply the fourth step of our rules.
(e) Draw the graph of the function (p. 104 [§79]) in order to check the work.
1. There may be more than one of each, as illustrated on p. 109 [§82].
2. The proofs given here depend chiefly on geometric intuition. The subject of Maxima and Minima will be treated analytically in §108, p. 167.
3. Conversely, for any given value of x, if $f'(x) = +$, then $f(x)$ is increasing; if $f'(x) = -$, then $f(x)$ is decreasing. When $f'(x) = 0$, we cannot decide without further investigation whether $f(x)$ is increasing or decreasing.
4. By this is meant that its reciprocal passes through the value zero.
5. Similarly, if we wish to examine a function at exceptional turning points where the tangent is perpendicular to OX, we set the reciprocal of the first derivative equal to zero and solve to find critical values.
6. In this connection the term "little less," or "trifle less," means any value between the next smaller root (critical value) and the one under consideration; and the term "little greater," or "trifle greater," means any value between the root under consideration and the next larger one.
7. A similar discussion will evidently hold for the exceptional turning points B, E, and A respectively.
8. When the first derivative becomes infinite for a certain value of the independent variable, then the function should be examined for such a critical value of the variable, for it may give maximum or minimum values, as at B, E, or A (Fig. d, p. 109). See footnote on p. 108 [§81].
9. At a point where the curve is concave upwards we sometimes say that the curve has a positive bending, and where it is concave downwards a negative bending.
http://math.stackexchange.com/questions/124861/solving-a-linear-system-with-an-initial-condition
Solving a linear system with an initial condition
So there is a linear system
$\frac{du}{dt} = - \left( \begin{array}{cc} 4 & -3 \\ 6 & -5 \end{array} \right) \frac{du}{dx}$
with an initial condition
$u(x, 0)= \left( \begin{array}{c} -tanh(\lambda x) \\ 0 \end{array} \right)$
Now to start this off I have manipulated the first equation to get this
$\left( \begin{array}{ccc} 1 & \frac{-3}{4} & -\frac{1}{4} \frac{du_{1}}{dt}\\ 0 & \frac{1}{12} & \frac{1}{6} \frac{du_{2}}{dt}-\frac{1}{4} \frac{du_{1}}{dt}\end{array} \right) \frac{du}{dx}$
which can be rewritten as
$\left( \begin{array}{ccc} 1 & \frac{-3}{4} & -\frac{1}{4} \frac{du_{1}}{dt}\\ 0 & \frac{1}{12} & \frac{1}{6} \frac{du_{2}}{dt}-\frac{1}{4} \frac{du_{1}}{dt}\end{array} \right) \frac{du}{dx} =>\left( \begin{array}{cc} 1 & \frac{-3}{4}\\ 0 & \frac{1}{12}\end{array} \right) \frac{du}{dx} = \left( \begin{array}{c} -\frac{1}{4} \frac{du_{1}}{dt} \\ \frac{1}{6} \frac{du_{2}}{dt}-\frac{1}{4} \frac{du_{1}}{dt} \end{array} \right)$
but I'm not sure how to deal with the derivatives. Would I use what I already have but just plug in the initial condition to $u_{1}$ and $u_{2}$ or am I using the wrong method?
-
I added the tag "differential equations", I think it is more than appropriate here than just "linear-algebra". – Patrick Da Silva Mar 27 '12 at 0:26
1 Answer
If one writes $u=(u_1,u_2)$ and one diagonalizes the matrix $A$ (its eigenvalues are $1$ and $-2$), one gets the simpler differential system $\frac{\partial v_1}{\partial t}=-\frac{\partial v_1}{\partial x}$ and $\frac{\partial v_2}{\partial t}=2\frac{\partial v_2}{\partial x}$, with $v_1=2u_1-u_2$ and $v_2=u_1-u_2$ (these combinations come from the left eigenvectors $(2,-1)$ and $(1,-1)$ of $A$).
For any $a$, the differential equation $\frac{\partial v}{\partial t}=a\frac{\partial v}{\partial x}$ means that $v$ is a function of $x+at$, hence that $v(x,t)=v(x+at,0)$. Here, $v_1(x,t)=v_1(x-t,0)$ and $v_2(x,t)=v_2(x+2t,0)$.
Finally, $u_1=v_1-v_2$ and $u_2=v_1-2v_2$ hence $$u_1(x,t)=2u_1(x-t,0)-u_2(x-t,0)-u_1(x+2t,0)+u_2(x+2t,0),$$ and $$u_2(x,t)=2u_1(x-t,0)-u_2(x-t,0)-2u_1(x+2t,0)+2u_2(x+2t,0).$$ In the special case when $u_1(x,0)=u_0(x)$ and $u_2(x,0)=0$, one gets $$u_1(x,t)=2u_0(x-t)-u_0(x+2t),\quad u_2(x,t)=2u_0(x-t)-2u_0(x+2t).$$
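(For whoever wants to double-check: the closed-form solution above can be verified symbolically. The following SymPy sketch, added only as a verification aid with $u_0$ an arbitrary smooth profile, confirms that it satisfies both the system and the initial condition.)

```python
# Verify that u = (2*u0(x-t) - u0(x+2t), 2*u0(x-t) - 2*u0(x+2t)) solves
# u_t = -A u_x with u(x,0) = (u0(x), 0).  Illustrative check only.
import sympy as sp

x, t = sp.symbols('x t')
u0 = sp.Function('u0')

u = sp.Matrix([2*u0(x - t) - u0(x + 2*t),
               2*u0(x - t) - 2*u0(x + 2*t)])
A = sp.Matrix([[4, -3], [6, -5]])

print(sp.simplify(sp.diff(u, t) + A * sp.diff(u, x)))  # Matrix([[0], [0]])
print(sp.simplify(u.subs(t, 0)))                       # Matrix([[u0(x)], [0]])
```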
-
http://math.stackexchange.com/questions/36942/determination-of-the-joint-distribution
# Determination of the joint distribution
Consider random variables $x$ and $\theta$. Suppose $f(\theta)$ and $f(\theta|x)$ are given. Suppose, in addition, that $f(\theta|x)$ satisfies the strict monotone likelihood ratio property. That is, for every $\theta >\theta'$ and $x > x'$, we have $f(\theta'|x')f(\theta|x)>f(\theta|x')f(\theta'|x).$ Will they uniquely determine the joint distribution $f(x, \theta)$?
-
## 1 Answer
No. For example, if $f(\theta|x)=f(\theta)$ for every $x$, the distribution of $X$ can be any distribution.
Edit to answer the revised version of the question There is no reason to believe the SMLR property + the distribution of $\Theta$ + the conditional distribution of $\Theta$ conditional on $X$ determine the joint distribution of $(\Theta,X)$ or, what is equivalent in your context, the distribution of $X$. If you have reasons to believe they do, you could explain why.
-
Yes but I was in the middle of editing the question while you posted an answer. So please take a look at the question again and I will very much appreciate your comments. – Thales May 4 '11 at 13:30
To continue on my edit: More generally, browsing through the questions you asked on math.SE, I have never seen you show anything about what you had tried, what was your intuition, or where you got stuck. This runs contrary to the specific instructions given on this site and cannot improve the quality of the answers you receive here. – Did May 7 '11 at 13:55
So you did not even try with 2*2 case? – Thales May 8 '11 at 17:49
http://www.physicsforums.com/showpost.php?p=1909445&postcount=2
Thread: Help with Reciprocal Space
There are different ways to think about reciprocal space. For one, you can think of it as a purely geometrical thing. If you have a lattice with vectors $$\mathbf{a}_i$$, then the reciprocal lattice is defined by $$\mathbf{a}_i \cdot \mathbf{g}_j = 2\pi\delta_{ij}$$. This equation defines a lattice with vectors $$\mathbf{g}_i$$. In mathematics this would probably be called a dual lattice, but in physics we call it a reciprocal lattice because of the units (g has units of 1 / length).

Another way reciprocal space shows up is if you look at the solution to the Schroedinger equation for a periodic potential: Bloch's theorem says that the wavefunctions are a product of two periodic functions, of the form $$\psi_{kn}(r) = u_{kn}(r) e^{i k \cdot r}$$ where k is the so-called pseudomomentum vector, which serves as a quantum number. k is restricted to the first Brillouin zone, where $$k = l_1 g_1 + l_2 g_2 + l_3 g_3$$ with the l's restricted to the range [-0.5,0.5]. So k is restricted to wavelengths which are longer than a lattice vector.

The other function $$u_{kn}(r)$$ is periodic within the unit cell, so if you expand it in planewaves, all the planewaves would be like $$n_1 g_1 + n_2 g_2 + n_3 g_3$$ where the n's are integers; thus these wavelengths are all the lattice constants divided by integers.
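As a purely numerical illustration of the geometrical definition (this example is an addition; the FCC primitive vectors below are just one convenient choice), the reciprocal vectors can be read off by inverting the matrix whose rows are the $$\mathbf{a}_i$$:

```python
# Reciprocal lattice vectors g_j satisfying a_i . g_j = 2*pi*delta_ij
# (illustrative sketch; the FCC primitive vectors are an arbitrary example).
import numpy as np

a0 = 1.0
A = 0.5 * a0 * np.array([[0, 1, 1],
                         [1, 0, 1],
                         [1, 1, 0]])        # rows are a_1, a_2, a_3

G = 2 * np.pi * np.linalg.inv(A).T          # rows are g_1, g_2, g_3

print(np.round(A @ G.T / (2 * np.pi), 10))  # identity matrix, i.e. a_i . g_j = 2*pi*delta_ij
```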
http://mathoverflow.net/questions/15841/how-do-the-compact-hausdorff-topologies-sit-in-the-lattice-of-all-topologies-on-a
## How do the compact Hausdorff topologies sit in the lattice of all topologies on a set?
This question is about the space of all topologies on a fixed set X. We may order the topologies by refinement, so that τ ≤ σ just in case every τ open set is open in σ. Equivalently, we say in this case that τ is coarser than σ, that σ is finer than τ or that σ refines τ. (See wikipedia on comparison of topologies.) The least element in this order is the indiscrete topology and the largest topology is the discrete topology.
One can show that the collection of all topologies on a fixed set is a complete lattice. In the downward direction, for example, the intersection of any collection of topologies on X remains a topology on X, and this intersection is the largest topology contained in them all. Similarly, the union of any number of topologies generates a smallest topology containing all of them (by closing under finite intersections and arbitrary unions). Thus, the collection of all topologies on X is a complete lattice.
Note that the compact topologies are closed downward in this lattice, since if a topology τ has fewer open sets than σ and σ is compact, then τ is compact. Similarly, the Hausdorff topologies are closed upward, since if τ is Hausdorff and contained in σ, then σ is Hausdorff. Thus, the compact topologies inhabit the bottom of the lattice and the Hausdorff topologies the top.
These two collections kiss each other in the compact Hausdorff topologies. Furthermore, these kissing points, the compact Hausdorff topologies, form an antichain in the lattice: no two of them are comparable. To see this, suppose that τ subset σ are both compact Hausdorff. If U is open with respect to σ, then the complement C = X - U is closed with respect to σ and hence compact with respect to σ in the subspace topology. Thus C is also compact with respect to τ in the subspace topology. Since τ is Hausdorff, this implies (an elementary exercise) that C is closed with respect to τ, and so U is in τ. So τ = σ. Thus, no two distinct compact Hausdorff topologies are comparable, and so these topologies are spread out sideways, forming an antichain of the lattice.
My first question is, do the compact Hausdorff topologies form a maximal antichain? Equivalently, is every topology comparable with a compact Hausdorff topology? [Edit: François points out an easy counterexample in the comments below.]
A weaker version of the question asks merely whether every compact topology is refined by a compact Hausdorff topology, and similarly, whether every Hausdorff topology refines a compact Hausdorff topology. Under what circumstances is a compact topology refined by a unique compact Hausdorff topology? Under what circumstances does a Hausdorff topology refine a unique compact Hausdorff topology?
What other topological features besides compactness and Hausdorffness have illuminating interaction with this lattice?
Finally, what kind of lattice properties does the lattice of topologies exhibit? For example, the lattice has atoms, since we can form the almost-indiscrete topology having just one nontrivial open set (and any nontrivial subset will do). It follows that every topology is the least upper bound of the atoms below it. The lattice of topologies is complemented. But the lattice is not distributive (when X has at least two points), since it embeds N5 by the topologies involving {x}, {y} and the topology generated by {{x},{x,y}}.
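To see that pentagon concretely, here is a small brute-force sketch (added only as an illustration; it works on a three-point set and simply closes set systems under finite unions and intersections, which suffices when X is finite):

```python
# The N5 sublattice described above, exhibited on X = {x, y, z}.
# Meet of two topologies = intersection; join = topology generated by the union.
X = frozenset({'x', 'y', 'z'})

def generate(subbase):
    """Smallest topology on X containing the given subbase (X is finite)."""
    opens = {frozenset(), X} | {frozenset(s) for s in subbase}
    changed = True
    while changed:
        changed = False
        for a in list(opens):
            for b in list(opens):
                for c in (a & b, a | b):
                    if c not in opens:
                        opens.add(c)
                        changed = True
    return frozenset(opens)

bottom = generate([])                     # indiscrete topology
tau_a  = generate([{'x'}])
tau_c  = generate([{'x'}, {'x', 'y'}])
tau_b  = generate([{'y'}])

meet = lambda s, t: frozenset(s & t)
join = lambda s, t: generate(s | t)

print(tau_a < tau_c)                                       # True: tau_a strictly coarser
print(meet(tau_b, tau_a) == meet(tau_b, tau_c) == bottom)  # True
print(join(tau_b, tau_a) == join(tau_b, tau_c))            # True
# bottom, tau_a < tau_c, tau_b and the common join form a copy of N5,
# so the lattice is not distributive.
```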
-
The maximal antichain question has a negative answer for infinite X. Split X into two infinite halves put the discrete topology on one half and the indiscrete topology on the other. – François G. Dorais♦ Feb 19 2010 at 21:52
@Qiaochu: The order topology on a successor ordinal is compact Hausdorff. – François G. Dorais♦ Feb 20 2010 at 0:17
Is it obvious that there exists a compact Hausdorff topology on every set? Yes (using the well-ordering theorem) ... the order topology on the set of ordinals up to and including a given ordinal. – Gerald Edgar Feb 20 2010 at 0:17
Steen & Seebach 99. – Gerald Edgar Feb 20 2010 at 0:21
About Qiaochu's question: is it conceivable that it is a weak AC principle that every set has a compact Hausdorff topology? – Joel David Hamkins Feb 20 2010 at 4:19
## 2 Answers
This is a community wiki of the answers in the comments.
• The compact Hausdorff topologies do not generally form a maximal antichain. If X is infinite, split X into two infinite halves and put the discrete topology on one half and the indiscrete topology on the other half. (Comment by François G. Dorais)
• There is a maximal compact topology on a countable space which is not Hausdorff. See Steen & Seebach 99. (Comment by Gerald Edgar)
• There is a minimal Hausdorff topology on a countable space which is not compact. See Steen & Seebach 100. (Comment by François G. Dorais)
• Those examples can be lifted to any cardinality space, simply by using the disjoint sum with any given compact Hausdorff space. (Comment by Gerald Edgar)
• The Axiom of Choice implies that every set admits a compact Hausdorff topology, using the order topology of a successor well-ordering of it. (Comments by François G. Dorais and Gerald Edgar)
(Feel free to edit and expand)
-
Great! This is exactly the sort of answer for which I was hoping (although I don't like the restriction to countable spaces---I wonder if it can be generalized?). Thanks François, Gerald and Qiaochu for the great information (and especially Gerald for the Steen and Seebach reference). But now I observe that since you all answered in comments, we are unable to bestow the deserved reputation on any of you.... – Joel David Hamkins Feb 20 2010 at 4:18
In fact there are spaces which are "minimal Hausdorff" -- they have no coarser Hausdorff topology -- but are not compact. It turns out that these spaces are "H-closed" (every open cover has a finite subfamily whose closures cover) and semi-regular (the collection of regular open sets forms a base). A minimal Hausdorff space is compact exactly when it is Urysohn. Spaces which have coarser minimal Hausdorff topologies are called Katětov. A "nice" example of a space which is not Katětov is the space of rational numbers $\mathbb{Q}$.
I'm not sure about compact spaces, but I suspect that a Hausdorff space has a unique coarser minimal Hausdorff topology exactly when it is H-closed. One direction I'm sure of -- the semiregularization of an H-closed space is minimal Hausdorff.
BTW, (one of) THE BOOK(s) on this topic is _Extensions and Absolutes of Hausdorff Spaces_ by Porter and Woods; however, it discusses Hausdorff spaces almost exclusively.
-
http://mathhelpforum.com/calculus/84933-squeeze-theorem-print.html
# squeeze theorem
• April 21st 2009, 07:29 PM
needhelp101
squeeze theorem
cos(n*pi/n^2)
i have to prove that this sequence is converging to 0. i am unable to see how this would converge to 0. any ideas?
• April 21st 2009, 07:38 PM
mr fantastic
Quote:
Originally Posted by needhelp101
cos(n*pi/n^2)
i have to prove that this sequence is converging to 0. i am unable to see how this would converge to 0. any ideas?
It doesn't converge to zero.
What you have simplifies to $\cos \frac{\pi}{n}$ and this converges to 1 as $n \to +\infty$.
• April 21st 2009, 07:45 PM
needhelp101
yea, i was able to understand that, but according to the question, it is supposed to converge to 0. i was able to get it to converge to 1, but i wasn't sure if there was any other way for it to converge to 0.
• April 21st 2009, 10:36 PM
redsoxfan325
Quote:
Originally Posted by needhelp101
cos(n*pi/n^2)
i have to prove that this sequence is converging to 0. i am unable to see how this would converge to 0. any ideas?
As you have it written $\lim_{n\to\infty}\cos\left(\frac{n\pi}{n^2}\right) = 1$ because as $n\to\infty, \frac{n\pi}{n^2}\to 0$ and $\cos(0)=1$.
Perhaps you meant $\lim_{n\to\infty}\frac{\cos(n\pi)}{n^2}$? This does equal zero because $\cos(n\pi)\leq 1$ for all $n$ so $\lim_{n\to\infty}\frac{\cos(n\pi)}{n^2} \leq \lim_{n\to\infty}\frac{1}{n^2} = 0$
• April 21st 2009, 11:25 PM
woof
Quote:
Originally Posted by redsoxfan325
Perhaps you meant $\lim_{n\to\infty}\frac{\cos(n\pi)}{n^2}$? This does equal zero because $\cos(n\pi)\leq 1$ for all $n$ so $\lim_{n\to\infty}\frac{\cos(n\pi)}{n^2} \leq \lim_{n\to\infty}\frac{1}{n^2} = 0$
And to add to that, now the "squeeze" part:
$-1\leq\cos(n\pi)\leq 1$ for all $n$ so $\lim_{n\to\infty}\frac{-1}{n^2} \leq\lim_{n\to\infty}\frac{\cos(n\pi)}{n^2} \leq \lim_{n\to\infty}\frac{1}{n^2}$
Now both "ends" converge to 0, squeezing the middle to zero.
• April 21st 2009, 11:32 PM
redsoxfan325
Right, I forgot to finish squeezing in my reply.
• April 21st 2009, 11:42 PM
woof
But you got the important part :)
• April 22nd 2009, 02:49 AM
needhelp101
thank u all so much. u all have been very helpful 2 me for this problem.
http://mathoverflow.net/questions/22001/references-for-modular-polynomials
## References for modular polynomials
I am teaching a graduate "classical" course on modular forms. I try to achieve the most elementary level for presenting modular polynomials. Serge Lang's "Elliptic functions" covers the topic quite well, except for some inconsistency (from my point of view) in using left/right cosets for actions of the modular group on the set of matrices of determinant $n$. I overcome this trouble by using one more simple arithmetic lemma which relates left and right cosets. I wonder whether there exists another version of the Kronecker--Weber approach for modular polynomials, or maybe even another elementary proof.
-
An elementary proof of what? – Pete L. Clark Apr 21 2010 at 2:16
Of the standard properties of modular polynomials which allow one to prove the algebraicity of the values of the modular invariant at CM points. – Wadim Zudilin Apr 21 2010 at 2:34
## 2 Answers
I'm not quite sure what exactly you're asking for, but you might find the third part of David Cox's Primes of the form x^2 + n y^2 useful for an elementary approach to modular polynomials.
-
Thanks, Alison! I just got the book. It seems that David Cox uses exactly the same approach but takes more care than Lang. – Wadim Zudilin Apr 21 2010 at 2:33
Agreed. This is the most modern reference I know of for the "classical" approach to this material. – Pete L. Clark Apr 21 2010 at 2:45
Although the exposition is slightly tied to considering modular forms for $\Gamma_0(m)$ as well (something I do not cover in my very short course) and is too sketchy at some point, I would agree about its high quality and self-consistence. But Lang's proof (after fixing a minor piece) is simpler. – Wadim Zudilin May 3 2010 at 5:17
Hi,
I find Silverman's exposition in his "Advanced Topics in the Arithmetic of Elliptic Curves" very clear, and concise. Look at the section on the integrality of the j-invariant, in particular Theorem 6.1 of Chapter II, Section 6, pages 140-151. You can find three proofs of the integrality of the j-invariant and, more concretely, you should have a look at the "Analytic proof of Theorem 6.1", starting in page 143, which is the "classical" one I believe you are interested in (it is also a proof using matrices of determinant n, but the exposition is great).
-
It's really a nice and self-contained exposition of the same classical proof, with "disadvantage" of starting with isogenies of elliptic curves. Since in my course the latter appear later, I cannot use it as reference for students. – Wadim Zudilin May 3 2010 at 5:20
http://mathhelpforum.com/differential-geometry/187818-curve-fitting-nonlinear-polynomial.html
# Thread:
1. ## Curve fitting nonlinear polynomial
Hello,
Sorry to say but I studied this in class 20 years ago. Now it is gone. So I need help here.
I have 5-10 data points that I want to approximate a nonlinear polynomial to. I need to write the solution in excel, in the visual basic editor, meaning I want to use an iterative approach and not an "inverse matrix" solution. Can someone please help me here.
Ex)
x=2012, 2013, 2014, 2015, 2016
y= 5937, 5343, 4310, 3887, 0
Preferably, I want to have zero error at the boundaries but if the difference between the total least square error for "zero boundary solution" and "without zero boundary error constraint" is too big I can skip the boundary conditions.
Would appreciate it if the solution is written in a form that makes it easy to see how I can code it in VB, flow chart maybe.....
Thanks in advance from an "old" student that has not been able to remember this knowledge 20 years later.....
/sbe
2. ## Re: Curve fitting nonlinear polynomial
Originally Posted by sbe70
Hello,
Sorry to say but I studied this in class 20 years ago. Now it is gone. So I need help here.
I have 5-10 data points that I want to approximate a nonlinear polynomial to. I need to write the solution in excel, in the visual basic editor, meaning I want to use an iterative approach and not an "inverse matrix" solution. Can someone please help me here.
Ex)
x=2012, 2013, 2014, 2015, 2016
y= 5937, 5343, 4310, 3887, 0
Preferably, I want to have zero error at the boundaries but if the difference between the total least square error for "zero boundary solution" and "without zero boundary error constraint" is too big I can skip the boundary conditions.
Would appreciate it if the solution is written in a form that makes it easy to see how I can code it in VB, flow chart maybe.....
Thanks in advance from an "old" student that has not been able to remember this knowledge 20 years later.....
/sbe
That depends... Do you want a curve that fits all the points exactly? If so you will need a quartic. Otherwise you can do a least squares quadratic or cubic...
3. ## Re: Curve fitting nonlinear polynomial
Hi,
No, I will seldom get an exact match since the data points will vary a lot. I want to be able to change the order of the polynomial if necessary.
I have been trying to use least square approach but everything on the internet has inverse matrix examples. I can not use that approach since the inverse sometime does not work, the matrix has no inverse.
Would be very grateful if you could help out. It is urgent, so the stress level is not good
/sbe
4. ## Re: Curve fitting nonlinear polynomial
Originally Posted by sbe70
Hi,
No, I will seldom get an exact match since the data points will vary a lot. I want to be able to change the order of the polynomial if necessary.
I have been trying to use least square approach but everything on the internet has inverse matrix examples. I can not use that approach since the inverse sometime does not work, the matrix has no inverse.
Would be very grateful if you could help out. It is urgent, so the stress level is not good
/sbe
Maybe the reason your system of equations isn't giving you an inverse matrix is because the matrix isn't square when you aren't fitting the polynomial exactly. The first step to getting a least-square solution is to first premultiply both sides by the transpose. This creates a square matrix, which will probably have an inverse.
5. ## Re: Curve fitting nonlinear polynomial
As I understand it there does not always exist an inverse even for a square matrix. If I am wrong I need to find a way to code the inverse of a matrix. Before that, I need to determine the polynomial order based on the number of data points. How do I do that?
6. ## Re: Curve fitting nonlinear polynomial
Originally Posted by sbe70
As I understand it there does not always exist an inverse even for a square matrix. If I am wrong I need to find a way to code the inverse of a matrix. Before that, I need to determine the polynomial order based on the number of data points. How do I do that?
1. CHOOSE the order of the polynomial you want. In this case, you can only have up to a quartic.
2. Write your system of equations that you get from the data points.
3. Write this system in matrix form $\displaystyle \mathbf{A}\mathbf{x} = \mathbf{b}$.
4. Premultiply both sides by the transpose of $\displaystyle \mathbf{A}$, in other words, you should get $\displaystyle \mathbf{A}^T\mathbf{A}\mathbf{x} = \mathbf{A}^T\mathbf{b}$. The matrix $\displaystyle \mathbf{A}^T\mathbf{A}$ is now square.
5. IF the inverse of $\displaystyle \mathbf{A}^T\mathbf{A}$ exists, premultiply both sides of the equation by it.
In other words $\displaystyle \mathbf{x} = \left(\mathbf{A}^T\mathbf{A}\right)^{-1}\mathbf{A}^T\mathbf{b}$.
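For concreteness, here is the same recipe in code (an added illustration, not part of the original reply, using the data from the first post and a quadratic chosen arbitrarily):

```python
# Least-squares polynomial fit via the normal equations A^T A c = A^T b,
# using the data from post #1; the quadratic model is an arbitrary example.
import numpy as np

x = np.array([2012, 2013, 2014, 2015, 2016], dtype=float)
y = np.array([5937, 5343, 4310, 3887, 0], dtype=float)

A = np.vander(x - 2012, 3, increasing=True)   # columns 1, x, x^2 (x shifted for conditioning)
ATA, ATb = A.T @ A, A.T @ y                   # premultiply both sides by the transpose
coeffs = np.linalg.solve(ATA, ATb)            # solve (A^T A) c = A^T b

print(coeffs)
print(A @ coeffs - y)                         # residuals of the least-squares fit
# If A^T A ever turns out singular, np.linalg.lstsq(A, y, rcond=None) or
# np.linalg.pinv(A) @ y still returns a (minimum-norm) least-squares solution.
```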
7. ## Re: Curve fitting nonlinear polynomial
I have a problem if the inverse does not exist. What do I do then? That is why I prefer a numerical method before a matrix approach. Do you have an idea for an numerical approach?
8. ## Re: Curve fitting nonlinear polynomial
Originally Posted by sbe70
I have a problem if the inverse does not exist. What do I do then? That is why I prefer a numerical method before a matrix approach. Do you have an idea for an numerical approach?
Chances are the inverse will exist.
9. ## Re: Curve fitting nonlinear polynomial
Hello Prove It, and sbe70,
I have a similar problem. However I want to curve fit 3 points exactly resulting in some y=f(x) formula I can use to get any other point.
My sample data is x=1,120,2304 and y=1,145,210. I am actually more concerned with the middle data, and less so with the extremes.
I'm looking for a curve something like y = c * ln(x) + Y but that is just to get an idea: it does not have to involve ln(x).
Apparently Excel uses the least squares method for curve fitting, but when I selected a 2nd degree polynomial form for these 3 points I got the St. Louis Arch !
I guess then my last criterion would be a curve fitting method which also results in some minimum area under the curve.
Thanks, Mike
http://cs.stackexchange.com/questions/9380/why-is-push-back-in-c-vectors-constant-amortized
# Why is push_back in C++ vectors constant amortized?
I am learning C++ and noticed that the running time for the push_back function for vectors is constant "amortized." The documentation further notes that "If a reallocation happens, the reallocation is itself up to linear in the entire size."
Shouldn't this mean the push_back function is $O(n)$, where $n$ is the length of the vector? After all, we are interested in worst case analysis, right?
I guess, crucially, I don't understand how the adjective "amortized" changes the running time.
-
With a RAM machine, allocating $n$ bytes of memory is not an $O(n)$ operation -- it is considered pretty much constant time. – usul Feb 1 at 14:09
## 2 Answers
The important word here is "amortized". Amortized analysis is an analysis technique that examines a sequence of $n$ operations. If the whole sequence runs in $T(n)$ time, then each operation in the sequence runs in $T(n)/n$. The idea is that while a few operations in the sequence might be costly, they can't happen often enough to weigh down the program. It's important to note that this is different from average case analysis over some input distribution or randomized analysis. Amortized analysis establishes a worst case bound for the performance of an algorithm irrespective of the inputs. It's most commonly used to analyse data structures, which have a persistent state throughout the program.
One of the most common examples given is the analysis of a stack with a multipop operation that pops $k$ elements. A naive analysis of multipop would say that in the worst case multipop must take $O(n)$ time since it might have to pop off all the elements of the stack. However, if you look at a sequence of operations, you'll notice that the number of pops can not exceed the number of pushes. Thus over any sequence of $n$ operations the number of pops can't exceed $O(n)$, and so multipop runs in $O(1)$ amortized time even though occasionally a single call might take more time.
Now how does this relate to C++ vectors? Vectors are implemented with arrays so to increase the size of a vector you must reallocate memory and copy the whole array over. Obviously we wouldn't want to do this very often. So if you perform a push_back operation and the vector needs to allocate more space, it will increase the size by a factor $m$. Now this takes more memory, which you may not use in full, but the next few push_back operations all run in constant time.
Now if we do the amortized analysis of the push_back operation (which I found here) we'll find that it runs in constant amortized time. Suppose you have $n$ items and your multiplication factor is $m$. Then the number of relocations is roughly $\log_m(n)$. The $i$th reallocation will cost proportional to $m^i$, about the size of the current array. Thus the total time for $n$ push back is $\sum_{i=1}^{\log_m(n)}m^i \approx \frac{nm}{m-1}$, since it's a geometric series. Divide this by $n$ operations and we get that each operation takes $\frac{m}{m-1}$, a constant. Lastly you have to be careful about choosing your factor $m$. If it's too close to $1$ then this constant gets too large for practical applications, but if $m$ is too large, say 2, then you start wasting a lot of memory. The ideal growth rate varies by application, but I think some implementations use $1.5$.
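To make the bookkeeping concrete, here is a small counting simulation (a Python sketch added purely for illustration; the growth factor 2 is just one choice): it tallies how many element copies a geometrically growing array performs, and the per-push average stays bounded.

```python
# Count element copies made by a dynamic array that grows geometrically.
# Illustrative sketch: total copies stay proportional to n, so push_back is O(1) amortized.
def total_copies(n, growth=2):
    size, capacity, copies = 0, 1, 0
    for _ in range(n):
        if size == capacity:      # full: allocate a bigger block, copy every element over
            copies += size
            capacity *= growth
        size += 1
    return copies

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, total_copies(n) / n)  # bounded by a small constant; it does not grow with n
```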
-
Although @Marc has given (what I think is) an excellent analysis, some people might prefer to consider things from a slightly different angle.
One is to consider a slightly different way of doing a reallocation. Instead of copying all the elements from the old storage to the new storage immediately, consider copying only one element at a time -- i.e., each time you do a push_back, it adds the new element to the new space, and copies exactly one existing element from the old space to the new space. Assuming a growth factor of 2, it's pretty obvious that when the new space is full, we'd have finished copying all the elements from the old space to the new space, and each push_back would have been exactly constant time. At that point, we'd discard the old space, allocate a new block of memory that was twice as large again, and repeat the process.
Pretty clearly, we can continue this indefinitely (or as long as there's memory available, anyway) and every push_back would involve adding one new element and copying one old element.
A typical implementation still has exactly the same number of copies -- but instead of doing the copies one at a time, it copies all the existing elements at once. On one hand, you're right: that does mean that if you look at individual invocations of push_back, some of them will be substantially slower than others. If we look at a long term average, however, the amount of copying done per invocation of push_back remains constant, regardless of the size of the vector.
Although it's irrelevant to the computational complexity, I think it's worth pointing out why it's advantageous to do things as they do, instead of copying one element per push_back, so the time per push_back remains constant. There are at least three reasons to consider.
The first is simply memory availability. The old memory can be freed for other uses only after the copying is finished. If you only copied one item at a time, the old block of memory would remain allocated much longer. In fact, you'd have one old block and one new block allocated essentially all the time. If you decided on a growth factor smaller than two (which you usually want) you'd need even more memory allocated all the time.
Second, if you only copied one old element at a time, indexing into the array would be a little more tricky -- each indexing operation would need to figure out whether the element at the given index was currently in the old block of memory or the new one. That's not terribly complex by any means, but for an elementary operation like indexing into an array, almost any slow-down could be significant.
Third, by copying all at once, you take much better advantage of caching. Copying all at once, you can expect both the source and destination to be in the cache in most cases, so the cost of a cache miss is amortized over the number of elements that will fit in a cache line. If you copy one element at a time, you might easily have a cache miss for every element you copy. That only changes the constant factor, not the complexity, but it can still be fairly significant -- for a typical machine, you could easily expect a factor of 10 to 20.
It's probably also worth considering the other direction for a moment: if you were designing a system with real-time requirements, it might well make sense to copy only one element at a time instead of all at once. Although overall speed might (or might not) be lower, you'd still have a hard upper bound on the time taken for a single execution of push_back -- presuming you had a real-time allocator (though of course, many real-time systems simply prohibit dynamic allocation of memory at all, at least in portions with real-time requirements).
-
http://physics.stackexchange.com/questions/14466/how-do-leptons-arise-from-lambda-decay
# How do Leptons arise from Lambda decay?
I have a question for an assignment:
Use your understanding of the quark model of hadrons and the boson model of the weak nuclear interaction to explain how leptons can arise from lambda decay, especially when there is an emission of a muon and a muon antineutrino.
I'm a bit lost as to how to answer this; I'm not looking for the full answer (it is for an assessed assignment so the work needs to be original) but if anyone has any pointers as to where I should be starting I would be very grateful!
-
The big hint here is "Use [...] the boson model of the weak nuclear interaction". Ask yourself what is special about the weak interaction as compared to the strong interaction? – dmckee♦ Sep 8 '11 at 18:32
## 2 Answers
-
Thanks for both those comments, I think I've worked it out! – Richard Sep 13 '11 at 16:16
$$\Lambda \to p + \pi^-$$ $$\pi^- \to \mu^- + \bar\nu_\mu$$ therefore, $$\Lambda \to p + \mu^- + \bar\nu_\mu$$
-
http://mathoverflow.net/questions/41809/flatly-compactifiable-morphisms/92744
## Flatly compactifiable morphisms
Let $f:U \to S$ be a flat morphism. Let us say that $f$ is flatly compactifiable if there exists a proper morphism $\bar{f}:X \to S$ and a closed subscheme $Z \subset X$ such that
1) $U = X \setminus Z$ and $f = \bar{f}_{|U}$;
2) $\bar{f}$ is proper;
3) BOTH $X$ and $Z$ are flat over $S$.
My question is whether this notion already appeared in the literature and what is the correct name for it?
-
1
I'm also interested in the same question with 3) replaced by: X is smooth over S and (Z,X) is strict relative normal crossings over S. – Dustin Clausen Oct 11 2010 at 17:09
1
I suppose $f$ is finitely presented; heck, let's assume $S$ is noetherian. Now if $Z$ is to be flat then since it is also proper we see that its image in $S$ is both open and closed. So if we also assume $S$ is connected then $Z$ meets all fibers. That makes things sound problematic if $U$ is proper over a dense open in $S$ but not proper over $S$. Perhaps you have a more specific situation in mind? If so, please say more about that. – BCnrd Oct 11 2010 at 17:12
Brian, I think your comment is relevant to the problem "When is such a property likely to hold?" but this is not Sasha's question. – Laurent Moret-Bailly Oct 11 2010 at 18:09
Dear BCnrd, what do you mean by "things sound problematic"? Certainly, there are plenty of morphisms which do not admit such a compactification. For example $x:(A^2 \setminus (0,0)) \to A^1$ doesn't. My question was, what is the correct name for this notion and whether one can find something about it in the literature. – Sasha Oct 11 2010 at 18:19
Dear Sasha: Whoops, I misunderstood the question as to be asking for further hypotheses on $f$ which would suffice for the existence of such an $\overline{f}$, and so I was just mentioning a trivial example where the answer is negative and asking if you had specific classes of examples in mind for which you'd want an affirmative answer (so as to better guide appropriate kinds of hypotheses to seek). But this is not relevant to the question your were actually asking; sorry about that. – BCnrd Oct 11 2010 at 19:23
## 1 Answer
In general you cannot expect that $f$ will be flat on $X-U$, but you do have a locally finite stratification of $X$ for which $U$ is the dense stratum and such that $f$ is flat over each stratum. This notion is called "platification" (in French), and it is dealt with in great detail in the paper of Gruson and Raynaud, "Techniques de platification d'un module".
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567424654960632, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/1425/what-other-one-way-functions-are-used-in-cryptosystems/1430
|
# What other one-way functions are used in cryptosystems?
For RSA and El Gamal (and most other public key cryptosystems), one of the key ideas is that factoring and finding discrete logarithms are hard. There are other systems that rely on certain properties of lattices.
What are the other one-way-ish functions that have cryptosystems designed around them?
-
## 3 Answers
The Merkle–Hellman knapsack cryptosystem was based on a variation of the subset sum problem. (It was broken by Adi Shamir a few years after it was developed.)
Given a set of numbers $A$ and a number $b$, find a subset of $A$ that sums to $b$.
The cryptosystem relies on the fact that in this form of the subset sum problem if the set $A$ is superincreasing (each element of the set is greater than the sum of all the previous elements), the problem is solvable in polynomial time. Also, you can transform the superincreasing set $A$ into a non-superincreasing set $B$ using a multiplier $r$ and a modulus $q$.
Solving the subset sum problem with a non-superincreasing set ($B$) is NP-complete, so you can use $B$ as a public key to encrypt messages, then use $A$, $r$, and $q$ as a private key to decrypt them.
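To make the easy direction concrete, here is a minimal Python sketch (an editorial addition, not taken from the cryptosystem's specification) of the private-key decryption step: with a superincreasing key, the subset summing to a given target is recovered greedily in linear time. The key and target below are made-up toy values, far too small to mean anything cryptographically.

```python
def solve_superincreasing(weights, target):
    """Recover the subset of a superincreasing sequence summing to target.

    weights must be sorted ascending and superincreasing:
    each element exceeds the sum of all previous ones.
    Returns a list of 0/1 flags, or None if no subset exists.
    """
    flags = [0] * len(weights)
    remaining = target
    # Walk from the largest weight down: it must be used iff it fits.
    for i in range(len(weights) - 1, -1, -1):
        if weights[i] <= remaining:
            flags[i] = 1
            remaining -= weights[i]
    return flags if remaining == 0 else None


# Toy example (illustrative numbers only):
private_key = [2, 3, 7, 14, 30, 57, 120, 251]     # superincreasing
target = 2 + 7 + 120 + 251                        # sum of a hidden subset
print(solve_superincreasing(private_key, target)) # -> [1, 0, 1, 0, 0, 0, 1, 1]
```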
-
+1 for that answer. However broken cryptosystems are of less interest than the other kind, and knapsack-based cryptosystems have a bad reputation (since that episode, and perhaps some sequels). – fgrieu Dec 10 '11 at 17:02
Many lattice schemes are based on the shortest vector problem and its variants. Elliptic curve cryptosystems are based on something akin to discrete logarithms, but different in the details. Some authentication schemes like HB are based on learning parity with noise, and other systems are based on the more general learning with errors. Subset sum was mentioned. The hardness of decoding general linear codes (attacked via information set decoding) is the basis of McEliece. The conjugacy search problem is ostensibly the basis of some cryptography in braid groups.
-
I would just like to contribute in light of what has been said above. There are a few cryptosystems (only signature schemes, as far as my knowledge goes) that are based on the hardness of solving a system of multivariate polynomial equations. Solving a system of multivariate polynomial equations is proved to be $\mathsf{NP}$-hard, and just like the "hard" problems on lattices, they have resisted serious quantum attacks. Constructing a secure encryption scheme based on multivariate polynomials is still an open problem.
People have also used abstract concepts like fractals to construct cryptosystems based on Mandelbrot sets, but for some reason it never attracted much attention, though it is considered to be secure against quantum attack.
A recent work constructing public-key primitives that are as secure as subset sum was proved in TCC 2010 by Lyubashevsky et al. It is a good paper to read, as it gives a very good description of the relation between the hardness of some lattice-based problems and subset sum. So, in light of recent works, you can count subset sum as another problem on which cryptographic primitives are based.
Frankly, this list can go on forever, but I think these are the few worth mentioning in addition to the ones that have already been mentioned, especially if you are interested in post-quantum cryptography.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9687843322753906, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Real_numbers
|
# Real number
(Redirected from Real numbers)
For the real numbers used in descriptive set theory, see Baire space (set theory). For the computing datatype, see Floating-point number.
A symbol of the set of real numbers (ℝ)
Real numbers can be thought of as points on an infinitely long number line.
In mathematics, a real number is a value that represents a quantity along a continuous line. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers such as √2 (1.41421356… the square root of two, an irrational algebraic number) and π (3.14159265…, a transcendental number). Real numbers can be thought of as points on an infinitely long line called the number line or real line, where the points corresponding to integers are equally spaced. Any real number can be determined by a possibly infinite decimal representation such as that of 8.632, where each consecutive digit is measured in units one tenth the size of the previous one. The real line can be thought of as a part of the complex plane, and correspondingly, complex numbers include real numbers as a special case.
These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics. The discovery of a suitably rigorous definition of the real numbers — indeed, the realization that a better definition was needed — was one of the most important developments of 19th century mathematics. The currently standard axiomatic definition is that real numbers form the unique Archimedean complete totally ordered field (R,+,·,<), up to isomorphism,[1] whereas popular constructive definitions of real numbers include declaring them as equivalence classes of Cauchy sequences of rational numbers, Dedekind cuts, or certain infinite "decimal representations", together with precise interpretations for the arithmetic operations and the order relation. These definitions are equivalent in the realm of classical mathematics.
The reals are uncountable, that is, while both the set of all natural numbers and the set of all real numbers are infinite sets, there can be no one-to-one function from the real numbers to the natural numbers: the cardinality of the set of all real numbers (denoted $\mathfrak c$ and called cardinality of the continuum) is strictly greater than the cardinality of the set of all natural numbers (denoted $\aleph_0$). The statement that there is no subset of the reals with cardinality strictly greater than $\aleph_0$ and strictly smaller than $\mathfrak c$ is known as the continuum hypothesis. It is known to be neither provable nor refutable using the axioms of Zermelo–Fraenkel set theory, the standard foundation of modern mathematics, provided ZF set theory is consistent.
## Basic properties
A real number may be either rational or irrational; either algebraic or transcendental; and either positive, negative, or zero. Real numbers are used to measure continuous quantities. They may be expressed by decimal representations that have an infinite sequence of digits to the right of the decimal point; these are often represented in the same form as 324.823122147… The ellipsis (three dots) indicates that there would still be more digits to come.
More formally, real numbers have the two basic properties of being an ordered field, and having the least upper bound property. The first says that real numbers comprise a field, with addition and multiplication as well as division by nonzero numbers, which can be totally ordered on a number line in a way compatible with addition and multiplication. The second says that if a nonempty set of real numbers has an upper bound, then it has a real least upper bound. The second condition distinguishes the real numbers from the rational numbers: for example, the set of rational numbers whose square is less than 2 is a set with an upper bound (e.g. 1.5) but no (rational) least upper bound: hence the rational numbers do not satisfy the least upper bound property.
## In physics
In the physical sciences, most physical constants such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact, the fundamental physical theories such as classical mechanics, electromagnetism, quantum mechanics, general relativity and the standard model are described using mathematical structures, typically smooth manifolds or Hilbert spaces, that are based on the real numbers although actual measurements of physical quantities are of finite accuracy and precision.
In some recent developments of theoretical physics stemming from the holographic principle, the Universe is seen fundamentally as an information store, essentially zeroes and ones, organized in much less geometrical fashion and manifesting itself as space-time and particle fields only on a more superficial level. This approach removes the real number system from its foundational role in physics and even prohibits the existence of infinite precision real numbers in the physical universe by considerations based on the Bekenstein bound.[2]
## In computation
Computer arithmetic cannot directly operate on real numbers, but only on a finite subset of rational numbers, limited by the number of bits used to store them, whether as floating-point numbers or arbitrary precision numbers. However, computer algebra systems can operate on irrational quantities exactly by manipulating formulas for them (such as $\textstyle\sqrt 2$, $\textstyle\arcsin \left({{2}\over{23}}\right)$, or $\textstyle\int_{0}^{1} {x^{x}}\;dx$) rather than their rational or decimal approximation;[3] however, it is not in general possible to determine whether two such expressions are equal (the constant problem).
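As a small illustration of that contrast (an editorial example, not part of the article), binary floating point stores only a nearby rational, while a computer algebra system such as SymPy keeps $\sqrt 2$ as an exact symbolic object:

```python
from fractions import Fraction

import sympy

# Floating-point numbers form a finite set of rationals, so familiar
# identities can fail after rounding:
print(0.1 + 0.2 == 0.3)      # False
print(Fraction(0.1))         # the exact rational that is actually stored

# A computer algebra system keeps sqrt(2) as an exact symbolic object
# instead of rounding it to a decimal approximation:
r = sympy.sqrt(2)
print(r**2)                  # 2, exactly
print(r.evalf(30))           # a 30-digit decimal approximation on demand
```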
A real number is called computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms,[4] but an uncountable number of reals, almost all real numbers fail to be computable. Moreover, the equality of two computable numbers is an undecidable problem. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable.
## Notation
Mathematicians use the symbol R (or alternatively, $\mathbb{R}$, the letter "R" in blackboard bold, Unicode ℝ – U+211D, named DOUBLE-STRUCK CAPITAL R) to represent the set of all real numbers (as this set is naturally endowed with a structure of field, the expression field of the real numbers is more frequently used than set of all real numbers). The notation Rn refers to the Cartesian product of n copies of R, which is an n-dimensional vector space over the field of the real numbers; this vector space may be identified to the n-dimensional space of Euclidean geometry as soon as a coordinate system has been chosen in the latter. For example, a value from R3 consists of three real numbers and specifies the coordinates of a point in 3-dimensional space.
In mathematics, real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field). For example real matrix, real polynomial and real Lie algebra. As a substantive, the term is used almost strictly in reference to the real numbers themselves (e.g., The "set of all reals").
## History
Simple fractions have been used by the Egyptians around 1000 BC; the Vedic "Sulba Sutras" ("The rules of chords") in, ca. 600 BC, include what may be the first 'use' of irrational numbers. The concept of irrationality was implicitly accepted by early Indian mathematicians since Manava (c. 750–690 BC), who were aware that the square roots of certain numbers such as 2 and 61 could not be exactly determined.[5] Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2.
The Middle Ages brought the acceptance of zero, negative, integral, and fractional numbers, first by Indian and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects,[6] which was made possible by the development of algebra. Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[7] The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation, often in the form of square roots, cube roots and fourth roots.[8]
In the 16th century, Simon Stevin created the basis for modern decimal notation, and insisted that there is no difference between rational and irrational numbers in this regard.
In the 17th century, Descartes introduced the term "real" to describe roots of a polynomial, distinguishing them from "imaginary" ones.
In the 18th and 19th centuries there was much work on irrational and transcendental numbers. Johann Heinrich Lambert (1761) gave the first flawed proof that π cannot be rational;[citation needed] Adrien-Marie Legendre (1794) completed the proof, and showed that π is not the square root of a rational number. Paolo Ruffini (1799) and Niels Henrik Abel (1824) both constructed proofs of the Abel–Ruffini theorem: that the general quintic or higher equations cannot be solved by a general formula involving only arithmetical operations and roots.
Évariste Galois (1832) developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. Joseph Liouville (1840) showed that neither e nor e² can be a root of an integer quadratic equation, and then established existence of transcendental numbers, the proof being subsequently displaced by Georg Cantor (1873). Charles Hermite (1873) first proved that e is transcendental, and Ferdinand von Lindemann (1882), showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass (1885), still further by David Hilbert (1893), and has finally been made elementary by Adolf Hurwitz and Paul Gordan.
The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871. In 1874 he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published in 1891. See Cantor's first uncountability proof.
## Definition
Main article: Construction of the real numbers
The real number system $(\mathbb R,+,\cdot,<)$ can be defined axiomatically up to an isomorphism, which is described below. There are also many ways to construct "the" real number system, for example, starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of their Cauchy sequences or as Dedekind cuts, which are certain subsets of rational numbers. Another possibility is to start from some rigorous axiomatization of Euclidean geometry (Hilbert, Tarski etc.) and then define the real number system geometrically. From the structuralist point of view all these constructions are on equal footing.
### Axiomatic approach
Let ℝ denote the set of all real numbers. Then:
• The set ℝ is a field, meaning that addition and multiplication are defined and have the usual properties.
• The field ℝ is ordered, meaning that there is a total order ≥ such that, for all real numbers x, y and z:
• if x ≥ y then x + z ≥ y + z;
• if x ≥ 0 and y ≥ 0 then xy ≥ 0.
• The order is Dedekind-complete; that is, every non-empty subset S of ℝ with an upper bound in ℝ has a least upper bound (also called supremum) in ℝ.
The last property is what differentiates the reals from the rationals. For example, the set of rationals with square less than 2 has a rational upper bound (e.g., 1.5) but no rational least upper bound, because the square root of 2 is not rational.
The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fields ℝ1 and ℝ2, there exists a unique field isomorphism from ℝ1 to ℝ2, allowing us to think of them as essentially the same mathematical object.
For another axiomatization of ℝ, see Tarski's axiomatization of the reals.
### Construction from the rational numbers
The real numbers can be constructed as a completion of the rational numbers in such a way that a sequence defined by a decimal or binary expansion like (3, 3.1, 3.14, 3.141, 3.1415, …) converges to a unique real number. For details and other constructions of real numbers, see construction of the real numbers.
## Properties
### Completeness
A main reason for using real numbers is that the reals contain all limits. More precisely, every sequence of real numbers having the property that consecutive terms of the sequence become arbitrarily close to each other necessarily has the property that after some term in the sequence the remaining terms are arbitrarily close to some specific real number. In mathematical terminology, this means that the reals are complete (in the sense of metric spaces or uniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section). This is formally defined in the following way:
A sequence $(x_n)$ of real numbers is called a Cauchy sequence if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance $|x_n - x_m|$ is less than ε for all n and m that are both greater than N. In other words, a sequence is a Cauchy sequence if its elements $x_n$ eventually come and remain arbitrarily close to each other.
A sequence $(x_n)$ converges to the limit x if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance $|x_n - x|$ is less than ε provided that n is greater than N. In other words, a sequence has limit x if its elements eventually come and remain arbitrarily close to x.
Notice that every convergent sequence is a Cauchy sequence. The converse is also true:
Every Cauchy sequence of real numbers is convergent to a real number.
That is, the reals are complete.
Note that the rationals are not complete. For example, the sequence (1, 1.4, 1.41, 1.414, 1.4142, 1.41421, …), where each term adds a digit of the decimal expansion of the positive square root of 2, is Cauchy but it does not converge to a rational number. (In the real numbers, in contrast, it converges to the positive square root of 2.)
The existence of limits of Cauchy sequences is what makes calculus work and is of great practical use. The standard numerical test to determine if a sequence has a limit is to test if it is a Cauchy sequence, as the limit is typically not known in advance.
For example, the standard series of the exponential function
$\mathrm{e}^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$
converges to a real number because for every x the sums
$\sum_{n=N}^{M} \frac{x^n}{n!}$
can be made arbitrarily small by choosing N sufficiently large. This proves that the sequence is Cauchy, so we know that the sequence converges even if the limit is not known in advance.
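A quick numerical check of this criterion (an editorial illustration): for a fixed x, the tail sums $\sum_{n=N}^{M} \frac{x^n}{n!}$ shrink rapidly as N grows, which is exactly the Cauchy behaviour used above.

```python
from math import factorial

def tail_sum(x, N, M):
    """Partial sum of the exponential series from n = N to n = M."""
    return sum(x**n / factorial(n) for n in range(N, M + 1))

x = 3.0
for N in (5, 10, 15, 20):
    print(N, tail_sum(x, N, N + 50))
# The tails shrink toward 0 as N grows, so the partial sums form a Cauchy
# sequence; the series converges (here to e**3, about 20.0855).
```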
### "The complete ordered field"
The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways.
First, an order can be lattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have no largest element (given any element z, z + 1 is larger), so this is not the sense that is meant.
Additionally, an order can be Dedekind-complete, as defined in the section Axioms. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way.
These two notions of completeness ignore the field structure. However, an ordered group (in this case, the additive group of the field) defines a uniform structure, and uniform structures have a notion of completeness (topology); the description in the section Completeness above is a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion for metric spaces, since the definition of metric space relies on already having a characterisation of the real numbers.) It is not true that R is the only uniformly complete ordered field, but it is the only uniformly complete Archimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Every uniformly complete Archimedean field must also be Dedekind-complete (and vice versa, of course), justifying using "the" in the phrase "the complete Archimedean field". This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way.
But the original use of the phrase "complete Archimedean field" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of R. Thus R is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals from surreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield.
### Advanced properties
See also: Real line
The reals are uncountable, that is, there are strictly more real numbers than natural numbers, even though both sets are infinite. In fact, the cardinality of the reals equals that of the set of subsets (i.e., the power set) of the natural numbers, and Cantor's diagonal argument states that the latter set's cardinality is strictly bigger than the cardinality of N. Since only a countable set of real numbers can be algebraic, almost all real numbers are transcendental. The non-existence of a subset of the reals with cardinality strictly between that of the integers and the reals is known as the continuum hypothesis. The continuum hypothesis can neither be proved nor be disproved; it is independent from the axioms of set theory.
As a topological space, the real numbers are separable. This is because the set of rationals, which is countable, is dense in the real numbers. The irrational numbers are also dense in the real numbers, however they are uncountable and have the same cardinality as the reals.
The real numbers form a metric space: the distance between x and y is defined as the absolute value |x − y|. By virtue of being a totally ordered set, they also carry an order topology; the topology arising from the metric and the one arising from the order are identical, but yield different presentations for the topology – in the order topology as intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals are a contractible (hence connected and simply connected), separable and complete metric space of Hausdorff dimension 1. The real numbers are locally compact but not compact. There are various properties that uniquely specify them; for instance, all unbounded, connected, and separable order topologies are necessarily homeomorphic to the reals.
Every nonnegative real number has a square root in R, although no negative number does. This shows that the order on R is determined by its algebraic structure. Also, every polynomial of odd degree admits at least one real root: these two properties make R the premier example of a real closed field. Proving this is the first half of one proof of the fundamental theorem of algebra.
The reals carry a canonical measure, the Lebesgue measure, which is the Haar measure on their structure as a topological group normalised such that the unit interval [0,1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g. Vitali sets.
The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals with first-order logic alone: the Löwenheim–Skolem theorem implies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first order logic as the real numbers themselves. The set of hyperreal numbers satisfies the same first order sentences as R. Ordered fields that satisfy the same first-order sentences as R are called nonstandard models of R. This is what makes nonstandard analysis work; by proving a first-order statement in some nonstandard model (which may be easier than proving it in R), we know that the same statement must also be true of R.
The field R of real numbers is an extension field of the field Q of rational numbers, and R can therefore be seen as a vector space over Q. Zermelo–Fraenkel set theory with the axiom of choice guarantees the existence of a basis of this vector space: there exists a set B of real numbers such that every real number can be written uniquely as a finite linear combination of elements of this set, using rational coefficients only, and such that no element of B is a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a base has never been explicitly described.
The well-ordering theorem implies that the real numbers can be well-ordered if the axiom of choice is assumed: there exists a total order on R with the property that every non-empty subset of R has a least element in this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. an open interval does not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it cannot be explicitly described.
## Generalizations and extensions
The real numbers can be generalized and extended in several different directions:
• The complex numbers contain solutions to all polynomial equations and hence are an algebraically closed field unlike the real numbers. However, the complex numbers are not an ordered field.
• The affinely extended real number system adds two elements +∞ and −∞. It is a compact space. It is no longer a field, not even an additive group, but it still has a total order; moreover, it is a complete lattice.
• The real projective line adds only one value ∞. It is also a compact space. Again, it is no longer a field, not even an additive group. However, it allows division of a non-zero element by zero. It is not ordered anymore.
• The long real line pastes together ℵ1* + ℵ1 copies of the real line plus a single point (here ℵ1* denotes the reversed ordering of ℵ1) to create an ordered set that is "locally" identical to the real numbers, but somehow longer; for instance, there is an order-preserving embedding of ℵ1 in the long real line but not in the real numbers. The long real line is the largest ordered set that is complete and locally Archimedean. As with the previous two examples, this set is no longer a field or additive group.
• Ordered fields extending the reals are the hyperreal numbers and the surreal numbers; both of them contain infinitesimal and infinitely large numbers and are therefore Non-Archimedean ordered fields.
• Self-adjoint operators on a Hilbert space (for example, self-adjoint square complex matrices) generalize the reals in many respects: they can be ordered (though not totally ordered), they are complete, all their eigenvalues are real and they form a real associative algebra. Positive-definite operators correspond to the positive reals and normal operators correspond to the complex numbers.
## "Reals" in set theory
In set theory, specifically descriptive set theory, the Baire space is used as a surrogate for the real numbers since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals".
## Real numbers and logic
The real numbers are most often formalized using the Zermelo–Fraenkel axiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied in reverse mathematics and in constructive mathematics.[9]
Abraham Robinson's theory of nonstandard or hyperreal numbers extends the set of the real numbers by infinitesimal numbers, which allows building infinitesimal calculus in a way closer to the usual intuition of the notion of limit. Edward Nelson's internal set theory is a non-Zermelo–Fraenkel set theory that considers non-standard real numbers as elements of the set of the reals (and not of an extension of it, as in Robinson's theory).
The continuum hypothesis posits that the cardinality of the set of the real numbers is $\aleph_1$, i.e. the smallest infinite cardinal number after $\aleph_0$, the cardinality of the integers. Paul Cohen proved in 1963 that it is an axiom independent of the other axioms of set theory; that is, one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction.
## Notes
1. More precisely, given two complete totally ordered fields, there is a unique isomorphism between them. This implies that the identity is the unique field automorphism of the reals that is compatible with the ordering.
2. Scott Aaronson, "NP-complete Problems and Physical Reality", ACM SIGACT News, Vol. 36, No. 1 (March 2005), pp. 30–52.
3. Cohen, Joel S. (2002), Computer algebra and symbolic computation: elementary algorithms 1, A K Peters, Ltd., p. 32, ISBN 978-1-56881-158-1
4. T. K. Puttaswamy, "The Accomplishments of Ancient Indian Mathematicians", pp. 410–1, in Selin, Helaine; D'Ambrosio, Ubiratan, eds. (2000), Mathematics Across Cultures: The History of Non-western Mathematics, Springer, ISBN 1-4020-0260-2
5. Matvievskaya, Galina (1987), "The Theory of Quadratic Irrationals in Medieval Oriental Mathematics", 500: 253–277 [254], doi:10.1111/j.1749-6632.1987.tb37206.x
6. Jacques Sesiano, "Islamic mathematics", p. 148, in Selin, Helaine; D'Ambrosio, Ubiratan (2000), Mathematics Across Cultures: The History of Non-western Mathematics, Springer, ISBN 1-4020-0260-2
7. Bishop, Errett; Bridges, Douglas (1985), Constructive analysis, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 279, Berlin, New York: Springer-Verlag, ISBN 978-3-540-15066-4 , chapter 2.
## References
• Georg Cantor, 1874, "Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen", Journal für die Reine und Angewandte Mathematik, volume 77, pages 258–262.
• Robert Katz, 1964, Axiomatic Analysis, D. C. Heath and Company.
• Edmund Landau, 2001, Foundations of Analysis, American Mathematical Society, ISBN 0-8218-2693-X.
• Howie, John M., Real Analysis, Springer, 2005, ISBN 1-85233-314-6
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161660075187683, "perplexity_flag": "head"}
|
http://rjlipton.wordpress.com/2010/04/12/socks-shoes-and-the-axiom-of-choice/
|
# Socks, Shoes, and the Axiom of Choice
A finite version of the Axiom of Choice
Thomas Jech is a set theorist and logician who, among many other things, wrote a classic book on The Axiom of Choice (AC). I strongly recommend this book for its wonderfully lucid explanation of many aspects of AC—including Kurt Gödel’s and Paul Cohen’s results on the independence of AC from ZF set theory.
Today I want to discuss an old result about the famous Axiom of Choice. The result concerns a finite version of AC, and may be related to some of our problems in complexity theory. In any event it is a quite neat result, in my opinion.
At the recent Association of Symbolic Logic meeting, hosted by George Washington University in Washington, DC, a group of us went to dinner after our session on complexity theory. As usual we discussed many things during dinner: who is hiring, who is moving, P=NP, but somehow the conversation hit upon the AC. I told the group I vaguely recalled a result on the AC for finite sets—no one seemed to be aware of such a result. I was a bit embarrassed, since I could not recall the exact statement of the theorem, but did recall it concerned choice on finite sets. We then moved on to other topics.
This is my attempt to make up for forgetting these pretty theorems about the AC for finite sets.
Finite Choice
“The Axiom of Choice is necessary to select a set from an infinite number of socks, but not an infinite number of shoes.” — Bertrand Russell
Recall the AC is all about choice functions; since we are computer scientists, we will only consider choices among finite objects. Consider a family ${\cal F}$ of sets, each of cardinality exactly ${n}$. The statement ${C_{n}}$ says every such family has a choice function: more precisely, there is a function ${s}$ from sets ${A \in {\cal F}}$ to elements such that
$\displaystyle s(A) \in A$
for every set ${A}$. Jech gives several theorems, but the most interesting ones solve the following problem: when does ${C_{m}}$ imply ${C_{n}}$? Here are two key examples:
Theorem: The statement ${C_{2}}$ implies ${C_{4}}$.
Theorem: The statement ${C_{2}}$ does not imply ${C_{3}}$.
What I find interesting is the non-monotone nature: the relationship from ${C_{m}}$ to ${C_{n}}$ is not simple.
Let me prove the first, ${C_{2}}$ implies ${C_{4}}$. In the spirit of Russell this implication says
The ability to choose from “socks” implies the ability to choose from “four dining table chairs”.
I will follow Jech’s proof almost exactly. It is important to understand what we can do with sets:
1. Given a finite set ${A}$ we can tell the cardinality of the set.
2. Given sets ${A}$ and ${B}$ we can form the set difference ${A-B}$.
3. Given any set ${A = \{a,b\}}$ of two elements, there is a universal function ${g}$ that is a choice function:
$\displaystyle g(A) = x$
where ${x=a}$ or ${x=b}$.
The last property follows from the assumption ${C_{2}}$.
We now assume we have a family ${\cal F}$ of four element sets, and are looking for a way to select among any four elements. We will construct the choice function using the above properties—the existence of ${g}$ is critical.
Let ${A}$ be in the family ${\cal F}$. There are six two-element subsets of ${A}$. For every ${a \in A}$, let
$\displaystyle q(a) = \mbox{ Number of pairs } \{a,b\} \subset A \mbox{ such that } g(\{a,b\}) = a.$
Obviously ${q(a)}$ cannot be the same for all ${a \in A}$: each of the six pairs contributes exactly one choice, so the four values ${q(a)}$ sum to ${6}$, and a common value ${c}$ would force ${4c = 6}$, which is impossible; in particular, all ${q(a)=1}$ would give too few choices and all ${q(a)=2}$ too many. Let ${u}$ be the least positive value of ${q(a)}$ over all ${a \in A}$. Then, define ${B}$ as
$\displaystyle B = \{ a \mid q(a) = u \}.$
Thus, ${B}$ must have ${1,2}$ or ${3}$ elements.
1. Suppose ${B}$ has one element. Then use this element as the choice for the set ${A}$.
2. Suppose ${B}$ has two elements. Then use ${g}$ to choose among the two elements and this is the choice for the set ${A}$.
3. Suppose ${B}$ has three elements. Then use the unique ${a \in A-B}$ as the choice for the set ${A}$.
Pretty neat argument—no?
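Just to watch the bookkeeping happen, here is a small Python sketch of the construction (an editorial addition; in real set theory the pair-chooser ${g}$ comes from ${C_{2}}$ and need not be computable, so it is simulated below by an arbitrary deterministic rule):

```python
from itertools import combinations

def choose_from_four(A, g):
    """Select an element of the 4-element set A, using only a choice
    function g on 2-element sets (g(frozenset({a, b})) is a or b)."""
    A = list(A)
    assert len(A) == 4
    # q(a) = number of pairs {a, b} in A on which g picks a.
    q = {a: 0 for a in A}
    for pair in combinations(A, 2):
        q[g(frozenset(pair))] += 1
    u = min(v for v in q.values() if v > 0)       # least positive value of q
    B = [a for a in A if q[a] == u]               # the proof guarantees 1, 2 or 3 elements
    if len(B) == 1:
        return B[0]                               # case 1
    if len(B) == 2:
        return g(frozenset(B))                    # case 2: choose among the two
    return next(a for a in A if a not in B)       # case 3: the unique element outside B


# Simulate a pair-chooser; any rule with g({a, b}) in {a, b} works for the demo.
g = lambda pair: max(pair)
print(choose_from_four({"w", "x", "y", "z"}, g))
```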
The General Theorem
In order to define the general relationship we need what Jech calls condition ${S}$: Say ${(m,n)}$ satisfy condition ${S}$ provided there is no decomposition of ${n}$ into a sum of primes,
$\displaystyle n = p_{1} + \ldots + p_{s}$
such that ${p_{i} > m}$ for all ${i=1,\dots,s}$.
Theorem: If ${(m,n)}$ satisfy condition ${S}$ and if ${C_{k}}$ holds for every ${k \le m}$, then ${C_{n}}$ holds.
See his book for the details of the proof. The proof is an induction argument and uses some simple properties of binomials. Note, ${(2,4)}$ does satisfy property ${S}$, while ${(2,3)}$ does not, since ${3}$ is itself a prime greater than ${2}$.
He also shows the condition ${S}$ can be used to completely characterize when ${C_{m}}$ implies ${C_{n}}$. The technically harder direction is to show that a ${C_{m}}$ does not imply a ${C_{n}}$, since this requires set theory independence machinery. If you are interested please check his book for the details. It allows him to prove:
Theorem: If ${(m,n)}$ does not satisfy condition ${S}$, then there is a model of set theory in which ${C_{k}}$ holds for every ${k \le m}$ but ${C_{n}}$ fails.
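If you want to experiment with the characterization, here is a small Python sketch (an editorial addition) that tests condition ${S}$, reading the decomposition ${n = p_{1} + \ldots + p_{s}}$ as allowing repeated primes:

```python
def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

def satisfies_condition_S(m, n):
    """True iff n has NO decomposition as a sum of primes that are all > m
    (repeated primes allowed in the sum)."""
    primes = [p for p in range(m + 1, n + 1) if is_prime(p)]
    reachable = [True] + [False] * n     # reachable[k]: k is such a sum (0 = empty sum)
    for k in range(1, n + 1):
        reachable[k] = any(reachable[k - p] for p in primes if p <= k)
    return not reachable[n]

print(satisfies_condition_S(2, 4))   # True:  matches C_2 implies C_4
print(satisfies_condition_S(2, 3))   # False: 3 itself is a prime > 2
```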
Open Problems
Can we use this finite AC in any part of complexity theory? Does the “trick” of proving ${C_{2}}$ implies ${C_{4}}$ have any application in any part of theory?
from → History, People, Proofs
11 Comments
1. April 12, 2010 10:11 am
Neat argument indeed! Though it is just barely related, this seems an opportunity to mention the delightful paper Division by three by Doyle and Conway. “In this paper we show that it is possible to divide by three.”
2. Vijay
April 12, 2010 12:12 pm
This is tangential to the main content of your post but I hope it is interesting. Martin Escardo has been doing some intriguing work on searching infinite sets in finite time. Programs that implement such search are available on his page.
A hands-on demonstration is given here. There is a LICS paper that provides more details.
• Vijay
April 12, 2010 12:16 pm
Whoops! The demonstration link is here.
3. richde
April 12, 2010 12:58 pm
Great post. You are really raising two interesting questions. The first is whether AC has direct application (I suppose there might be a meta result in which choice is essential); the second is whether the methods of independence proofs can be applied. I think the answer is almost certainly yes. In fact our early independence results used some related methods, although I think Paul Young eventually showed they were not needed.
4. April 13, 2010 9:24 am
I heard about this type of problem once. I managed to figure out that C2 implies C4, but couldn’t prove any other nice relation. It’s nice to know about the last result you mention (and to have the reference, too!).
For what it’s worth, here’s my seemingly ungeneralizable argument for C2 implies C4:
Each four element set can be expressed as the disjoint union of two two element sets in three different ways. Using a choice function for two element sets you can select, for each of those three ways, one of the two element sets. Think of the four element set as the complete graph on four vertices; we are choosing one edge out of each of the three pairs of disjoint edges. Three such edges either form a triangle or a K1,3. Define the choice function on the four element set as follows: if the edges form a triangle, pick the fourth vertex; if they form a K1,3, pick the degree-3 vertex.
I’m biased of course, but I like my argument slightly better than Jech’s.
• rjlipton *
April 13, 2010 10:58 am
Very nice argument. I like it very much.
5. I. J. Kennedy
April 13, 2010 11:44 pm
I can’t thank you enough for this blog. Every post is a gem, although sometimes I have to work pretty hard to understand exactly what’s being said (a reflection on me, not on Dr. Lipton’s excellent expository skills).
Would someone mind pointing me to a definition of universal function as used above in describing the function g(a,b)?
• I. J. Kennedy
April 14, 2010 4:40 pm
The reason I ask is this. If universal means something like Turing computable, then wouldn’t the arguments to g(a,b) need to be encoded as marks on a tape, or more familiarly as strings of bits? And if that’s the case, wouldn’t you be able to make a choice function for families of sets of cardinality n just by looking at the bit-representations of the elements and simply returning the least one in lexicographic order?
• May 1, 2010 8:07 pm
They helped me out over at http://mathoverflow.net/questions/21879/what-is-a-universal-function.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 80, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930341899394989, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/38864/visualizing-orthogonal-polynomials/38866
|
## Visualizing Orthogonal Polynomials
Recently I was introduced to the concept of Orthogonal Polynomials through the poly() function in the R programming language. These were introduced to me in the concept of polynomial transformations in order to do a linear regression. Bear in mind that I'm an economist and, as should be obvious, am not all that smart (choice of profession has an odd signaling characteristic). I'm really trying to wrap my head around what Orthogonal Polynomials are and how, if possible, to visualize them. Is there any way to visualize orthogonal polynomials vs. simple polynomials?
-
## 4 Answers
Helge presented the continuous case in his answer; for the purposes of data fitting in statistics, one usually deals with discrete orthogonal polynomials. Associated with a set of abscissas `$x_i$`, `$i=1\dots n$` is the discrete inner product
`$$\langle f,g\rangle=\sum_{i=1}^n w(x_i)f(x_i)g(x_i)$$`
where `$w(x)$` is a weight function, a function that associates a "weight" or "importance" to each abscissa. A frequently occurring case is one where the `$x_i$` are equispaced, `$x_{i+1}-x_i=h$` where $h$ is a constant, and the weight function is `$w(x)=1$`; for this special case, special polynomials called Gram polynomials are used as the basis set for polynomial fitting. (I won't be dealing with the nonequispaced case in the rest of this answer, but I'll add a few words on it if asked).
Let's compare a plot of the regular monomials $x^k$ to a plot of the Gram polynomials:
On the left, you have the regular monomials. The "bad" thing about using them in data fitting is that for $k$ high enough, $x^k$ and $x^{k+1}$ are nigh-indistinguishable, and this spells trouble for data-fitting methods since the matrix associated with the linear system describing the fit is dangerously close to becoming singular.
On the right, you have the Gram polynomials. Each member of the family does not resemble its predecessor or successor, and thus the underlying matrix used for fitting is a lot less likely to be close to singularity.
This is the reason why discrete orthogonal polynomials are of interest in data fitting.
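A small numerical illustration of this point (an editorial addition, not part of the answer): on equispaced points the plain monomial design matrix is badly conditioned, while an orthonormalized basis spanning the same polynomials (obtained here via QR, which amounts to Gram–Schmidt on the monomial columns and so stands in for the Gram polynomials) is perfectly conditioned.

```python
import numpy as np

n, degree = 50, 9
x = np.linspace(-1.0, 1.0, n)                    # equispaced abscissas, weight w(x) = 1

V = np.vander(x, degree + 1, increasing=True)    # columns 1, x, x**2, ..., x**9
Q, _ = np.linalg.qr(V)                           # orthonormal columns, same column space

print(np.linalg.cond(V))   # large: high-degree monomial columns are nearly parallel
print(np.linalg.cond(Q))   # 1.0 up to rounding: orthogonal columns

# A least-squares fit against Q solves the same problem as a fit against V,
# but its normal equations are numerically harmless.
```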
-
A few small notes: 1. in the limit of infinitely many abscissas (the discrete becomes continuous), the Gram polynomial becomes the Legendre polynomial. 2. The orthogonal polynomials associated with nonequispaced abscissas are easily constructed through the so-called Stieltjes procedure. Different abscissas correspond to different basis sets. The reason the equispaced case is much nicer is that one can factor out the spacing $h$ from the appropriate equations. 3. Any good old book on the difference calculus should have some mention on the Gram polynomials. – J. M. Sep 16 2010 at 5:40
Thanks for taking the time to explain this. The issue of multicollinearity (singular matrix) is exactly the context where I was using the poly() function which creates orthogonal polynomials. Thank you for grabbing hold of my context and sharing an answer that really gets at my specific domain of interest. – jd long Sep 16 2010 at 18:13
jd:You're very much welcome. I have to say I wish someone had explained it like that to me when I was first learning about this stuff. – J. M. Sep 17 2010 at 1:03
Start here
-
That was actually a really helpful resource. Thank you. – jd long Sep 15 2010 at 20:34
I am not sure of your math-background, so I am trying to keep it simple, without oversimplifying some ideas. First off polynomials are nice for various reasons, e.g. a polynomial of degree $n$ has at most $n$ zeros. However, there are still many polynomials and it makes sense to choose VERY nice ones: orthogonal polynomials.
To choose orthogonal polynomials, one has a problem at hand, which comes with a way to measure functions $f: \Bbb R \to\Bbb R$ by an expression of the form $$\mathcal{E}(f) = \int_{-\infty}^{\infty} f(x)^2 w(x) dx,$$ where $w(x) > 0$ is a weight that satisfies $\int w(x) dx = 1$. One should think of $\mathcal{E}$ as an energy.
Now the orthogonal polynomial of degree $n$ can be defined as the polynomial $P_n(x) = x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$, where $a_{n-1}, \dots, a_0$ are real numbers, that minimizes $\mathcal{E}(P_n)$. It is this minimization property that is responsible for some of the power of orthogonal polynomials.
At this point let me also say that it is through the weight $w$ that your problem enters the definition of orthogonal polynomials. And that one also has orthonormal polynomials, which satisfy $\mathcal{E}(p_n) = 1$. These are given by $p_n = \frac{1}{\sqrt{\mathcal{E}(P_n)}} P_n$.
-
I think of the space of polynomials on R as a set of graphs arranged round the real line like the pages of a book round the axis. Polynomials which have almost the same graph are close to each other; then orthogonal polynomials are those which fall at right angles in the picture, and the linear combinations generate the space just as the basis vectors of $\mathbb{R}^n$ generate $\mathbb{R}^n$.
-
PS If you have had a decent linear algebra course you will be familiar with the concept of a vector space as a collection of abstract vectors satisfying certain axioms. You can see that the set of polynomials on $\mathbb{R}$ satisfies these axioms and so is a vector space. An orthogonal set of polynomials then generates the whole space in roughly the same way that an orthogonal basis for an ordinary vector space does. – Tom Smith Sep 15 2010 at 21:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519811868667603, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/171573-relative-rates.html
|
# Thread:
1. ## Relative Rates
I've added an attachment to this post to explain the problem.
Two carts, A and B, are connected by a rope 39 feet long that passes over a pulley P. The point Q is on the floor h = 12 ft directly beneath P and between the carts. Cart A is being pulled away at a speed of 2.5 ft/s. How fast is cart B moving toward Q at the instant when cart A is 5 ft from Q. (round to 2 decimal places)
From the problem, I know the following things:
PQ = h = 12
AQ = 5
AP = 13 (by Pythagoras)
AP + PB = 39 (the full length of the rope from A over the pulley P to B)
PB = 39 - 13 = 26
QB = $2\sqrt{133}$ (by Pythagoras)
(work to follow in next posting for organization)
Attached Thumbnails
2. My strategy is to work clockwise around the triangle to find $\frac{dQB}{dt}$ as follows:
First, find $\frac{dAP}{dt}$ since this must be equal to $\frac{dPB}{dt}$ since the length of these two segments is constant:
$\frac{dAP}{dt}(AP)^2 = \frac{dAQ}{dt}(AQ)^2$ (note that PQ is a fixed length so its derivative is zero)
$\frac{dAP}{dt} = \frac{AQ}{AP} \frac{dAQ}{dt} = \frac{5}{13} (2.5) = \frac{25}{26} ft/sec$
Again, if $\frac{dAP}{dt} = \frac {25}{26}$, then $\frac{dPB}{dt} = \frac{-25}{26}$ since the length of this segment is constant. THIS IS A KEY ASSUMPTION! IS IT CORRECT???? (sorry for the caps)
Then, I find my ultimate answer using Pythagoras:
$\frac{dQB}{dt}(2\sqrt{133})^2 = \frac{dPB}{dt} (PB)^2 - \frac{dPQ}{dt} (PQ)^2$
$(4\sqrt{133}) \frac{dQB}{dt} = \frac{-25}{26} (26^2) - 0$
$\frac{dQB}{dt} = \frac{-650}{4\sqrt{133}} = 0.30545$ feet / second
Unfortunately, this answer doesn't check out, so I'm doing something wrong, probably in my key assumption, but I don't see it. Can somebody help?
Thanks.
3. Hello, joatmon!
Two carts, A and B, are connected by a rope 39 feet long that passes over a pulley P.
The point Q is on the floor h = 12 ft directly beneath P and between the carts.
Cart A is being pulled away at a speed of 2.5 ft/s.
How fast is cart B moving toward Q at the instant when cart A is 5 ft from Q.
(Round to 2 decimal places)
Code:
``` P
*
/:\
/ : \
/ : \
/ : \
/ :12 \
/ : \
/ : \
A * - - - * - - - * B
x Q y```
Let $x = AQ,\;y = QB$
We are given: . $AP + PB \,=\,39\,\text{ and }\,\dfrac{dx}{dt} = 2.5$
In right triangle $PQA\!:\;AP \,=\,\sqrt{x^2+12^2}$
. . Then: . $PB \,=\,39 - \sqrt{x^2+144}$
In right triangle $PQB\!:\;y^2 \:=\:(39 - \sqrt{x^2+144})^2 - 12^2$
. . . . . . $y^2 \;=\;x^2 - 78(x^2+144)^{\frac{1}{2}} + 1521$ .[1]
Differentiate implicitly:
. . $2y\,\dfrac{dy}{dt} \;=\;\bigg[2x - 78\cdot\frac{1}{2}(x^2+144)^{-\frac{1}{2}}\cdot2x\bigg]\,\dfrac{dx}{dt}$
. . . . $\displaystyle \frac{dy}{dt} \;=\;\frac{1}{y}\bigg[x - \frac{39x}{\sqrt{x^2+144}}\bigg]\,\frac{dx}{dt}$ .[2]
Substitute $x = 5$ into [1]:
. . $y^2 \;=\;25 - 78(13) + 1521 \:=\:532 \quad\Rightarrow\quad y \:=\:2\sqrt{133}$
Substitute into [2]:
. . $\displaystyle \frac{dy}{dt} \;=\;\frac{1}{2\sqrt{133}}\left[5 - \frac{39(5)}{\sqrt{169}}\right](2.5) \;=\;\frac{1}{2\sqrt{133}}(-10)(2.5)$
$\displaystyle\text{Therefore: }\frac{dy}{dt} \;=\;-\frac{25}{2\sqrt{133}} \;\approx\;-1.08\text{ ft/sec}$
But check my work . . . please!
.
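A quick symbolic cross-check of the result above (an editorial addition, not part of the thread), treating $x$ as a function of $t$ with $dx/dt = 2.5$:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)

# From the two right triangles: PB = 39 - sqrt(x^2 + 144) and y = sqrt(PB^2 - 144)
y = sp.sqrt((39 - sp.sqrt(x**2 + 144))**2 - 144)

dydt = sp.diff(y, t)                                  # chain rule brings in dx/dt
dydt = dydt.subs(sp.Derivative(x, t), sp.Rational(5, 2)).subs(x, 5)

print(sp.simplify(dydt))   # exact value, equal to -25/(2*sqrt(133))
print(float(dydt))         # about -1.08 ft/s, matching the answer above
```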
4. Originally Posted by joatmon
My strategy is to work clockwise around the triangle to find $\frac{dQB}{dt}$ as follows:
First, find $\frac{dAP}{dt}$ since this must be equal to $\frac{dPB}{dt}$ since the length of these two segments is constant:
$\frac{dAP}{dt}(AP)^2 = \frac{dAQ}{dt}(AQ)^2$ (note that PQ is a fixed length so its derivative is zero)
$\frac{dAP}{dt} = \frac{AQ}{AP} \frac{dAQ}{dt} = \frac{5}{13} (2.5) = \frac{25}{26} ft/sec$
Again, if $\frac{dAP}{dt} = \frac {25}{26}$, then $\frac{dPB}{dt} = \frac{-25}{26}$ since the length of this segment is constant. THIS IS A KEY ASSUMPTION! IS IT CORRECT???? (sorry for the caps)
Yes, that is correct. AP+ PB= 39 so (AP)'+ (PB)'= 0, (BP)'= -(AP)'.
Then, I find my ultimate answer using Pythagoras:
$\frac{dQB}{dt}(2\sqrt{133})^2 = \frac{dPB}{dt} (PB)^2 - \frac{dPQ}{dt} (PQ)^2$
I don't understand why you have the squares there nor where you got $2\sqrt{133}$. What formula did you differentiate to get this?
$(4\sqrt{133}) \frac{dQB}{dt} = \frac{-25}{26} (26^2) - 0$
$\frac{dQB}{dt} = \frac{-650}{4\sqrt{133}} = 0.30545$ feet / second
Unfortunately, this answer doesn't check out, so I'm doing something wrong, probably in my key assumption, but I don't see it. Can somebody help?
Thanks.
5. Thanks to both of you. Soroban, your work helped me immensely. We were together in calculating the segment lengths. Where I went wrong was in applying the differential equation. I didn't equate x and y correctly, so my differential was all screwed up. Thanks for setting me straight.
Halls, the $2\sqrt{133}$ comes from the Pythagorean theorem. We determined that the PB hypotenuse was 26 and the height of 12 was given. Thus, the segment at the bottom that needed to be measured in order to differentiate was calculated like this:
$\sqrt{26^2 - 12^2} = 2\sqrt{133}$
Thanks again!
I originally thought the sum of the bases of the triangle (x and y in your diagram) would have to be constant, reasoning that in real life, if that sum were getting shorter, there would have to be some slack in the rope. But it seems that x+y is not constant here. I guess that's because our assumption is that this problem can be modeled with a triangle no matter what.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 36, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9620644450187683, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/44984-linear-algebra-help.html
|
# Thread:
1. ## Linear Algebra help
Let T be an invertible linear operator. Prove that a scalar lambda is an eigenvalue of T if and only if the inverse of lambda is an eigenvalue of T^-1.
2. Originally Posted by JCIR
Let T be an invertible linear operator. Prove that a scalar lambda is an eigenvalue of T if and only if the inverse of lambda is an eigenvalue of T^-1.
Could it be this simple?:
$T^{-1} v = \frac{1}{\lambda} v \Rightarrow T \, T^{-1} v = \frac{1}{\lambda} T \, v \Rightarrow \lambda v = T \, v$.
$T \, w = \lambda w \Rightarrow T^{-1} T \, w = T^{-1}\lambda \, w \Rightarrow \frac{1}{\lambda} \, w = T^{-1} w$.
Details and justifications left to you.
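A numeric illustration of the claim (not a proof; the random matrix below is my own choice):

```python
# Eigenvalues of T^{-1} are the reciprocals of the eigenvalues of T.
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))             # random matrix, almost surely invertible
assert abs(np.linalg.det(T)) > 1e-8

eig_T = np.sort_complex(np.linalg.eigvals(T))
eig_Tinv = np.sort_complex(np.linalg.eigvals(np.linalg.inv(T)))

print(np.allclose(np.sort_complex(1 / eig_T), eig_Tinv))  # -> True
```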
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8129671216011047, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/138271-cauchy-s-integral-theorem.html
|
# Thread:
1. ## cauchy's integral theorem
Im attempting to show that the following is not true by cauchy's integral theorem.
$\int_{C}\frac{\overline{z}}{z^{2} - 5z - 6}\,dz = 0$ where $C$ is the unit circle (travelled anti-clockwise).
Resolving $z^{2} - 5z - 6 = 0$ I got z = 2 and z = 3, but these singularities lie outside the unit circle...
I know I need to do something with the $\overline{z}$ but can't figure out what.
2. Originally Posted by Tekken
Im attempting to show that the following is not true by cauchy's integral theorem.
$\int_{C}\frac{\overline{z}}{z^{2} - 5z - 6}\,dz = 0$ where $C$ is the unit circle (travelled anti-clockwise).
Resolving $z^{2} - 5z - 6 = 0$ I got z = 2 and z = 3, but these singularities lie outside the unit circle...
I know I need to do something with the $\overline{z}$ but can't figure out what.
Either it is $z^2-5z+6$ or you made a mistake in computing the roots.
Anyway, the trick is to notice that, if $z\in C$, then $\overline{z}=\frac{1}{z}$, so you can apply Cauchy theorem and there is a pole inside the contour.
3. ya sorry about that, it should have been $z^{2} - 5z + 6$
Thanks for the help
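A numerical cross-check of the corrected integral (my own addition, not from the thread): with denominator $z^2-5z+6=(z-2)(z-3)$ and $\overline{z}=1/z$ on the unit circle, the only pole inside $C$ is $z=0$ with residue $\frac{1}{(-2)(-3)}=\frac{1}{6}$, so the integral should equal $2\pi i/6 = \pi i/3 \ne 0$.

```python
# Parametrize |z| = 1 and integrate conj(z)/(z^2 - 5z + 6) numerically.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
z = np.exp(1j * t)
f = np.conj(z) / (z**2 - 5*z + 6)
dz = 1j * z                                     # dz = i e^{it} dt

print(np.sum(f * dz) * (2.0 * np.pi / t.size))  # approximately 1.0472j
print(np.pi * 1j / 3)                           # the residue prediction pi*i/3
```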
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425597190856934, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/tagged/complex-integration
|
# Tagged Questions
The complex-integration tag has no wiki summary.
1answer
43 views
### Change of variables in a complex integral
I want to evaluate this integral using Residue Theorem $$\int_C^\ \frac{4z} {z^4 +6z^2 +1} dz =$$ $$C : |z| = 1$$ so I substitute letting $$\ W = z ^ {2 }$$ $$dw = 2z dz$$ and the ...
1answer
34 views
### Complex analysis contour integral
I am working on the integral $\displaystyle\int_0^{\infty}\frac{\log(x)}{x^2-1}$. I see it done here $\int_0^\infty\frac{\log x dx}{x^2-1}$ with a hint. but I am wondering if it is possible to ...
1answer
49 views
### Complex Integral of a meromorphic function
Please help with the following prelim problem. Thanks! Express the integral as a complex integral of a meromorphic function, where $\rho>0$ and $a$ is complex valued \int_{|z|=\rho} ...
1answer
27 views
### Cauchy integral formula and holomorphic functions
I am stuck in a problem about holomorphic functions and using of Cauchy integral formula. I really have no idea how to start, so i would be glad if somebody could help me with it. Let $C=C(0,1)$ a ...
1answer
46 views
### Did I calculate this (simple) integral correctly?
Given the contour $C$: we are asked to calculate $\displaystyle\frac{1}{2\pi i}\oint \frac{ze^{z^2-4z}}{z^2-1}dz$. I wrote it as such: \frac{1}{2}\left(\frac{1}{2\pi i}\oint ...
0answers
71 views
### Laplace transform of a product of functions
While trying to compute the Laplace transform of a certain product, part of the calculation leaves me with a Bromwich integral which has the form: ...
2answers
66 views
### How to integrate complex exponential??
Consider $$\int^{\frac{1}{2}}_{-\frac{1}{2} } e^{i2\pi f} \,df = \int^{\frac{1}{2} }_{-\frac{1}{2} } \cos(2 \pi f)\, df$$ Why do we only look at the real part? What about the imaginary part ...
6answers
92 views
### Integration issue
I am trying to solve $\int^{+\infty}_{-\infty}\frac{1}{x}dx$. I read that it is a contour integral along the semi-circle of large radius in the lower complex plane. First, is there any justification ...
0answers
30 views
### Solving an complex Integration with complex exp and other terms
I am trying to solve a partial differential equation and while solving I need to solve the following integral. If anyone could help me solve this integral that would be great. y(x,t) = \int_{c-i ...
1answer
76 views
### How to do complex integration. E.g. $\int_\frac{\pi}{2}^{\frac{\pi}{2} + i} \cos(2z) \; \mathrm{d}z$
For my homework assignment I've been given a number of complex integrals to solve. I've already asked for help on a specific example here, but I was somewhat dissatisfied with the answers. The answers ...
1answer
121 views
### How to solve using Cauchy Integral formula?
Let $C$ be the positively oriented boundary of the square whose sides lie along the lines $x=+/-2$ and $y=+/-2$. I am supposed to use the Cauchy Integral formula to evaluate \int_C ...
1answer
56 views
### Need help integrating $\tan x$ and $\tan^n x$ using reduction
I have tried to use integration by parts taking $u$ as $\tan x$ and $v$ as $1$: $$\int \tan x \,dx = \int \tan x \cdot 1\; dx = \tan x \cdot x - \int \sec^2 x \cdot x\; dx$$ then by taking $u$ as ...
1answer
57 views
### Integrating $z^n$ and $(\overline{z})^n$ along a line segment in the complex plane
Let $z_1$ and $z_2$ be distinct points of $\mathbb{C}$. Let $[z_1,z_2]$ denote the oriented line segment starting at $z_1$ and ending at $z_2$. Evaluate the integral of $z^n$ and $(\overline{z})^n$ ...
2answers
80 views
### Is an integral in the complex plane an integral over a single number?
A recent question from Juan Saloman reminded me of something that has nagged me for years, and I have never understood and never heard explained. (or maybe I just don't remember, but anyway ...) In ...
1answer
65 views
### Finding the integral $\int_0^\pi\dfrac{d\theta}{(2+\cos\theta)^2}$ by complex analysis
Trying to find the integral $\int_0^\pi\dfrac{d\theta}{(2+\cos\theta)^2}$ by complex analysis, I let $z = \exp(i\theta)$, $dz = i \exp(i\theta)d\theta$, so $d\theta=\dfrac{dz}{iz}$. I am trying ...
1answer
81 views
### Calculating Residues
I want to calculate this integral $$I:=\int dk^{0}\frac{e^{-ik^{0}(x^{0}-x'^{0})}}{\left(\left(k^{0}\right)^{2}-|\vec{k}|^{2}\right)}$$ for that I recall the Residue Theorem: I=2\pi i \left\{ ...
2answers
273 views
### Summation using residues
In reference to this question about showing that the following interesting series takes on the value $$\sum_{n=0}^\infty \frac{1}{(2n+1)\operatorname{sinh}((2n+1)\pi)}=\frac{\log(2)}{8}$$ I tried ...
1answer
111 views
### Problems in interpreting an integral that should be solved with residue method
Usually, when I solve an integral using residue method, I find real functions as integrands. I am not able to provide an interpretation for the following complex integral \int_{-\infty}^{\infty} ...
4answers
167 views
### contour integration of logarithm
I must compute the following integral $$\displaystyle\int_{0}^{+\infty}\frac{\log x}{1+x^3}dx$$ Can someone suggest me the right circuit in the complex plane over which to do the integration? I ...
1answer
89 views
### finding $\int_0^\infty \dfrac{dx}{1+x^4}$ through complex analysis
I am trying to find $\int_0^\infty \dfrac{dx}{1+x^4}$ by setting it equal to $\dfrac{1}{2}\oint_C \dfrac{dz}{1+z^4}$ and solving that. By a computer program I've calculated it to be $\approx 1.11072$; ...
2answers
71 views
### Computing with Cauchy Residue theorem
how do I calculate $$\operatorname{Res}\left(\frac{1}{z^2 \cdot \sin(z))}, 0\right)$$ What is the order of the pole? $3$?
2answers
100 views
### a question about Cauchy integral formula
I'm new in the complex analysis and I'm stuck with this integral : $I=\displaystyle \int_{|z|=4} \frac{\mathrm{d}z}{(z^2+9)(z+9)}$ the exercise is about Cauchy integral, I don't want the whole ...
1answer
49 views
### Is this OK: $\int_a^b \!\mathrm{d}x \,\,f(x) =^? \int_{\mathrm{i}\,a}^{{\mathrm{i}\,b}} \!\mathrm{d} (\mathrm{-i}y)\,\,f(\mathrm{-i}y).$ Any proof?
This is related to Wick rotation in QFT but it is not exactly it. I'll take a 2-dimensional spacetime to be brief but usually there are more. I've checked with a few functions and with finite ...
1answer
195 views
### How to evaluate this complex integral !?
We have the following complex integral : $$\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}e^{-\frac{\pi}{2}\cot\left(\frac{\pi}{s}\right)}\frac{x^{s}}{s}ds$$ Where $x\in\mathbb{R}:x>1$. i tried closing ...
1answer
159 views
### Complex integral over circle using Cauchy's formula
I have to integrate the complex function $$\frac{e^z-1}{z^5}$$ over the curve $\gamma(t)=1+re^{-5it}$ where $t \in [0,2\pi]$. The curve has winding number -5 with respect to a point inside the disc ...
4answers
198 views
### Integrate $\int_0^\infty \frac{\sqrt{x}}{x^{2}+1}\, \mbox{d} x$
I've been trying to integrate the following $$\int_{0}^{\infty} \frac{\sqrt{x}}{x^{2}+1} \mbox{d} x$$ on half an annulus in the upper half plane. I keep getting $\frac{\pi}{\sqrt{2}}\ i$, which ...
1answer
217 views
### Evaluating real integral using residue calculus: why different results?
I have to evaluate the real integral $$I = \int_0^{\infty} \frac{\log^2 x}{x^2+1}.$$ using residue calculus. Its value is $\pi^3/8$, as you can verify (for example) introducing the function ...
5answers
262 views
### Showing that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left (\sqrt{a^2+1}-1\right)$.
How can I show that $\displaystyle\int_{-a}^{a} \frac{\sqrt{a^2-x^2}}{1+x^2}dx = \pi\left(\sqrt{a^2+1}-1\right)$?
0answers
115 views
### Show Smoothness by Morera
I'm trying to show smoothness on $(0,\infty)(\Re)$ of the following function: $$f(t,x)=\sum_{n=-\infty}^\infty e^{-\large \frac{(x-2\pi n)^2}{2t}}\frac{1}{\sqrt{2\pi t}}$$ The function is ...
1answer
82 views
### Complex form of gauss divergence theorem
Just as complex form of green's theorem $\int {f(z)}dz=i\int\int \frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}dxdy$ where $z=x+iy$ , do we have complex form of gauss divergence ...
1answer
105 views
### Evaluating complex integrals involving log (finding bounds)
When evaluating real integrals involving log, I am having trouble with the step that involves finding a bound on circular segments. Let me explain what I mean: If, for example, we have ...
3answers
157 views
### Equality of absolute values of complex integrals
It was pretty hard finding a short and precise title, heres my problem: The equation $$\bigg|\int_\gamma f(z)\text{d}z\bigg|\le\int_\gamma\big|f(z)||\text{d}z|$$holds true if f is integratable (where ...
1answer
99 views
### Does anyone know this functional integral equation?
$$\sqrt{2}f(x) =\lim_{\delta \to 0^{+}}\left[x-i\delta-\int_{-1}^{1} \frac{|f(y)|^2}{y-i\delta-x}dy\right]$$ I'd like to know if there is a solution for $f\colon(-1,1) \to\mathbb{C}$. Of course if it ...
3answers
159 views
### Improper integration involving complex analytic arguments
I am trying to evaluate the following: $\displaystyle \int_{0}^{\infty} \frac{1}{1+x^a}dx$, where $a>1$ and $a \in \mathbb{R}$/ Any help will be much appreciated.
1answer
62 views
### Finding $\int_{\partial B_{2}(0)} \frac{1}{(z^n-1)^2}dz$.
I'd really like some help with this problem. I'm supposed to find $$\int_{\partial B_{2}(0)} \frac{1}{(z^n-1)^2}dz,$$ where $B_2(0) = \{ z \in \mathbb{C} \; | \; |z|<2 \}$ (ie. the ball of radius ...
4answers
110 views
### What is the value of $\int_{\gamma} \bar{z} dz$?
I could use some help in calculating $$\int_{\gamma} \bar{z} \; dz,$$ where $\gamma$ may or may not be a closed curve. Of course, if $\gamma$ is known then this process can be done quite directly (eg. ...
1answer
104 views
### Definite integral involving hyperbolic cosine
I have had no experience so far with hyperbolic functions so any help will be appreciated. This is on the chapter of complex integration but I would especially appreciate it if you could turn this ...
2answers
112 views
### Computation of a certain integral
I would like to compute the following integral. This is for a complex analysis course but I managed to around some other integrals using real analysis methodologies. Hopefully one might be able to do ...
2answers
137 views
### Complex Analysis Help
Let $γ\colon[-1,1]\to\mathbb{C}$ , $γ(t)= z_0 + itc$ , $z_0$ fixed and c>0 Prove for x>0 $$\lim_{x\to0} \frac{1}{2πi} \int_γ \left(\frac{1}{z-w} - \frac{1} {z-w'}\right)dz = -1$$ Where $w=z_0 + x$ ...
1answer
176 views
### Complex analysis integration with residues.
I have to show that $$\int_{0}^{2\pi}\frac{d\theta}{(a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta)^{2}} =\frac{ \pi(a^{2}+b^{2})}{a^{3}b^{3}}$$ where $a,b>0$. I have tried using double angle formulas ...
1answer
84 views
### Integrating squared absolute value of a complex sequence
I was reading through my book in complex analysis and i encountered this problem. Given, $F=\sum_{n=0}^{\infty} a_nX^n$ is a convergent power series with radius of convergence R. We are asked to show ...
1answer
216 views
### integral of complex logarithm
Consider the integral $$I=\int_0^{2\pi}\log\left|re^{it}-a\right|\,dt$$ where $a$ is a complex number and $0<r<|a|$. We have ...
1answer
148 views
### line integral versus complex integral
Let $a\in \mathbb C, r>0$ and $\gamma_r=\partial D(0,r)$. I want to evaluate the following line integral $$I=\int_{\gamma_r}\frac{1}{|z-a|^2}ds.$$ I'm looking for a complex function $g(z)$ such ...
0answers
52 views
### Exercise of Complex Integration
Let $f(z)$ be such that along the path $C_N$ of the following figure If $|f(z)|\leq \frac{M}{|z|^k}$ where $k>1$ and $M$ are constants independent of $N$. How to prove that ...
0answers
155 views
### Residue of $\mathrm{exp} (z+1/z)/((z-z_1)(z-z_2))$ at z=0
The ultimate aim is to solve the following integral: \begin{equation} \label{eq:Icos1} \begin{aligned} I = \int\limits_{0}^{2\pi} \frac{\mathrm{exp}\left(c \cos(\varphi)\right)\mathrm{d}\varphi}{a - ...
2answers
87 views
### $\int_{-1}^{1}\exp(-at^2)/t^2 dt$ using residue theorem
How to calculate $$\int_{-1}^{1}\frac{e^{-at^2}}{t^2}dt$$ where $a>0$, using residue theorem?
1answer
134 views
### Riemann Lebesgue Lemma for polynomial?
I was asked to prove that $$\lim_{n\to\infty} \int_{0}^{1} \exp(i\cdot n\cdot p(x))\;dx =0$$ for nonconstant real polynomial $p(x)$. if $p(x)$ is of degree $1$... It reduces to Riemann-Lebesgue ...
2answers
101 views
### Show the value of a complex integral is independent of R for R > 1
Question: Show that for R > 1 $$\int_{|z|=1} \frac{z^{2011}}{2z^{2012}-1} dz = \int_{|z|=R} \frac{z^{2011}}{2z^{2012}-1} dz$$ Thoughts thus far: (i) I know that we cannot use Cauchy's integral ...
1answer
74 views
### Is $(-1+i)\log(2e^{it}+i)$ same as $\frac{1}{2}\left((2+2i)\tan^{-1}(2e^{it})-(1-i)\log(1+4e^{2it})\right)$?
Is $\displaystyle(-1+i)\log(2e^{it}+i)$ the same as $\displaystyle\frac{1}{2}\left((2+2i)\;\tan^{-1}(2e^{it})-(1-i)\log(1+4e^{2it})\right)$? WolframAlpha shows that they are same, but this page on W|A ...
2answers
121 views
### “Convergent” Integral in Davenport's Multiplicative Number Theory
I am currently learning analytic number theory using Davenport's Multiplicative Number Theory book, and at some point I believe something silly is happening. I have great faith that I am wrong AND ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 96, "mathjax_display_tex": 26, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202433228492737, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-geometry/144545-uniform-continuity.html
|
# Thread:
1. ## Uniform Continuity
Let f(x) be continuous on the real number line. Suppose that lim(x-> infinity) f(x) = 0 and lim(x-> negative infinity) f(x) = 0. Show that f(x) is uniformly continuous or else give a counterexample.
2. Originally Posted by seams192
Let f(x) be continuous on the whole real number line. Suppose that lim(x-> infinity) f(x) = 0 and lim(x-> negative infinity) f(x) = 0. Either show that f(x) is uniformly continuous or else give a counterexample.
----------------------------
A perfect example of this would be a function like 1/(x^2+1) which is uniformly continuous. I would have difficulty proving it in the general case, but am also blind to any counterexamples that could exist.
Any thoughts?
More generally, if $\lim_{x\to\infty}f(x)=L_1,\lim_{x\to-\infty}f(x)=L_2<\infty$ and $f$ is continuous then it's uniformly continuous.
Start like this, for some $\varepsilon>0$ choose $A>0$ such that $|f(x)-L_1|<\frac{\varepsilon}{2},\text{ }x\in[A,\infty)$ AND $|f(x)-L_2|<\frac{\varepsilon}{2},\text{ }x\in(-\infty,-A]$.
You clearly have then that $|f(x_1)-f(x_2)|<\varepsilon,x_1,x_2\in\mathbb{R}-[-A,A]$ and you surely have that $f$ is unif. cont. on $[-A,A]$. See if you can piece the rest together.
3. Why does it follow that it is uniformly continuous on the interval [-M,M]? I can see the intervals (-inf,-M] and [M,inf), since the delta neighborhood of x values need not depend on x, only epsilon, as the slope varies less and less towards the ends of the domain.
4. Originally Posted by seams192
Why does it follow that it is uniformly continuous on the interval [-M,M]?
Any function that is continuous on a compact set of real numbers is uniformly continuous. $[-M,M]$ is compact.
5. Originally Posted by seams192
Why does it follow that it is uniformly continuous on the interval [-M,M]? I can see the intervals (-inf,-M] and [M,inf), since the delta neighborhood of x values need not depend on x, only epsilon, as the slope varies less and less towards the ends of the domain.
The specific case we need of the theorem says that
If $f:E\to\mathbb{R}$ is continuous with $E\subseteq\mathbb{R}$ compact then $f$ is in fact uniformly continuous
Are you aware of this theorem?
6. Ohh right. The Uniform Continuity theorem. Thanks!
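A small numerical illustration (my own addition, not a proof) using the example $f(x)=1/(x^2+1)$ mentioned above: the worst-case oscillation over pairs of points at distance $\delta$ shrinks with $\delta$, and the estimate does not depend on where on the line the pair sits, which is what uniform continuity asserts.

```python
# Estimate sup |f(x) - f(y)| over pairs at distance delta on a wide grid.
import numpy as np

f = lambda x: 1.0 / (x**2 + 1.0)
xs = np.linspace(-50.0, 50.0, 400001)
dx = xs[1] - xs[0]

for delta in (1.0, 0.1, 0.01):
    shift = int(round(delta / dx))
    worst = np.max(np.abs(f(xs[shift:]) - f(xs[:-shift])))
    print(delta, worst)           # the worst-case gap shrinks with delta
```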
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942616879940033, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/30406/black-hole-white-hole-collision
|
# Black hole - white hole (collision)
This is it: a non-spinning black hole and white hole of equal mass experience a direct collision. What will happen? What will be the result of such a collision?
-
Though I would also like to point out that, as far as I am aware, there has never been any evidence for the existence of a white hole; they are purely hypothetical, but not impossible. If the universe is an infinite sheet expanding in parts and contracting in others, then they are a real possibility. I like your idea Michael, it is in keeping with M theory. If the universe is finite and expanding, or infinitely expanding as it were, that means we are seeing the effects of expansion as a cosmic echo if you will; then I would insist the universe itself came from a black hole, though I can only think of – dr carlin Nov 28 '12 at 10:26
## 8 Answers
This question has a somewhat faulty premise that a white hole and a black hole (or anything else for that matter) could collide. In fact a white hole is defined as an area of spacetime where nothing could enter from the outside, or, mathematically speaking, it is a maximal extension of a black hole (part of an eternal black hole that wasn't formed via gravitational collapse).
This idea can be best viewed in terms of Kruskal coordinate diagrams, a special sort of spacetime diagram where a coordinate transformation is performed. These diagrams have the fun property that null geodesics (basically rays of light) follow paths at 45 degrees from the positive y axis. The "white hole" of the graph below is area IV, and as you can see, not even light from outside of this region could theoretically enter it. In fact, it is defined as the region which no null geodesic could theoretically enter, only as the region which particles can come out of. Therefore, by the very definition of a white hole, a white hole could not collide with a black hole (seen in region II of this diagram).
-
1
Dear Benjamin, both white hole and black hole spacetimes are asymptotically flat. It is well known that one cannot enter the white hole's "event horizon" (which is the area one cannot enter, loosely speaking). However, the black hole's event horizon is the area one cannot escape from. So, here is my question again: as the two spacetimes are asymptotically flat, one can freely put a black hole and a white hole, sufficiently separated, on a collision course. What will happen further? – Alexey Bobrick Jun 20 '12 at 12:28
I would think that in any conceivable reference frame it would take an infinite amount of time for the two to collide. The event horizons of each object would just asymptotically approach each other. It would take an infinite amount of energy to make the event horizons cross. – Benjamin Horowitz Jun 20 '12 at 15:50
1
I see the issue as tautological. A white hole is defined as a region where no null or time-like geodesics can enter. Therefore, by definition, no black holes will be colliding with white holes. It should be noted that any geodesic that is approaching the white hole, as it goes to future infinity, will find itself at a black hole horizon (in essence, the white hole becomes a black hole). So my first comment that it would take an infinite amount of time is somewhat deceiving, in that the white hole will eventually become a black hole, and the collision will work like a normal black hole collision. – Benjamin Horowitz Jun 20 '12 at 19:45
1
Alexey wrote: "both white hole and black hole spacetimes are asymptotically flat". I think they are one and the same spacetime as the Kruskal coordinate diagram shows. The "eternal black hole" solution necessarily includes the white hole region doesn't it? It seems to me that what you are asking for is a solution that somehow includes "just" a black hole spatially separated from "just" a white hole in some region of the spacetime. Before I would speculate on what this spacetime would look like, you'd have to convince me that such a solution exists. – Alfred Centauri Jun 21 '12 at 13:55
2
@Alexey, that's not true. (Non-eternal) black holes may form but there is no past singularity (white-hole) in the solution. The region of the Kruskal diagram that contains the past singularity isn't there; the geometry "to the left" of the world line of the collapsing surface of the star is not Schwarzschild; only outside the collapsing star may be. – Alfred Centauri Jul 8 '12 at 20:15
A white hole is an impossible object in the universe.
Mathematically it is a black hole under inverted time. This can be interpreted as a black hole in a universe where the second law of thermodynamics is inverted, that is, the entropy always diminishes.
Since the second law of thermodynamics has a probabilistic nature, one can see a white hole as a highly improbable state of a black hole: the state where it consumes high-entropy Hawking radiation and exhales low-entropy objects instead of doing the opposite.
In theories which consider collisions between objects that have opposite arrows of time, it is usually derived that upon such a collision the object with the reversed time arrow will quickly switch its time direction, for which only a microscopic perturbation is enough. This means that in a hypothetical universe where there is a black and a white hole, a short time after their first interaction the white hole will become another black hole, so that the system will end up with two black holes.
It should be noted that in a universe which has reached thermodynamic equilibrium, there is no difference between a black and a white hole; they both behave the same: they consume and radiate high-entropy radiation.
-
Thank you very much for the answer! Could you please explain a bit more, what does it mean for an object to change the axis of time, and how does it happen in terms of possible background fields and metric of the system? – Alexey Bobrick Jan 1 at 21:04
A black hole pretty much is the same as a white hole.
Hawking's result proves they're essentially the same object, so the result will be a black hole with a radius larger than the sum of the radii of the black hole and the "white hole".
I'm just an undergraduate so possibly one of the other members can give a more detailed answer.
edit: I implied but did not directly say that, since a white hole is the same as a black hole, your question becomes "what is the result of the collision between two black holes?", so the answer is what I said above. I put "white hole" in quotations because it's really just another (possibly smaller or larger) black hole.
-
An obvious question then: why wouldn't the result be one big white hole? – Alexey Bobrick Jun 20 '12 at 12:29
There is no difference, so it's one big white/black hole. – Ron Maimon Jun 20 '12 at 19:03
1
Dear Ron, here is a natural question then: imagine an observer who has just been launched from the center of a white hole into outer space, and another observer who is on a collision course with the center of a black hole. Will they be able to distinguish somehow whether they are in a black/white hole? – Alexey Bobrick Jun 20 '12 at 20:39
The interior of a white hole is different from the interior of a black hole, only the exterior is the same. The two observers can distinguish the difference, but Hawking principle guarantees that if there is such a thing as an observer exiting a white hole, there is such a thing as the same kind of observer exiting a black hole. While counterintuitive, this is nowadays accepted holographic physics. – Ron Maimon Jul 4 '12 at 6:59
@Ron: Well, at least you agree that the interior is different. Then it would make a difference during a WH-BH collision, wouldn't it? Then back to observers. One day you wish to enter a (static, non-spinning, uncharged) BH. You fly in, and get into the singularity. Another day you wish to enter a WH. You fly close, try to enter, and what do you see? – Alexey Bobrick Jul 6 '12 at 17:25
Probably white holes, as the opposite of black holes, do not exist. The reason is an unsurpassable singularity from the black hole to the white hole region. The reason why people believed in white holes was that the complete extended solution to the black hole spacetime was unknown. There was a time when physicists believed that AGN were actually white holes instead of supermassive BHs. I said probably, because these are all theoretical models that, although they have supporting evidence, are still speculative. So, in GR there is nothing today that we can call a white hole. Wormholes are not white holes, and are hypothetical objects that violate the strong energy principle in GR. I have learned not to say that something doesn't exist because the accepted theories say so. Maybe white holes do exist in quantum gravity, because this theory may eliminate singularities. At present we don't know. If they do exist, in some theory not yet realized, their collision and/or creation with respect to BHs will be a very interesting problem.
-
White holes and black holes are known to be the same since Hawking argued this in 1976. – Ron Maimon Jul 4 '12 at 6:59
– mmc Jul 9 '12 at 5:02
No, Hawking did not prove that a white hole and a black hole are one and the same, nor could he or anybody ever do so. We can only prove the existence of what we can make actual observations of, but I will give a challenge to whoever is willing to accept it: in a gravitational field of space-time itself, nothing escapes a black hole. If anybody believes the universe is finite, then explain to me why it is expanding rather than contracting.
As I can only think of very complicated scenarios, none of which make much sense. But I don't believe the universe to be finite; the universe is full of weird contradictions even at the most fundamental level, which is why many scientists are beginning to think the universe is holographic. If that is the case, we can't really say what created the projector, because our brains cannot outgrow the confines of the computer's program. But I personally don't believe this scenario, even though it is much more logical than a fairy waving a wand and saying let there be light.
-
In an entirely time-symmetric situation, and assuming the universe does not contain any other fields besides the two B&W holes, the only way to determine the preferred time direction is by looking at which of the two holes has the bigger entropy.
Each hole has an entropy proportional to the square of its mass. If $M_{bh} \gg M_{wh}$, then we can consider the white hole a "temporary fluctuation" in a background of forward-oriented thermodynamic increase (using the convention that black holes increase their entropy "forward" in time); in the case where $M_{bh} \ll M_{wh}$ it is the other way around: the preferred thermodynamic arrow is backward and the black hole is a temporary fluctuation going forward.
In the case where $M_{bh} \approx M_{wh}$ there is no preferred thermodynamic time direction. If thermodynamic time is considered a flow, this would be a fixed point.
-
Could you please elaborate on your last paragraph? The question describes the case when the two objects have arbitrary, for example comparable, masses. The time direction may be assumed known. – Alexey Bobrick Jan 1 at 21:00
Another amateur answer: the energy of a White Hole is convex and the energy of a Black Hole is concave, so they cannot approach each other. Two black holes can approach one another; two white holes can approach one another. But white holes and black holes are kept apart magically, in much the same way that matter and antimatter are kept apart.
The White Hole and the Black Hole are the same, just at different stages of their life -- the White Hole is the Expansive Stage; the Black Hole is the Recessive or Withdrawn Stage. White Holes become Black Holes become White Holes become Black Holes over and over again during their life cycle.
A Black Hole in one dimension is effectively a White Hole in an Opposite Dimension. A Black Hole in this dimension is effectively building an anti-Universe in an Opposite Dimension.
-
Could you please clarify what you mean by concave/convex energy? Are you referring to a potential? – user9886 Jul 3 '12 at 15:33
matter and antimatter are not kept apart, magically or otherwise – lurscher Jul 3 '12 at 18:48
The word 'magically' was used poetically. My theory is that gravity and anti-gravity work to keep twin universes apart, one composed of matter (light) and the other composed of anti-matter (anti-light -- which is dark from the perspective of the material universe). The two universes are kept apart by the primary forces of gravity and anti-gravity, and are separated by the plasma which acts as a boundary between the two. In a real and metaphorical sense, the anti-universe is the universe turned inside out and upside down -- the AU is 'hidden' in the inner dimensions of the Universe. – Michael J. Clark Jul 4 '12 at 8:06
Concave energy is the flow away from (inside) the event horizon caused by increasing gravity . Big Crunch. Convex energy is the flow of the event horizon (the boundary) of the universe itself as it expands through the force of anti-gravity. Big Bang – Michael J. Clark Jul 4 '12 at 8:09
Technically you would say that a black hole is a causal region which is not in the past of any point in the external universe. Similarly a white hole must be causally before any point in the external universe. That is a perfect description of the Big Bang. Mathematically a singularity exists inside a black hole, where matter becomes infinitely compressed. I don't believe there is more than one singularity in the universe, however. All black holes are from our viewpoint frozen portions of time and space, and the singularity within all of them is only going to "occur", in a sense, at the end of the universe. Therefore a black hole could never meet a white hole, except in one sense: as the singularities at the beginning and end of the universe are both "beyond time", they may well be the same singularity, which would certainly resolve the question of where everything came from! So black holes and the Big Bang are two sides of one coin. In another sense you can't collide them; they are eternally linked to each other.
-
If you wish to make an analogous statement of "the BH region is causally disconnected from the outer world" for the case of a WH, you would rather have to say "the WH is a region which the outer world is causally disconnected from". This is not the same as being causally before everything. Also, BHs and WHs are not frozen in spacetime; in particular they have world lines. That's why the analogy to the Big Bang is not really applicable. Further, there may be more than one singularity at the same time, as there are, for example, spacetimes with several black holes. – Alexey Bobrick Nov 18 '12 at 18:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474815130233765, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/tagged/oeis
|
Tagged Questions
0answers
415 views
Is OEIS A007018 really a subsequence of squarefree numbers?
A comment in A007018 a(n) = a(n-1)^2 + a(n-1), a(0)=1 claims Subsequence of squarefree numbers (A005117). - Reinhard Zumkeller, Nov 15 2004 Is it really so? As far as I know …
1answer
820 views
Number of distinct values taken by $\alpha$ ^ $\alpha$ ^ $\dots$ ^ $\alpha$ with parentheses inserted in all possible ways, $\alpha\in\mathbf{Ord}$
Let $\alpha\in\mathbf{Ord}$ and $n\in\mathbb{N}^+$. Let $F_\alpha(n)$ be the number of distinct values taken by ordinal exponentiation \$\underbrace{\alpha \hat{\phantom{\hat{}}} \ …
1answer
173 views
Understanding a sequence generation formula of the A064532
I'm trying to understand the formula presented for the sequence A064532 from the OEIS, looks like a recurrence relation with complex numbers: $a(10i+j) = a(i) + a(j), etc.$ Sorry …
1answer
854 views
Number of distinct values taken by x^x^…^x with parentheses inserted in all possible ways
For what positive x's the number of distinct values taken by x^x^...^x with parentheses inserted in all possible ways is not represented by the sequence A000081? Is it exactly the …
2answers
753 views
Least number of non-zero coefficients to describe a degree n polynomial
I'd be grateful for a good reference on this, it feels like a classic subject yet I couldn't find much about it. Polynomials in one variable of the form \$x^n+a_{n-1}x^{n-1}+\dot …
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8432306051254272, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/20567?sort=newest
|
What is the oriented Fano plane?
One way to remember the multiplication table of the octonions is to use the following diagram (which I got from John Baez's online paper): if $(e_i,e_j,e_k)$ is one of the lines listed according to the cyclic order indicated in the diagram, then $e_ie_j=e_k$ and $e_je_i=-e_k$ in $\mathbb O$.
If we forget the cyclic orientation of the lines, this is of course a well-known depiction of the Fano plane $P^2(\mathbb F_2)$, which is an example of many different structures: it is a Steiner triple system, a quasigroup, &c.
What kind of object is this oriented Fano plane?
NB1: Naive googling informs of the concept of Mendelsohn triple systems and of transitive triple systems, both of which are enrichments of the notion of Steiner triple systems with orderings on the blocks. The oriented Fano plane above is not an example of these concepts, though.
NB2: One way to reconstruct the orientation is as follows: it is (up to projective linear automorphisms) the unique way to cyclically orient the lines in the plane in such a way that for each point $x$, the set of three points which follow $x$ in the three lines that go through it is itself a line. In fact, it is the only Steiner triple system which can be oriented with this property.
-
Let me add one more object which it isn't. Viewing the Fano plane as a matroid, this oriented Fano plane is not an oriented matroid either. See [here](springerlink.com/content/r3w1m2764n25tg84). – Tony Huynh Apr 7 2010 at 2:25
2
I hope this does not evolve into a big-list of things it isn't! :) – Mariano Suárez-Alvarez Apr 7 2010 at 2:27
2 Answers
Here is one answer: It is an oriented line over `$\mathbb{F}_7$`.
An affine line over `$\mathbb{F}_7$` is a set of 7 points with a simply transitive action of $\mathbb{Z}/7\mathbb{Z}$, but no distinguished origin. Here, we don't have a distinguished origin and we also don't remember the precise translation action, but we have a distinguished notion of addition by a square (think of what this would mean for real numbers). In other words, it is a set with seven elements, equipped with an unordered triple of simply transitive actions of $\mathbb{Z}/7\mathbb{Z}$, such that translation by 1 under one of the actions is equivalent to translation by the square classes $2$ and $4$ under the other two actions.
If you take any pair of points $(x,y)$ in the above picture and subtract their indices, the orientation of the arrow between them is $x \to y$ if and only if $y-x$ is a square mod 7. Furthermore, a triple of points $(x,y,z)$ with directed arrows $x \to y \to z$ is collinear if and only if $\frac{z-y}{y-x} = 2$. Even though the numerator and denominator are only well-defined up to multiplication by squares, the quotient is a well-defined element of `$\mathbb{F}_7^\times$`, since each of the three translation actions yield the same answer. These two data let us reconstruct the diagram from the oriented line structure.
There is a group-theoretic interpretation of this object. The oriented hypergraph you've given has automorphism group of order 21, generated by the permutations $(1234567)$ (one of the translation actions) and $(235)(476)$ (changes translation action by conjugating). This can be identified with the quotient `$B^+(\mathbb{F}_7)/\mathbb{F}_7^\times$`, where `$B^+(\mathbb{F}_7)$` is the group of upper triangular matrices with entries in $\mathbb{F}_7$ and invertible square determinant, and `$\mathbb{F}_7^\times$` is the subgroup of scalar multiples of the identity. This group is the stabilizer of infinity under the transitive action of the simple group of order 168 on the projective line `$\mathbb{P}^1(\mathbb{F}_7)$`. In this sense, we can view the simple group as the automorphism group of an oriented projective line, since it is the subgroup of `$PGL_2(\mathbb{F}_7)$` whose matrices have square determinant.
Unfortunately, I do not know a natural notion of orientation on an `$\mathbb{F}_2$`-structure. I tried something involving torsors over `$\mathbb{F}_8^\times$` and the Frobenius, but it became a mess.
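A quick computational check of the symmetry claim above (a sketch; sympy and 0-based point labels are my own choices): the permutations $(1234567)$ and $(235)(476)$ generate a group of order 21.

```python
# Verify the order-21 automorphism group (cycles shifted down by one for 0-based points).
from sympy.combinatorics import Permutation, PermutationGroup

sigma = Permutation([[0, 1, 2, 3, 4, 5, 6]], size=7)   # (1234567)
tau = Permutation([[1, 2, 4], [3, 6, 5]], size=7)      # (235)(476)

G = PermutationGroup([sigma, tau])
print(G.order())                     # -> 21
print(sigma.order(), tau.order())    # -> 7 3
```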
-
1
Indeed. This construction proceeds from what's called a $\lambda$-*difference set*, a subset $D$ of $\mathbb Z_n$ such that each non-zero element of $\mathbb Z_n$ can be written in exactly $\lambda$ ways as a difference of elements of $D$; in the case of the Fano plane, $n=7$, $D=\{1,2,4\}$ and $\lambda=1$. When $\lambda$ is one, the resulting hypergraph is a projective plane. – Mariano Suárez-Alvarez Apr 8 2010 at 20:38
I am confused. The number of $F_7$-affine structures is 6!=720. The number of orientations of the lines is $2^7=128$. Clearly, some affine structures are no good, for instance those where 3 collinear points follow one another. How do you account for this? – Bugs Bunny Apr 9 2010 at 10:37
@Bugs: Admissible sets of line orientations need to admit 3 independent Hamiltonian cycles, taken to each other by a transitive symmetry (or, see NB2 in the question). Admissible triples of affine structures have to obey the collinearity condition I gave above. There are 21 of each. – S. Carnahan♦ Apr 9 2010 at 18:58
No clue how to answer your question but one way to choose an orientation is to choose a basis of the vector space ${F_2}^3$. By basis I mean a totally ordered basis. For instance, the orientation on the picture corresponds to the basis $e_1$, $e_2$, $e_3$ where I take the liberty to identify a line with its non-zero element. If you think of the vector space with a basis as "the standard vector space" then you can think of the oriented Fano plane as the standard projective space.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358111023902893, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/tagged/normal-distribution?sort=active&pagesize=15
|
# Tagged Questions
The normal-distribution tag has no wiki summary.
5answers
339 views
### In Black-Scholes, why is $\log{\frac{S_{t+\triangle t}}{S_t}} \sim \phi{((\mu - \frac{1}{2}\sigma^2)\triangle t, \sigma^2 \triangle t)}$?
Namely, I dont understand why the mean is $(\mu - \frac{1}{2}\sigma^2)\triangle t$ and not just $\mu \triangle t$. I am aware that it is supposed to represent a lognormal distribution, but I guess I'm ...
2answers
87 views
### Transformation to reduce standard deviation without changing median
Consider some negative skew and high kurtosis return time-series $X_t$. I do not know the functional form of the pdf of $X_t$ and have about 150,000 data points. Suppose that I was to create an ...
0answers
69 views
### How to trade risk-adjusted returns?
Why does dividing daily returns by daily range eliminates fat tails and results in an (almost) gaussian distribution? And how could that distribution be exploited to enter trades?
1answer
126 views
### normalized accumulation distribution
I am looking for a way to take an accumulation/distribution indicator and normalize it so I can compare a bunch of stocks with stock prices that have no relationship with each other. EDIT: This ...
1answer
341 views
### a simpler test for normality given skewness, kurtosis and autocorrelation and size of time series
I typically do a JB (Jarque Bera) test and DW (Durbin Watson) tests for check for normality given skewness, kurtosis and autocorrelation of the data. However this requires a CHI distribution table ...
1answer
397 views
### Monte carlo portfolio risk simulation
My objective is to show the distribution of a portfolio's expected utilities via random sampling. The utility function has two random components. The first component is an expected return vector ...
2answers
511 views
### Whats the equation to calculate the area under the curve of a normal distribution, given an upper and lower standard deviation?
Lets say I want to find out the area under the graph of normal distribution curve, between X1=standard deviation of -0.5 and X2 = standard deviation of 0.5. Is there a formula for this? Case study: ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8922460675239563, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/64752/is-there-a-complex-analog-of-this-sharpened-cauchy-inequality
|
## Is there a complex analog of this sharpened Cauchy Inequality?
Let $x$ and $y$ be two points on the unit sphere $S^{n-1}$ in Euclidean space ${\mathbb{R}}^n$. Suppose that the angle $\theta$ between the points $x$ and $y$ is acute, so that the dot product $x\cdot y=\cos\theta>0$, where we treat $x$ and $y$ as unit vectors. The angle $\theta$ can be interpreted as the geodesic distance between $x$ and $y$ in the round metric on the sphere $S^{n-1}$.
The Cauchy inequality applied here states that $x\cdot y\le 1$. Equality holds if and only if $x=y$. (The case where $x$ and $y$ are antipodal is ruled out by the condition that the angle $\theta$ is acute.)
I'm curious as to how we can sharpen this inequality to account for the situation when the geodesic distance $\theta$ is large. Noting that $\cos \theta \ge 1-\theta^2/2$ for $\theta$ acute, this basic estimate geometrically becomes $x\cdot y \ge 1-\theta^2/2$.
I'm wondering if my lower bound is the best possible. In other words, is it true that $1-x\cdot y=O(\theta^2)$ if the angle $\theta$ is acute?
Is there an analogue of this sharpened inequality when we consider points on the unit sphere $S^{2n-1}$ in complex Euclidean space ${\mathbb{C}}^n$ and apply the complex Cauchy inequality instead?
-
2
Since $x\cdot y = \cos \theta$, isn't the answer to your first question trivially yes by looking at the Taylor expansion of $\cos$? – Yemon Choi May 12 2011 at 6:44
(there are also lower-tech ways of seeing that $1 - \cos\theta = 2\sin^2(\theta/2)$ is $O(\theta^2)$) – Yemon Choi May 12 2011 at 6:44
3
I think the answer to the first question is yes; you can by symmetry always reduce to the $n=2$ case by rotating. – J.C. Ottem May 12 2011 at 6:45
1
Following JC's comment, it seems that the complex case can be reduced to the ${\mathbb{C}}^2$ by unitary transformations too. – Colin Tan May 12 2011 at 7:00
So the question is settled then? In $\mathbb{C}^2$ this is a simple computation.. – J.C. Ottem May 12 2011 at 8:55
## 1 Answer
We checked the book "Complex hyperbolic geometry" by Goldman. In there, it is given that for two points $z,w$ in complex projective space ${\mathbb{CP}}^n$, we have $\cos(d(z,w))=\frac{|\langle z,w\rangle|}{|z||w|}$. Here $d(z,w)$ denotes the geodesic distance on ${\mathbb{CP}}^n$ after a normalization.
Since $\cos\theta\ge 1-\theta^2/2$, we thus have $\frac{|\langle z,w\rangle|}{|z||w|}\ge 1-d(z,w)^2/2$.
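A quick numerical check of this bound (my own script, not part of the original answer): for random unit vectors $z,w$ in $\mathbb{C}^4$, writing $c=|\langle z,w\rangle|$ and $\theta=\arccos c$, the inequality $1-c\le\theta^2/2$ is never violated.

```python
# Check 1 - |<z,w>| <= theta^2 / 2 for random unit vectors in C^4.
import numpy as np

rng = np.random.default_rng(1)
worst_gap = -np.inf
for _ in range(10000):
    z = rng.normal(size=4) + 1j * rng.normal(size=4)
    w = rng.normal(size=4) + 1j * rng.normal(size=4)
    z, w = z / np.linalg.norm(z), w / np.linalg.norm(w)
    c = min(abs(np.vdot(z, w)), 1.0)
    theta = np.arccos(c)
    worst_gap = max(worst_gap, (1 - c) - theta**2 / 2)

print(worst_gap <= 0)   # -> True: the bound holds in every trial
```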
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9418767094612122, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/207343/sqrtaabcd-sqrtaabe-sqrtacde?answertab=oldest
|
# $\sqrt{A(ABCD)} =\sqrt{A(ABE)}+ \sqrt{A(CDE)}$
I am self-studying Euclidean Geometry, and I want to solve the following exercise.
Let $ABCD$ be a trapezoid with bases $AB, CD$ and lateral sides $AD, BC$. If $E$ is the point of intersection of its diagonals, then we have $$\sqrt{A(ABCD)} =\sqrt{A(ABE)}+ \sqrt{A(CDE)}$$ where $A(ABCD)$ denotes the area of the trapezoid $ABCD$, $A(ABE)$ denotes the area of the triangle $ABE$, and so on.
I was not able to prove that. I would appreciate any help.
-
## 1 Answer
Let me switch the notation to brackets: that is let $A(ABCD) = [ABCD]$. Then what we want to prove is
$\sqrt{[ABCD]} = \sqrt{[ABE]} + \sqrt{[CDE]}.$
Squaring both sides and subtracting $[ABE]$ and $[CDE]$ from both sides gives:
$[AED] + [BEC] = 2\sqrt{[ABE][CDE]}$.
Dividing both sides by $[BEC]$ gives:
$\dfrac{[AED]}{[BEC]} + 1 = 2 \sqrt{\dfrac{[ABE]}{[BEC]} \cdot \dfrac{[CDE]}{[BEC]}}$. (1)
We prove this instead (if this is true we can go back to the original equation given).
Proof. Since triangles $ABE$ and $BEC$ have the same height from $B$, the ratio of their areas is given by $AE/EC$. Similarly, $[CDE]/[BEC] = ED/BE$. But observe that by AA similarity, $ABE$ and $CDE$ are similar, and hence $BE/ED = AE/EC$. This means $\dfrac{AE}{EC} \cdot \dfrac{ED}{BE} = 1$, and so $\dfrac{[ABE]}{[BEC]} \dfrac{[CDE]}{[BEC]} = 1$. Thus the RHS of (1) equals 2.
Notice further that $[ABD] = [ABC]$ and subtracting $[ABE]$ from both sides gives
$[ABD]-[ABE] = [ABC]-[ABE] \Leftrightarrow [AED] = [BEC]$,
and so the LHS of (1) is: $\dfrac{[AED]}{[BEC]} + 1 = 2$. QED.
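A quick numerical check of the identity on one concrete trapezoid (the coordinates are my own choice, not part of the question):

```python
# Verify sqrt([ABCD]) = sqrt([ABE]) + sqrt([CDE]) on a sample trapezoid.
import numpy as np

def area(*pts):
    """Shoelace formula for the area of a simple polygon."""
    p = np.array(pts, dtype=float)
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

A, B = np.array([0.0, 0.0]), np.array([6.0, 0.0])
C, D = np.array([4.0, 3.0]), np.array([1.0, 3.0])   # CD parallel to AB

# E = intersection of the diagonals AC and BD
M = np.column_stack([C - A, -(D - B)])
s, _ = np.linalg.solve(M, B - A)
E = A + s * (C - A)

print(np.sqrt(area(A, B, C, D)))                        # ~3.6742
print(np.sqrt(area(A, B, E)) + np.sqrt(area(C, D, E)))  # ~3.6742
```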
-
I apologize for how rough the proof is - I will clean it up in a bit. – Michael Zhao Oct 4 '12 at 19:34
There is no need to clean it up. It is perfect! – spohreis Oct 4 '12 at 22:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428569078445435, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/61890?sort=newest
|
## (A very limited instance of) Lagrange’s Theorem’s converse and A_5
Suppose $G$ is a finite simple group and $|G|$ is a multiple of $60$. Does it follow that $G$ has a subgroup isomorphic to $A_{5}$? If so, can this be proven without using the Classification?
-
1
The "converse" in the header is overstated, since the question concerns only a very special type of divisibility. Aside from that minor point, it's hard to avoid quoting a big chunk of the classification work such as what's involved in determining all possible minimal nonabelian simple groups and their orders. – Jim Humphreys Apr 16 2011 at 17:07
Thanks. I fixed the title accordingly. – DavidLHarden Apr 16 2011 at 17:12
## 3 Answers
I think the answer to the question is yes, but it is very unlikely that it can be proved without using the classification of finite simple groups.
Note that $A_5$ of order 60 is the only simple group order for which this statement is true, because for all higher order simple groups $G$, there will be groups $L_2(p)$ with order divisible by $|G|$ that do not contain $G$ as a subgroup.
Let's quickly look at all families of finite simple groups. I hope someone will correct any mistakes I make!
The Suzuki groups (Lie type $^2B_2$) have order not divisible by 3, so we can forget them. All other finite simple groups have order divisible by 12, so their order is divisible by 60 if and only if it's divisible by 5.
The claim is clearly true for $A_n$, $n \ge 5$.
It is well-known that $L_2(q)$ contains $A_5$ if and only if $q \equiv 0$ or $\pm{1} \bmod 5$, which is equivalent to $|G|$ being divisible by 5 (a quick numerical illustration appears after the case-by-case discussion below).
$U_3(q)$, $L_3(q)$, $G_2(q)$, $^3D_4(q)$ all contain $L_2(q)$ and also have order divisible by 5 if and only if $q \equiv 0$ or $\pm{1} \bmod 5$.
$^2F_4(2^{2e+1})$ contains $^2F_4(2)$, which contains $A_5$.
$^2G_2(3^{2e+1})$ never has order divisible by 5.
$S_4(q)$ contains $L_2(q^2)$, which always contains $A_5$ for all $q$.
All remaining groups of Lie type contain $S_4(q)$ and hence contain $A_5$.
It is easily checked, for example by looking at their lists of maximal subgroups in the ATLAS or on
http://brauer.maths.qmul.ac.uk/Atlas/v3/
that the sporadic groups contain $A_5$.
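As promised above, here is a quick numerical illustration of the $L_2(q)$ case (a sketch, not a proof): for prime powers $q \ge 4$ it compares divisibility of $|L_2(q)| = q(q^2-1)/\gcd(2,q-1)$ by 60 with the congruence condition on $q$; the two flags always agree.

```python
from math import gcd

def is_prime_power(n):
    # crude test: n is a power of its smallest prime factor
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return n > 1  # n itself is prime

def psl2_order(q):
    return q * (q * q - 1) // gcd(2, q - 1)

for q in range(4, 50):          # L_2(q) is simple for prime powers q >= 4
    if is_prime_power(q):
        order = psl2_order(q)
        print(q, order, order % 60 == 0, q % 5 in (0, 1, 4))
```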
-
1
Hey! Don't leave us hanging like that! Gerhard "Ask Me About System Design" Paseman, 2011.04.17 – Gerhard Paseman Apr 17 2011 at 20:28
Good. I can breathe easier now. Gerhard "Really Likes To Breate Easy" Paseman, 2011.04.17 – Gerhard Paseman Apr 17 2011 at 22:16
For a layman (nay, this is already too generous, an ignoramus) as myself, this is a quite impressive answer. – Olivier Apr 18 2011 at 8:48
Thanks, but I am hoping somebody will read it critically, and highlight any dubious claims or possible inaccuracies! – Derek Holt Apr 18 2011 at 14:01
I am not sure what the answer to this question is offhand, but once you know the character table of a finite group G, it is, in principle, straightforward to determine whether G has a subgroup isomorphic to $A_5$. For G has such a subgroup if and only if G contains elements $x,y,z$ of respective orders 2, 3 and 5, with $xyz = 1$. This can be done from the character table, using "class algebra constant" calculations based on a formula which dates back at least as far as W. Burnside, and can be found in most texts on character theory of finite groups. The trick to checking this efficiently for a group whose character table you know is to try to choose the elements $x,y,$ and $z$ so that lots of irreducible characters vanish on at least one of $x,y,z.$ For example, all non-trivial irreducible characters of $A_5$ vanish on one of $x,y,z,$ when $x,y,z$ have those orders. It is hard to believe that this problem could be resolved without the classification of finite simple groups. A related result is a theorem of Graham Higman, who proved that $A_5$ is the only finite simple group which has a maximal subgroup which is dihedral of order 10. This did not use the classification of all finite simple groups, but did use the fact that Suzuki groups were the only simple groups of order prime to 3.
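To make the class-algebra-constant criterion concrete, here is a small sketch (mine, with the character table of $A_5$ itself hard-coded) that evaluates the standard structure-constant formula for the classes of orders 2, 3 and 5; the value comes out to 5, and any nonzero value certifies that such $x,y,z$ exist.

```python
import math

sqrt5 = math.sqrt(5)
phi, psi = (1 + sqrt5) / 2, (1 - sqrt5) / 2

# character table of A_5; classes 1A, 2A, 3A, 5A, 5B with the sizes below
sizes = {"1A": 1, "2A": 15, "3A": 20, "5A": 12, "5B": 12}
table = [
    {"1A": 1, "2A": 1,  "3A": 1,  "5A": 1,   "5B": 1},
    {"1A": 3, "2A": -1, "3A": 0,  "5A": phi, "5B": psi},
    {"1A": 3, "2A": -1, "3A": 0,  "5A": psi, "5B": phi},
    {"1A": 4, "2A": 0,  "3A": 1,  "5A": -1,  "5B": -1},
    {"1A": 5, "2A": 1,  "3A": -1, "5A": 0,   "5B": 0},
]
order_G = sum(sizes.values())  # 60

def class_constant(c1, c2, c3):
    # number of ways a fixed element of class c3 factors as x*y with x in c1, y in c2
    # (all classes of A_5 are real, so chi(g^{-1}) = chi(g))
    s = sum(chi[c1] * chi[c2] * chi[c3] / chi["1A"] for chi in table)
    return sizes[c1] * sizes[c2] * s / order_G

print(class_constant("2A", "3A", "5A"))  # 5.0 > 0: elements of orders 2, 3, 5 with xyz = 1 exist
```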
-
See http://en.wikipedia.org/wiki/N-group_%28finite_group_theory%29 where it talks about minimal simple groups. You'd need to check in particular that the example of 2x2 projective special linear groups can't have order divisible by 60. But I don't believe that.
Edit: I was however using the wrong formula for the order, which can indeed avoid divisibility by 5.
-
1
$PSL_2(2^p)$, $PSL_2(3^p)$, $PSL_2(p)$ with $p=2,3$ mod 5, $Sz(2^p)$ all have order not divisible by 60. – Junkie Apr 16 2011 at 7:50
1
It's not that easy. If the minimal simple groups are checked, and that says that the only one that has order equal to a multiple of 60 is A_5 itself, then it is still possible that some group larger than A_5 is not minimal among simple groups, but minimal among those whose order is a multiple of 60 (and hence a counterexample). Also, the list of minimal simple groups is incomplete as given here, since it doesn't include $PSL_{3}(3)$. – DavidLHarden Apr 16 2011 at 16:56
If my arithmetic is correct (not guaranteed), 5 doesn't divide the order of `$PSL_3(3)$`; the comment by Junkie should dispose of the other minimal simple groups. Still, as David points out there are further possibilities. It seems that 60 does divide the order of all but one of the 26 sporadic simple groups, so they need more scrutiny. Even assuming CFSG is in hand, what is the strategy for deciding whether or not `$A_5$` occurs as a subgroup when 60 divides? – Jim Humphreys Apr 16 2011 at 19:48
My strategy for disposing of the claim having failed, it seems I'm out of my depth. – Charles Matthews Apr 16 2011 at 20:48
My strategy was essentially to look at (known) maximal subgroups, as Derek Holt did. I am not as facile with the subject as he. – Junkie Apr 18 2011 at 4:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420486688613892, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/95639/finding-an-equation-for-a-circle-given-two-points/95676
|
Finding an equation for a circle given two points
No idea how to do this, I used to have these conic shapes committed to memory but I forget them already.
I am supposed to find an equation for the circle that has center $(-1, 4)$ and passes through $(3, -2)$.
I tried graphing it, but it didn't help since the point was not in a straight line with the center and I have no idea what the diameter is.
-
3
You aren't being given "two points" of the circle (that would be insufficient). You are being given the center and a point on the circle. Very different things. – Arturo Magidin Jan 1 '12 at 21:16
2
I notice that you say "the point was not in a straight line with the center". But, given any two points, there is some straight line that joins them. (The straight line may happen to not be parallel to either of the coordinate axes.) – idmercer Jan 1 '12 at 23:40
6 Answers
Recall the:
Distance Formula:
The distance between the points $(x_1,y_1)$ and $(x_2,y_2)$ is $$D=\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$
Example:
The radius of your circle is the distance between the points $(-1,4)$ and $(3,-2)$. Using the Distance Formula: $$D= \sqrt{ \bigl(3-(-1)\bigr)^2 + (-2-4)^2 } = \sqrt{ 4^2 + (-6)^2 } = \sqrt{ 16+36 } = \sqrt{ 52 } .$$
What is the equation of the circle?
It is important to realize what the "equation of the circle" means: a point $(x,y)$ is on the circle if and only if its coordinates $x$ and $y$ satisfy the equation. So, how do we get the equation? What is the relationship between the $x$ and $y$ coordinates of a point on the circle?
Well, let $(x,y)$ be a point on the circle. The big idea is:
$$\text{The distance from the point }(x,y)\text{ to the center }(-1,4)\text{ is }\sqrt{52}.$$
So, using the distance formula (with $(x_2,y_2)=(x,y)$ and $(x_1,y_1)=(-1,4)$) , it follows that $$\sqrt{52}=\sqrt{\bigl(x-(-1)\bigr)^2 +(y-4)^2}.$$
Or $$52=(x+1)^2 +(y-4)^2.$$
The shortcut would be to just use the following formula (But it's important to realize why you'd use it and where it comes from):
Equation of a Circle
The equation of the circle with center located at $(a,b)$ and with radius $r$ is $$r^2=(x-a)^2 +(y-b)^2$$ Note that the radius squared is on the left-hand side of the equation.
-
Am I supposed to have the distance formula memorized? – Jordan Jan 1 '12 at 21:32
4
@Jordan Yes. (Not really, if all you want to do is find the equation of the circle. You can use the formula.) But it's really just the Pythagorean Theorem (draw a right triangle with the two points forming its hypotenuse). – David Mitra Jan 1 '12 at 21:37
I don't understand how this is the pythagorean theorem. I can work it on on a graph but I don't see how the equation fits that at all or why the answer isn't the square root of 52, not just 52. – Jordan Jan 1 '12 at 21:41
@Jordan I meant the distance formula is an application of the Pythagorean Theorem. Let $c$ be the distance between two points $(x_1,y_1)$ and $(x_2,y_2)$. Call the distance between the $x$-coordinates $a$ and the distance between the $y$ coordinates $b$. Then $c^2=a^2+b^2$. And $a=|x_2-x_1|$, $b=|y_2-y_1|$. – David Mitra Jan 1 '12 at 21:44
@Jordan The radius of your circle is $\sqrt{52}$. The equation of the circle is $52=(x+1)^2+(y-4)^2$. – David Mitra Jan 1 '12 at 21:47
The equation of a circle with center $(a,b)$ and radius $r$ is $$(x-a)^2 + (y-b)^2 = r^2.$$
You know the center. You also know one point on the circle. The radius is the distance from the center of the circle to any one point on the circle.
So find the distance between $(-1,4)$ and $(3,-2)$ to get the radius. Then use the radius and the center to get the equation.
The diameter is, of course, twice the radius, but is irrelevant here.
-
I don't know how to find the difference between two points on a graph. – Jordan Jan 1 '12 at 21:29
@Jordan: The distance between two points (which in this case, are not both "on a graph"), not the difference. I suspect you do know it, you just don't remember it or haven't made the connections. In any case, Wikipedia has the formula – Arturo Magidin Jan 1 '12 at 22:58
In general a circle with center $(h,k)$ and radius $r$ can be expressed as:
$$(x-h)^2+(y-k)^2=r^2$$
In the case of our question, you have $(h,k)=(-1,4)$ in the above equation. You can then substitute $(3,-2)$ for $(x,y)$ to find $r$.
-
Well you should know (or look up in your textbook) that a circle is described by the equation $(x - x_0)^2 + (y - y_0)^2 = r^2$ where $(x_0, y_0)$ is the center and $r$ is the radius. So you see your circle should be $(x +1)^2 + (y - 4)^2 = r^2$. The only thing missing is $r^2$. But the problem tells you (3, -2) is a point on the circle. So plugging this into your equation, you will find your answer.
-
The equation of a circle with center $(a,b)$ and radius $r$ is $$(x-a)^2 + (y-b)^2 = r^2.$$
Here the center is $(a,b)=(-1,4)$, so $a=-1$, $b=4$. You also know one point on the circle. The radius is the distance from the center of the circle to any one point on the circle. The distance between $(-1,4)$ and $(3,-2)$ is $r=\sqrt{52}$, so $r^2=52$. Then use the radius and the center to get the equation.
$$(x+1)^2 + (y-4)^2 = 52$$
-
Perhaps if you understand what a circle is, it may help. A circle is really just all points on the plane equidistant from the center.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417992234230042, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2011/10/21/line-integrals/?like=1&_wpnonce=401286028f
|
# The Unapologetic Mathematician
## Line Integrals
We now define some particular kinds of integrals as special cases of our theory of integrals over manifolds. And the first such special case is that of a line integral.
Consider an oriented curve $c$ in the manifold $M$. We know that this is a singular $1$-cube, and so we can pair it off with a $1$-form $\alpha$. Specifically, we pull back $\alpha$ to $c^*\alpha$ on the interval $[0,1]$ and integrate.
More explicitly, the pullback $c^*\alpha$ is evaluated as
$\displaystyle\left[c^*\alpha\left(\frac{d}{dt}\right)\right](t_0)=\left[\alpha_{c(t_0)}\right]\left(c_{*t_0}\frac{d}{dt}\bigg\vert_{t_0}\right)$
That is, for a $t_0\in[0,1]$, we take the value $\alpha_{c(t_0)}\in\mathcal{T}^*_{c(t_0)}M$ of the $1$-form $\alpha$ at the point $c(t_0)\in M$ and the tangent vector $c'(t_0)\in\mathcal{T}_{c(t_0)}M$ and pair them off. This gives us a real-valued function which we can integrate over the interval.
So, why do we care about this particularly? In the presence of a metric, we have an equivalence between $1$-forms $\alpha$ and vector fields $F$. And specifically we know that the pairing $\alpha_{c(t)}\left(c'(t)\right)$ is equal to the inner product $\langle F(c(t)),c'(t)\rangle$ — this is how the equivalence is defined, after all. And thus the line integral looks like
$\displaystyle\int\limits_c\alpha=\int\limits_{[0,1]}\langle F(c(t)),c'(t)\rangle\,dt$
Often the inner product is written with a dot — usually called the “dot product” of vectors — in which case this takes the form
$\displaystyle\int\limits_{[0,1]}F(c(t))\cdot c'(t)\,dt$
We also often write $ds=c'(t)\,dt$ as a “vector differential-valued function”, in which case we can write
$\displaystyle\int\limits_{[0,1]}F\cdot ds$
Of course, we often parameterize a curve by a more general interval $I$ than $[0,1]$, in which case we write
$\displaystyle\int\limits_IF\cdot ds$
This expression may look familiar from multivariable calculus, where we first defined line integrals. We can now see how this definition is a special case of a much more general construction.
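As a concrete numerical illustration (my own sketch, with an arbitrarily chosen example): take $F(x,y)=(-y,x)$ and let $c$ be the unit circle parametrized over $[0,1]$; the line integral $\int_cF\cdot ds$ should come out to $2\pi$.

```python
import numpy as np

# F(x, y) = (-y, x); c(t) = (cos 2*pi*t, sin 2*pi*t) on [0, 1]; the integral should be 2*pi
t = np.linspace(0.0, 1.0, 20001)
cx, cy = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)                              # c(t)
dcx, dcy = -2 * np.pi * np.sin(2 * np.pi * t), 2 * np.pi * np.cos(2 * np.pi * t)   # c'(t)

integrand = (-cy) * dcx + cx * dcy          # <F(c(t)), c'(t)>
print(np.trapz(integrand, t), 2 * np.pi)    # both 6.283185...
```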
Posted by John Armstrong | Differential Geometry, Geometry
## 5 Comments »
1. [...] any -form on the image of — in particular, given an defined on — we can define the line integral of over . We already have a way of evaluating line integrals: pull the -form back to the parameter [...]
Pingback by | October 24, 2011 | Reply
2. [...] flip side of the line integral is the surface [...]
Pingback by | October 27, 2011 | Reply
3. [...] Stokes’ theorem only works in dimension , where the differential can take us straight from a line integral over a -dimensional region to a surface integral over an -dimensional [...]
Pingback by | November 23, 2011 | Reply
4. [...] if we have a curve that starts at a point and ends at a point then fundamental theorem of line integrals makes it easy to [...]
Pingback by | December 15, 2011 | Reply
5. [...] We calculate the work done by a force in moving a particle along a path is given by the line integral [...]
Pingback by | January 13, 2012 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 28, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9200475215911865, "perplexity_flag": "head"}
|
http://terrytao.wordpress.com/tag/exterior-algebra/
|
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## Supercommutative gaussian integration, and the gaussian unitary ensemble
19 February, 2013 in expository, math.CA, math.PR, math.QA, math.RA | Tags: Berezin integration, exterior algebra, gaussian unitary ensemble, Grassmann algebra, supersymmetry | by Terence Tao | 13 comments
The fundamental notions of calculus, namely differentiation and integration, are often viewed as being the quintessential concepts in mathematical analysis, as their standard definitions involve the concept of a limit. However, it is possible to capture most of the essence of these notions by purely algebraic means (almost completely avoiding the use of limits, Riemann sums, and similar devices), which turns out to be useful when trying to generalise these concepts to more abstract situations in which it becomes convenient to permit the underlying number systems involved to be something other than the real or complex numbers, even if this makes many standard analysis constructions unavailable. For instance, the algebraic notion of a derivation often serves as a substitute for the analytic notion of a derivative in such cases, by abstracting out the key algebraic properties of differentiation, namely linearity and the Leibniz rule (also known as the product rule).
Abstract algebraic analogues of integration are less well known, but can still be developed. To motivate such an abstraction, consider the integration functional ${I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$ from the space ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of complex-valued Schwarz functions ${f: {\bf R} \rightarrow {\bf C}}$ to the complex numbers, defined by
$\displaystyle I(f) := \int_{\bf R} f(x)\ dx$
where the integration on the right is the usual Lebesgue integral (or improper Riemann integral) from analysis. This functional obeys two obvious algebraic properties. Firstly, it is linear over ${{\bf C}}$, thus
$\displaystyle I(cf) = c I(f) \ \ \ \ \ (1)$
and
$\displaystyle I(f+g) = I(f) + I(g) \ \ \ \ \ (2)$
for all ${f,g \in {\mathcal S}({\bf R} \rightarrow {\bf C})}$ and ${c \in {\bf C}}$. Secondly, it is translation invariant, thus
$\displaystyle I(\tau_h f) = I(f) \ \ \ \ \ (3)$
for all ${h \in {\bf C}}$, where ${\tau_h f(x) := f(x-h)}$ is the translation of ${f}$ by ${h}$. Motivated by the uniqueness theory of Haar measure, one might expect that these two axioms already uniquely determine ${I}$ after one sets a normalisation, for instance by requiring that
$\displaystyle I( x \mapsto e^{-\pi x^2} ) = 1. \ \ \ \ \ (4)$
This is not quite true as stated (one can modify the proof of the Hahn-Banach theorem, after first applying a Fourier transform, to create pathological translation-invariant linear functionals on ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ that are not multiples of the standard integration functional), but if one adds a mild analytical axiom, such as continuity of ${I}$ (using the usual Schwartz topology on ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$), then the above axioms are enough to uniquely pin down the notion of integration. Indeed, if ${I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$ is a continuous linear functional that is translation invariant, then from the linearity and translation invariance axioms one has
$\displaystyle I( \frac{\tau_h f - f}{h} ) = 0$
for all ${f \in {\mathcal S}({\bf R} \rightarrow {\bf C})}$ and non-zero reals ${h}$. If ${f}$ is Schwartz, then as ${h \rightarrow 0}$, one can verify that the Newton quotients ${\frac{\tau_h f - f}{h}}$ converge in the Schwartz topology to ${-f'}$, the negative of the derivative of ${f}$, so by the continuity axiom (and linearity) one has
$\displaystyle I(f') = 0.$
Next, note that any Schwartz function of integral zero has an antiderivative which is also Schwartz, and so ${I}$ annihilates all zero-integral Schwartz functions, and thus must be a scalar multiple of the usual integration functional. Using the normalisation (4), we see that ${I}$ must therefore be the usual integration functional, giving the claimed uniqueness.
Motivated by the above discussion, we can define the notion of an abstract integration functional ${I: X \rightarrow R}$ taking values in some vector space ${R}$, and applied to inputs ${f}$ in some other vector space ${X}$ that enjoys a linear action ${h \mapsto \tau_h}$ (the “translation action”) of some group ${V}$, as being a functional which is both linear and translation invariant, thus one has the axioms (1), (2), (3) for all ${f,g \in X}$, scalars ${c}$, and ${h \in V}$. The previous discussion then considered the special case when ${R = {\bf C}}$, ${X = {\mathcal S}({\bf R} \rightarrow {\bf C})}$, ${V = {\bf R}}$, and ${\tau}$ was the usual translation action.
Once we have performed this abstraction, we can now present analogues of classical integration which bear very little analytic resemblance to the classical concept, but which still have much of the algebraic structure of integration. Consider for instance the situation in which we keep the complex range ${R = {\bf C}}$, the translation group ${V = {\bf R}}$, and the usual translation action ${h \mapsto \tau_h}$, but we replace the space ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of Schwartz functions by the space ${Poly_{\leq d}({\bf R} \rightarrow {\bf C})}$ of polynomials ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ of degree at most ${d}$ with complex coefficients, where ${d}$ is a fixed natural number; note that this space is translation invariant, so it makes sense to talk about an abstract integration functional ${I: Poly_{\leq d}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$. Of course, one cannot apply traditional integration concepts to non-zero polynomials, as they are not absolutely integrable. But one can repeat the previous arguments to show that any abstract integration functional must annihilate derivatives of polynomials of degree at most ${d}$:
$\displaystyle I(f') = 0 \hbox{ for all } f \in Poly_{\leq d}({\bf R} \rightarrow {\bf C}). \ \ \ \ \ (5)$
Clearly, every polynomial of degree at most ${d-1}$ is thus annihilated by ${I}$, which makes ${I}$ a scalar multiple of the functional that extracts the top coefficient ${a_d}$ of a polynomial, thus if one sets a normalisation
$\displaystyle I( x \mapsto x^d ) = c$
for some constant ${c}$, then one has
$\displaystyle I( x \mapsto a_0 + a_1 x + \ldots + a_d x^d ) = c a_d \ \ \ \ \ (6)$
for any polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$. So we see that up to a normalising constant, the operation of extracting the top order coefficient of a polynomial of fixed degree serves as the analogue of integration. In particular, despite the fact that integration is supposed to be the “opposite” of differentiation (as indicated for instance by (5)), we see in this case that integration is basically (${d}$-fold) differentiation; indeed, compare (6) with the identity
$\displaystyle (\frac{d}{dx})^d ( a_0 + a_1 x + \ldots + a_d x^d ) = d! a_d.$
In particular, we see, in contrast to the usual Lebesgue integral, the integration functional (6) can be localised to an arbitrary location: one only needs to know the germ of the polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ at a single point ${x_0}$ in order to determine the value of the functional (6). This localisation property may initially seem at odds with the translation invariance, but the two can be reconciled thanks to the extremely rigid nature of the class ${Poly_{\leq d}({\bf R} \rightarrow {\bf C})}$, in contrast to the Schwartz class ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ which admits bump functions and so can generate local phenomena that can only be detected in small regions of the underlying spatial domain, and which therefore forces any translation-invariant integration functional on such function classes to measure the function at every single point in space.
The reversal of the relationship between integration and differentiation is also reflected in the fact that the abstract integration operation on polynomials interacts with the scaling operation ${\delta_\lambda f(x) := f(x/\lambda)}$ in essentially the opposite way from the classical integration operation. Indeed, for classical integration on ${{\bf R}^d}$, one has
$\displaystyle \int_{{\bf R}^d} f(x/\lambda)\ dx = \lambda^d \int f(x)\ dx$
for Schwartz functions ${f \in {\mathcal S}({\bf R}^d \rightarrow {\bf C})}$, and so in this case the integration functional ${I(f) := \int_{{\bf R}^d} f(x)\ dx}$ obeys the scaling law
$\displaystyle I( \delta_\lambda f ) = \lambda^d I(f).$
In contrast, the abstract integration operation defined in (6) obeys the opposite scaling law
$\displaystyle I( \delta_\lambda f ) = \lambda^{-d} I(f). \ \ \ \ \ (7)$
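These two algebraic properties are easy to confirm symbolically; the following sketch (mine, using sympy with the normalisation ${c=1}$ and ${d=3}$) checks that the top-coefficient functional is translation invariant and obeys the scaling law (7).

```python
import sympy as sp

x, h, lam = sp.symbols("x h lambda")
d = 3
a = sp.symbols("a0:%d" % (d + 1))
f = sum(a[k] * x ** k for k in range(d + 1))

def I(poly):
    # the abstract integral on Poly_{<= d}: extract the coefficient of x**d (normalisation c = 1)
    return sp.Poly(sp.expand(poly), x).coeff_monomial(x ** d)

print(sp.simplify(I(f.subs(x, x - h)) - I(f)))                   # 0: translation invariance
print(sp.simplify(I(f.subs(x, x / lam)) - lam ** (-d) * I(f)))   # 0: the scaling law (7)
```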
Remark 1 One way to interpret what is going on is to view the integration operation (6) as a renormalised version of integration. A polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ is, in general, not absolutely integrable, and the partial integrals
$\displaystyle \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx$
diverge as ${N \rightarrow \infty}$. But if one renormalises these integrals by the factor ${\frac{1}{N^{d+1}}}$, then one recovers convergence,
$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{d+1}} \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx = \frac{1}{d+1} a_d$
thus giving an interpretation of (6) as a renormalised classical integral, with the renormalisation being responsible for the unusual scaling relationship in (7). However, this interpretation is a little artificial, and it seems that it is best to view functionals such as (6) from an abstract algebraic perspective, rather than to try to force an analytic interpretation on them.
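The renormalised limit in Remark 1 can also be read off symbolically: the partial integral is a polynomial of degree ${d+1}$ in ${N}$, and its top coefficient is exactly the limit ${\frac{1}{d+1} a_d}$. A short sketch (mine, for ${d=3}$):

```python
import sympy as sp

N, x = sp.symbols("N x", positive=True)
d = 3
a = sp.symbols("a0:%d" % (d + 1))
f = sum(a[k] * x ** k for k in range(d + 1))

partial = sp.integrate(f, (x, 0, N))                      # a polynomial of degree d+1 in N
print(sp.Poly(partial, N).coeff_monomial(N ** (d + 1)))   # a3/4, i.e. a_d/(d+1)
```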
Now we return to the classical Lebesgue integral
$\displaystyle I(f) := \int_{\bf R} f(x)\ dx. \ \ \ \ \ (8)$
As noted earlier, this integration functional has a translation invariance associated to translations along the real line ${{\bf R}}$, as well as a dilation invariance by real dilation parameters ${\lambda>0}$. However, if we refine the class ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of functions somewhat, we can obtain a stronger family of invariances, in which we allow complex translations and dilations. More precisely, let ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ denote the space of all functions ${f: {\bf C} \rightarrow {\bf C}}$ which are entire (or equivalently, are given by a Taylor series with an infinite radius of convergence around the origin) and also admit rapid decay in a sectorial neighbourhood of the real line, or more precisely there exists an ${\epsilon>0}$ such that for every ${A > 0}$ there exists ${C_A > 0}$ such that one has the bound
$\displaystyle |f(z)| \leq C_A (1+|z|)^{-A}$
whenever ${|\hbox{Im}(z)| \leq A + \epsilon |\hbox{Re}(z)|}$. For want of a better name, we shall call elements of this space Schwartz entire functions. This is clearly a complex vector space. A typical example of a Schwartz entire function are the complex gaussians
$\displaystyle f(z) := e^{-\pi (az^2 + 2bz + c)}$
where ${a,b,c}$ are complex numbers with ${\hbox{Re}(a) > 0}$. From the Cauchy integral formula (and its derivatives) we see that if ${f}$ lies in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$, then the restriction of ${f}$ to the real line lies in ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$; conversely, from analytic continuation we see that every function in ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ has at most one extension in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$. Thus one can identify ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ with a subspace of ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$, and in particular the integration functional (8) is inherited by ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$, and by abuse of notation we denote the resulting functional ${I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}}$ as ${I}$ also. Note, in analogy with the situation with polynomials, that this abstract integration functional is somewhat localised; one only needs to evaluate the function ${f}$ on the real line, rather than the entire complex plane, in order to compute ${I(f)}$. This is consistent with the rigid nature of Schwartz entire functions, as one can uniquely recover the entire function from its values on the real line by analytic continuation.
Of course, the functional ${I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}}$ remains translation invariant with respect to real translation:
$\displaystyle I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf R}.$
However, thanks to contour shifting, we now also have translation invariance with respect to complex translation:
$\displaystyle I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf C},$
where of course we continue to define the translation operator ${\tau_h}$ for complex ${h}$ by the usual formula ${\tau_h f(x) := f(x-h)}$. In a similar vein, we also have the scaling law
$\displaystyle I(\delta_\lambda f) = \lambda I(f)$
for any ${f \in \mathcal{SE}({\bf C} \rightarrow {\bf C})}$, if ${\lambda}$ is a complex number sufficiently close to ${1}$ (where “sufficiently close” depends on ${f}$, and more precisely depends on the sectoral aperture parameter ${\epsilon}$ associated to ${f}$); again, one can verify that ${\delta_\lambda f}$ lies in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ for ${\lambda}$ sufficiently close to ${1}$. These invariances (which relocalise the integration functional ${I}$ onto other contours than the real line ${{\bf R}}$) are very useful for computing integrals, and in particular for computing gaussian integrals. For instance, the complex translation invariance tells us (after shifting by ${b/a}$) that
$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi a z^2} )$
when ${a,b,c \in {\bf C}}$ with ${\hbox{Re}(a) > 0}$, and then an application of the complex scaling law (and a continuity argument, observing that there is a compact path connecting ${a}$ to ${1}$ in the right half plane) gives
$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi z^2} )$
using the branch of ${a^{-1/2}}$ on the right half-plane for which ${1^{-1/2} = 1}$. Using the normalisation (4) we thus have
$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)}$
giving the usual gaussian integral formula
$\displaystyle \int_{\bf R} e^{-\pi (ax^2 + 2bx + c)}\ dx = a^{-1/2} e^{-\pi (c-b^2/a)}. \ \ \ \ \ (9)$
This is a basic illustration of the power that a large symmetry group (in this case, the complex homothety group) can bring to bear on the task of computing integrals.
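Formula (9) is also easy to spot-check numerically; in the sketch below (mine), the parameters are arbitrary complex numbers with ${\hbox{Re}(a)>0}$, and numpy's principal branch of ${a^{-1/2}}$ is the branch specified above.

```python
import numpy as np

a, b, c = 1.5 + 0.5j, 0.3 - 0.2j, 0.1 + 0.4j    # any a, b, c with Re(a) > 0

x = np.linspace(-30, 30, 200001)
lhs = np.trapz(np.exp(-np.pi * (a * x ** 2 + 2 * b * x + c)), x)
rhs = a ** -0.5 * np.exp(-np.pi * (c - b ** 2 / a))
print(lhs, rhs)   # agree to many digits
```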
One can extend this sort of analysis to higher dimensions. For any natural number ${n \geq 1}$, let ${\mathcal{SE}({\bf C}^n \rightarrow {\bf C})}$ denote the space of all functions ${f: {\bf C}^n \rightarrow {\bf C}}$ which is jointly entire in the sense that ${f(z_1,\ldots,z_n)}$ can be expressed as a Taylor series in ${z_1,\ldots,z_n}$ which is absolutely convergent for all choices of ${z_1,\ldots,z_n}$, and such that there exists an ${\epsilon > 0}$ such that for any ${A>0}$ there is ${C_A>0}$ for which one has the bound
$\displaystyle |f(z)| \leq C_A (1+|z|)^{-A}$
whenever ${|\hbox{Im}(z_j)| \leq A + \epsilon |\hbox{Re}(z_j)|}$ for all ${1 \leq j \leq n}$, where ${z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}}$ and ${|z| := (|z_1|^2+\ldots+|z_n|^2)^{1/2}}$. Again, we call such functions Schwartz entire functions; a typical example is the function
$\displaystyle f(z) := e^{-\pi (z^T A z + 2b^T z + c)}$
where ${A}$ is an ${n \times n}$ complex symmetric matrix with positive definite real part, ${b}$ is a vector in ${{\bf C}^n}$, and ${c}$ is a complex number. We can then define an abstract integration functional ${I: \mathcal{SE}({\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}}$ by integration on the real slice ${{\bf R}^n}$:
$\displaystyle I(f) := \int_{{\bf R}^n} f(x)\ dx$
where ${dx}$ is the usual Lebesgue measure on ${{\bf R}^n}$. By contour shifting in each of the ${n}$ variables ${z_1,\ldots,z_n}$ separately, we see that ${I}$ is invariant with respect to complex translations of each of the ${z_j}$ variables, and is thus invariant under translating the joint variable ${z}$ by ${{\bf C}^n}$. One can also verify the scaling law
$\displaystyle I(\delta_A f) = \hbox{det}(A) I(f)$
for ${n \times n}$ complex matrices ${A}$ sufficiently close to the identity, where ${\delta_A f(z) := f(A^{-1} z)}$. This can be seen for shear transformations ${A}$ by Fubini’s theorem and the aforementioned translation invariance, while for diagonal transformations near the identity this can be seen from ${n}$ applications of the one-dimensional scaling law, and the general case then follows by composition. Among other things, these laws then easily lead to the higher-dimensional generalisation
$\displaystyle \int_{{\bf R}^n} e^{-\pi (x^T A x + 2 b^T x + c)}\ dx = \hbox{det}(A)^{-1/2} e^{-\pi (c-b^T A^{-1} b)} \ \ \ \ \ (10)$
whenever ${A}$ is a complex symmetric matrix with positive definite real part, ${b}$ is a vector in ${{\bf C}^n}$, and ${c}$ is a complex number, basically by repeating the one-dimensional argument sketched earlier. Here, we choose the branch of ${\hbox{det}(A)^{-1/2}}$ for all matrices ${A}$ in the indicated class for which ${\hbox{det}(1)^{-1/2} = 1}$.
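The higher-dimensional formula (10) can be spot-checked the same way; the sketch below (mine) takes ${n=2}$ with a real symmetric positive definite ${A}$ and real ${b,c}$, so that a plain grid quadrature over ${{\bf R}^2}$ suffices.

```python
import numpy as np

A = np.array([[2.0, 0.7], [0.7, 1.5]])    # symmetric, positive definite
b = np.array([0.3, -0.4])
c = 0.2

s = np.linspace(-12, 12, 1601)
X, Y = np.meshgrid(s, s, indexing="ij")
quad_form = (A[0, 0] * X ** 2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y ** 2
             + 2 * (b[0] * X + b[1] * Y) + c)
lhs = np.trapz(np.trapz(np.exp(-np.pi * quad_form), s, axis=1), s)

rhs = np.linalg.det(A) ** -0.5 * np.exp(-np.pi * (c - b @ np.linalg.solve(A, b)))
print(lhs, rhs)   # agree to many digits
```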
Now we turn to an integration functional suitable for computing complex gaussian integrals such as
$\displaystyle \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z}, \ \ \ \ \ (11)$
where ${z}$ is now a complex variable
$\displaystyle z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix},$
${z^\dagger}$ is the adjoint
$\displaystyle z^\dagger := (\overline{z_1},\ldots, \overline{z_n}),$
${A}$ is a complex ${n \times n}$ matrix with positive definite Hermitian part, ${b, \tilde b}$ are column vectors in ${{\bf C}^n}$, ${c}$ is a complex number, and ${dz d\overline{z} = \prod_{j=1}^n 2 d\hbox{Re}(z_j) d\hbox{Im}(z_j)}$ is ${2^n}$ times Lebesgue measure on ${{\bf C}^n}$. (The factors of two here turn out to be a natural normalisation, but they can be ignored on a first reading.) As we shall see later, such integrals are relevant when performing computations on the Gaussian Unitary Ensemble (GUE) in random matrix theory. Note that the integrand here is not complex analytic due to the presence of the complex conjugates. However, this can be dealt with by the trick of replacing the complex conjugate ${\overline{z}}$ by a variable ${z^*}$ which is formally conjugate to ${z}$, but which is allowed to vary independently of ${z}$. More precisely, let ${\mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C})}$ be the space of all functions ${f: (z,z^*) \mapsto f(z,z^*)}$ of two independent ${n}$-tuples
$\displaystyle z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}, z^* = \begin{pmatrix} z_1^* \\ \vdots \\ z_n^* \end{pmatrix}$
of complex variables, which is jointly entire in all ${2n}$ variables (in the sense defined previously, i.e. there is a joint Taylor series that is absolutely convergent for all independent choices of ${z, z^* \in {\bf C}^n}$), and such that there is an ${\epsilon>0}$ such that for every ${A>0}$ there is ${C_A>0}$ such that one has the bound
$\displaystyle |f(z,z^*)| \leq C_A (1 + |z|)^{-A}$
whenever ${|z^* - \overline{z}| \leq A + \epsilon |z|}$. We will call such functions Schwartz analytic. Note that the integrand in (11) is Schwartz analytic when ${A}$ has positive definite Hermitian part, if we reinterpret ${z^\dagger}$ as the transpose of ${z^*}$ rather than as the adjoint of ${z}$ in order to make the integrand entire in ${z}$ and ${z^*}$. We can then define an abstract integration functional ${I: \mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}}$ by the formula
$\displaystyle I(f) := \int_{{\bf C}^n} f(z,\overline{z})\ dz d\overline{z}, \ \ \ \ \ (12)$
thus ${I}$ can be localised to the slice ${\{ (z,\overline{z}): z \in {\bf C}^n\}}$ of ${{\bf C}^n \times {\bf C}^n}$ (though, as with previous functionals, one can use contour shifting to relocalise ${I}$ to other slices also.) One can also write this integral as
$\displaystyle I(f) = 2^n \int_{{\bf R}^n \times {\bf R}^n} f(x+iy, x-iy)\ dx dy$
and note that the integrand here is a Schwartz entire function on ${{\bf C}^n \times {\bf C}^n}$, thus linking the Schwartz analytic integral with the Schwartz entire integral. Using this connection, one can verify that this functional ${I}$ is invariant with respect to translating ${z}$ and ${z^*}$ by independent shifts in ${{\bf C}^n}$ (thus giving a ${{\bf C}^n \times {\bf C}^n}$ translation symmetry), and one also has the independent dilation symmetry
$\displaystyle I(\delta_{A,B} f) = \hbox{det}(A) \hbox{det}(B) I(f)$
for ${n \times n}$ complex matrices ${A,B}$ that are sufficiently close to the identity, where ${\delta_{A,B} f(z,z^*) := f(A^{-1} z, B^{-1} z^*)}$. Arguing as before, we can then compute (11) as
$\displaystyle \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z} = \hbox{det}(A)^{-1} e^{-2\pi (c - b^\dagger A^{-1} \tilde b)}. \ \ \ \ \ (13)$
In particular, this gives an integral representation for the determinant-reciprocal ${\hbox{det}(A)^{-1}}$ of a complex ${n \times n}$ matrix with positive definite Hermitian part, in terms of gaussian expressions in which ${A}$ only appears linearly in the exponential:
$\displaystyle \hbox{det}(A)^{-1} = \int_{{\bf C}^n} e^{-2\pi z^\dagger A z}\ dz d\overline{z}.$
This formula is then convenient for computing statistics such as
$\displaystyle \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1}$
for random matrices ${W_n}$ drawn from the Gaussian Unitary Ensemble (GUE), and some choice of spectral parameter ${E+i\eta}$ with ${\eta>0}$; we review this computation later in this post. By the trick of matrix differentiation of the determinant (as reviewed in this recent blog post), one can also use this method to compute matrix-valued statistics such as
$\displaystyle \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1} (W_n-E-i\eta)^{-1}.$
However, if one restricts attention to classical integrals over real or complex (and in particular, commuting or bosonic) variables, it does not seem possible to easily eradicate the negative determinant factors in such calculations, which is unfortunate because many statistics of interest in random matrix theory, such as the expected Stieltjes transform
$\displaystyle \mathop{\bf E} \frac{1}{n} \hbox{tr} (W_n-E-i\eta)^{-1},$
which is the Stieltjes transform of the density of states. However, it turns out (as I learned recently from Peter Sarnak and Tom Spencer) that it is possible to cancel out these negative determinant factors by balancing the bosonic gaussian integrals with an equal number of fermionic gaussian integrals, in which one integrates over a family of anticommuting variables. These fermionic integrals are closer in spirit to the polynomial integral (6) than to Lebesgue type integrals, and in particular obey a scaling law which is inverse to the Lebesgue scaling (in particular, a linear change of fermionic variables ${\zeta \mapsto A \zeta}$ ends up transforming a fermionic integral by ${\hbox{det}(A)}$ rather than ${\hbox{det}(A)^{-1}}$), which conveniently cancels out the reciprocal determinants in the previous calculations. Furthermore, one can combine the bosonic and fermionic integrals into a unified integration concept, known as the Berezin integral (or Grassmann integral), in which one integrates functions of supervectors (vectors with both bosonic and fermionic components), and is of particular importance in the theory of supersymmetry in physics. (The prefix “super” in physics means, roughly speaking, that the object or concept that the prefix is attached to contains both bosonic and fermionic aspects.) When one applies this unified integration concept to gaussians, this can lead to quite compact and efficient calculations (provided that one is willing to work with “super”-analogues of various concepts in classical linear algebra, such as the supertrace or superdeterminant).
Abstract integrals of the flavour of (6) arose in quantum field theory, when physicists sought to formally compute integrals of the form
$\displaystyle \int F( x_1, \ldots, x_n, \xi_1, \ldots, \xi_m )\ dx_1 \ldots dx_n d\xi_1 \ldots d\xi_m \ \ \ \ \ (14)$
where ${x_1,\ldots,x_n}$ are familiar commuting (or bosonic) variables (which, in particular, can often be localised to be scalar variables taking values in ${{\bf R}}$ or ${{\bf C}}$), while ${\xi_1,\ldots,\xi_m}$ were more exotic anticommuting (or fermionic) variables, taking values in some vector space of fermions. (As we shall see shortly, one can formalise these concepts by working in a supercommutative algebra.) The integrand ${F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)}$ was a formally analytic function of ${x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}$, in that it could be expanded as a (formal, noncommutative) power series in the variables ${x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}$. For functions ${F(x_1,\ldots,x_n)}$ that depend only on bosonic variables, it is certainly possible for such analytic functions to be in the Schwartz class and thus fall under the scope of the classical integral, as discussed previously. However, functions ${F(\xi_1,\ldots,\xi_m)}$ that depend on fermionic variables ${\xi_1,\ldots,\xi_m}$ behave rather differently. Indeed, a fermonic variable ${\xi}$ must anticommute with itself, so that ${\xi^2 = 0}$. In particular, any power series in ${\xi}$ terminates after the linear term in ${\xi}$, so that a function ${F(\xi)}$ can only be analytic in ${\xi}$ if it is a polynomial of degree at most ${1}$ in ${\xi}$; more generally, an analytic function ${F(\xi_1,\ldots,\xi_m)}$ of ${m}$ fermionic variables ${\xi_1,\ldots,\xi_m}$ must be a polynomial of degree at most ${m}$, and an analytic function ${F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)}$ of ${n}$ bosonic and ${m}$ fermionic variables can be Schwartz in the bosonic variables but will be polynomial in the fermonic variables. As such, to interpret the integral (14), one can use classical (Lebesgue) integration (or the variants discussed above for integrating Schwartz entire or Schwartz analytic functions) for the bosonic variables, but must use abstract integrals such as (6) for the fermonic variables, leading to the concept of Berezin integration mentioned earlier.
In this post I would like to set out some of the basic algebraic formalism of Berezin integration, particularly with regards to integration of gaussian-type expressions, and then show how this formalism can be used to perform computations involving GUE (for instance, one can compute the density of states of GUE by this machinery without recourse to the theory of orthogonal polynomials). The use of supersymmetric gaussian integrals to analyse ensembles such as GUE appears in the work of Efetov (and was also proposed in the slightly earlier works of Parisi-Sourlas and McKane, with a related approach also appearing in the work of Wegner); the material here is adapted from this survey of Mirlin, as well as the later papers of Disertori-Pinson-Spencer and of Disertori.
Read the rest of this entry »
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 274, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916282594203949, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/88540/how-to-motivate-and-present-epsilon-delta-proofs-to-undergraduates/88570
|
## How to motivate and present epsilon-delta proofs to undergraduates?
This would seem to be a common question, but I am surprised not to see it already asked and answered on MO!
I am teaching an undergraduate course, and I want to teach them to construct basic epsilon-delta proofs, say, of $\lim_{x \rightarrow 3} x^2 = 9$ and $\lim_{x \rightarrow 4} x^2 \neq 17$. (Elementary, continuous functions only.) This is a serious stumbling block for many students, with good reason, and I anticipate it will be for mine as well.
Do other MO'ers have suggestions (beyond what I can find in typical calculus books) for presenting epsilon-delta for the first time? Any success stories to share?
Thank you very much! --Frank
(Background: I am teaching a discrete math course to American undergraduates who have already had a year of calculus. Whether $\epsilon-\delta$ is on topic for discrete math is perhaps questionable, but we did material on making sense of statements with lots of quantifiers, and also an introduction to techniques of proof, and so the material seemed like a natural fit. I should also mention that I intend to test the students on this material and not just expose them to it.)
-
5
Why surprise? MO is for research mathematics, and generally not pedagogy, especially not undergraduate pedagogy. – Gerald Edgar Feb 15 2012 at 19:20
14
@Gerald: As I verified before posting my question, plenty of undergraduate pedagogy questions have been asked here, highly upvoted, given interesting answers, and not closed. – Frank Thorne Feb 15 2012 at 19:36
2
@Tony: You could well be correct. After discussing why statements like "There exists x such that for all y $x = 1 + y$" are false, epsilon-delta seemed like a good place to go. I thought I could do this quickly, which has proved rather naive. Perhaps I will not do it next time I teach discrete math (and especially since there is so much other good stuff.) However, I told the students that we would cover this material, and my feeling is that it would be unwise to backtrack on this. – Frank Thorne Feb 15 2012 at 19:41
2
@Tony: I couldn't disagree more. The epsilon-delta definition of continuity is a natural example of nested quantifiers, something that shows up everywhere in discrete math. Yes, of course it's hard, but that's precisely what makes it useful and powerful! – JeffE Feb 15 2012 at 20:43
1
I think if you are just wanting to get people used to epsilon and delta type thinking then it is worth motivating the discussion with sequences. For example you can ask what is meant when we write out 3.141592... and similarly for any infinite sum. This may be a disadvantage if you were wanting to introduce continuity but seems a bit more discrete friendly. – Callan McGill Feb 16 2012 at 5:29
## 9 Answers
Epsilon-delta represents a pair of quantifiers (for all ... there exists ...). Challenge-response. The discrete mathematics take ought to be "hey, this is like a game", because if you iterate the quantification you are actually talking about a game-like structure. In other words, the teaching point is something like the transition from "if you want the answer to agree to five decimal places then you have to go far enough towards the limit", to "I know I can always respond to your challenge because I have a strategy!" Proof that a limit exists is the same as showing that there is such a strategy: but NB that the strategy is no more constructive than the proof is.
In other words talking about limits is just discussion of existence proofs for certain kinds of low-level strategies. Roll over Weierstrass!
-
3
This is how we were taught epsilon-delta, where our opponent was "our worst enemy". – Harry Gindi Feb 15 2012 at 21:41
This is how epsilon-delta was first introduced to me as a student, too. You're right that it may work nicely in a Discrete Math class! – Jon Bannon Feb 15 2012 at 21:59
Fantastic. .... – Frank Thorne Feb 16 2012 at 14:26
For some reason, students I teach always love epsilon-delta (not that they write good epsilon-delta proofs per se), and the more "wrong" I teach it, the more they enjoy it. The "wrong" thing that I like to do is to define real numbers via Cauchy sequences right at the beginning, at least in a hand-wavy way.
Calling a real number "real" is Orwellian, really- none of you have ever seen a real number. You might estimate pi to 100 or 1000 or a million places after the decimal point, but you can never write it all down- you never know precisely what it is. Even numbers like 0 and 1 are unknown as real numbers. You can write "0.00000..." until you're blue in the face, but you'll never know if the number you've written was actually zero or not, because there might be a sneaky "...000001..." coming up just around the corner to bite you.
So real numbers come with "fuzz". They're inherently "fuzz". There's no way around it. You can pretend they're points on a line, and crash face-first into Zeno's paradoxes, or you can accept their fuzziness and work with it; and I argue that calculus is none other than that "second approach".
So my "epsilon" is the fuzz. It's all those digits of "pi" (or of "zero") you never wrote down. It's the tree falling when nobody is there to hear it. It's the gap between human knowledge and universal truth. As such, it's part of what a real number is- a real number (as observed by mortal beings) comes with fuzz. A continuous function from the reals to the reals, then, has to "respect the fuzz".
Anyway, counter-intuitive though it is (perhaps), this motivation has worked wonders for me in practice. And so I keep using it, and keep getting excited about it, and it's the best part of the course, year after year.
Added: Interpreting the verb "to motivate" in another way, I always discuss the history of the ideas in some depth (I learnt it myself from wikipedia and books on the history of mathematics), and just how much people struggled to find the "right" definition, with no success, until Bernard Bolzano (primarily a Catholic priest!) finally hit upon an idea that worked in 1810. What idea were they trying to capture? Why was it so hard? How come it took 2500 years (Zeno of Elea to Bolzano) to find the right idea?
I'll also discuss the definition having been reworked and distilled by many many people- first its inventors, then mathematicians, then textbook writers, becoming more and more refined and smaller and smaller until that which is left looks to one who sees it for the first time like a small cold hard stone. It's only once you polish it (working it over in your mind, and solve problems) and shine it under a bright light (make sense for yourself of all those nested quantifiers) that you can finally see it for what it is- a diamond.
-
That is a pretty cool way of thinking about it. I never thought about fuzz before... – Frank Thorne Feb 16 2012 at 15:57
I second Frank Thorne's comment! This is a really cool point of view. :) It confuses me, though, when you say, "you can pretend [real numbers] are points on a line." If you have a straight line drawn on a plane, with two points marked "zero" and "one," you can do addition and multiplication with a compass and straightedge, so I find it hard to escape the conclusion that a line with two marked points is a field... and surely you can prove that this field is isomorphic to the field of Cauchy sequences of rational numbers? – Vectornaut Feb 16 2012 at 20:57
@Vectornaut: But there is no concept of motion on a set of points (implicitly given the discrete topology). This is Fletcher's Paradox, one of the Zeno paradoxes. en.wikipedia.org/wiki/Zeno's_paradoxes#The_arrow_paradox So, real numbers aren't points on a line at all- they're interlocking fuzz on the line, the "fuzz" being the topology, if one wants to think of it that way. And, on some level, I'll discuss all of this. I spend 3 lectures introducing this stuff, at a hand-wavy level, but one which gives some motivation. – Daniel Moskovich Feb 16 2012 at 23:01
I just love this. It's quite encouraging to me that your students do too! – Jon Bannon Feb 17 2012 at 23:36
When I'm talking to engineers, I like to motivate $\epsilon, \delta$ proofs this way...
If you feed steam into a turbine at a pressure of $p$ MPa, the turbine will generate electricity with a frequency of $f(p)$ Hz. The turbine is supposed to be putting out $120$ Hz; to get that, you have to put in steam at $24$ MPa.
Electric appliances are designed to work as long as the frequency going into them is within some tolerance $\epsilon$ of the expected frequency, $120$ Hz. If the frequency from the generator goes above $120 + \epsilon$ or below $120 - \epsilon$, users' applicances may stop working, possibly in spectacular and horrifying ways.
Your job, as an engineer, is to find a safety margin $\delta$ such that the output frequency $f(p)$ will be within $\epsilon$ of $120$ as long as the input pressure $p$ is within $\delta$ of $24$.
If the function $f$ is continuous at $24$, you can always do this, no matter how small the tolerance $\epsilon$ is. If $f$ isn't continuous at $24$, there are some tolerances which are just too small... no matter how tight you make the safety margin $\delta$, you'll never be able to guarantee that that the output frequency will be within tolerance of $120$.
-
Engineering students do $\epsilon$-$\delta$ proofs? – Samuel Reid Feb 16 2012 at 2:00
2
This example is great! – Frank Thorne Feb 16 2012 at 14:28
@Samuel Reid: A friend of mine who's doing a Ph.D. in biomedical engineering recently took a "mathematical methods" course that used $\epsilon, \delta$ proofs heavily. And I think a lot of intro calculus courses use $\epsilon, \delta$ arguments in lecture, even if the students never have to write one themselves. – Vectornaut Feb 16 2012 at 20:46
So I actually presented this in class. I think they were amused by my attempt to draw a turbine on the chalkboard. – Frank Thorne Feb 17 2012 at 23:35
Creating an $\epsilon$-$\delta$ game is really interesting. Thanks Charles Matthews! BTW, a similar strategy has been stated by Prof. Terry Tao in: Thinking and Explaining, http://mathoverflow.net/questions/38882 (version: 2011-10-12)
One other issue that undergrads usually find elusive about the $\epsilon$-$\delta$ method is why it is "for every $\epsilon>0$ there exists a corresponding $\delta>0$" and not the other way round. In this context, the following simple analogy may illustrate the point:
Assuming the discrete maths course is offered to CS students, I will consider a software development analogy. In software development, there are essentially two parties: the Developer (the $\delta$ producer) and the Client/User (the $\epsilon$ giver).
We can ask the students which of the below models is preferred:
Model 1: The client gives a specification and the developer abides by it. That is, the client demands certain features in her product and the developer makes the product accordingly. Analogously: fix $\epsilon>0$ in the range, adjust $\delta$ in the domain.
Model 2: The developer gives a certain product and the client has to accept it, however pathetic it may be. Analogously: fix $\delta>0$ and expect $\epsilon>0$ to be satisfied.
Model 1 is naturally preferred. And that is our $\epsilon-\delta$ method.
Of course, we can change the setting depending on the target students (engineers/physicists/biologists, etc.).
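For CS students one can even turn the analogy into running code: the client hands over a tolerance $\epsilon$, the developer must return a $\delta$, and the client spot-checks the contract. Everything below is a made-up illustration for $f(x)=3x+1$ at $x=2$ (limit 7):

```python
import random

def developer_delta(eps):
    """Developer's side: for f(x) = 3x + 1 at x = 2, delta = eps/3 does the job."""
    return eps / 3.0

def client_accepts(f, point, limit, eps, delta, trials=10_000):
    """Client's side: spot-check that |x - point| < delta implies |f(x) - limit| < eps."""
    for _ in range(trials):
        x = point + random.uniform(-delta, delta)
        if abs(x - point) >= delta:
            continue                      # the contract only covers the open interval
        if abs(f(x) - limit) >= eps:
            return False                  # the developer's delta failed the spec
    return True

f = lambda x: 3 * x + 1
for eps in (1.0, 0.01, 1e-6):
    d = developer_delta(eps)
    print(eps, d, client_accepts(f, 2.0, 7.0, eps, d))
```

Model 1 corresponds to calling `developer_delta` with the client's `eps`; Model 2 would be the developer fixing `d` first and hoping the client's `eps` happens to be satisfied.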
-
Basic $\epsilon$-$\delta$ thinking is easy to motivate by using the concepts of input control and output error tolerance. If $f(a)=c$, how accurately do you need to control the input (by specifying $\delta$ and requiring $|x-a|<\delta$) to guarantee you meet a given tolerance for output error ($|f(x)-c|<\epsilon$)?
Week 2 of freshman calculus is not too soon to insist US students learn to answer questions about this topic for simple examples. It's clearly a relevant skill for eventually addressing questions like "How accurately do you need to aim a spacecraft to safely enter orbit around Mars?" (Recall, failing to do that once cost NASA around a half-billion dollars.) Or, for designing safety margins in engineering as Vectornaut suggests.
The significance and usefulness of fundamental calculus concepts is often underappreciated, and not only by beginning students. A graduate engineering student once explained to me his recent realization that the "sensitivity coefficients" used everywhere in his engineering courses were nothing but derivatives!
-
I'm inclined to talk about images like shooting an arrow at a target and such, but I'm sure you've heard these. The trouble is that it is very difficult to communicate why we'd want to move from the intuitive notion of limit to the modern one. Most of the arguments we'd give simply aren't convincing for students.
This will not be a popular answer, but one way around the above difficulty is to take a formalist approach and give the students a more or less mechanical way to deal with the multiple quantifiers. While doing this, students get a sense of the proper use of quantifiers and get an idea of how to "do" the proofs. After they gain some ability, they seem more willing to see how it affords clarity. My students have responded well to the treatment of basic proof found in Chapter 3 of Daniel J. Velleman's book How to Prove It: a structured approach.
Remember, in the question, the OP asked for ways to teach students to construct basic epsilon-delta proofs.
Here's an example: Prove that $\lim_{x \rightarrow 0}2=2$. The students write two columns on their page, a "givens" column and a "goal" column. In the goal column write the definition of the limit in question symbolically. Then show them that when they see a universal quantifier in the goal column, they can move it to the givens column as "Let $\epsilon>0$ be arbitrary." (This temporarily gets around student misunderstanding of the use of the word 'arbitrary'.) Next, they must manufacture a delta. Here you show them where the "real mathematics" takes place: do what is needed to manufacture your delta. Start with what you are trying to estimate, and work backwards. In our example we see that any positive delta will do. The important thing is that after doing your "scratchwork", you go back and write "Let $\delta = 3$" (or whatever you picked) in the givens column (you may explain to them the reason this is logical, or talk about it later). The point is, they have seen you discover the delta and that the proof is written in "reverse order". One can then remove the existential quantifier from the goal column. The goal is now simply to run your scratchwork in reverse until you end up with something formally identical to what appears in the goal column.
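For concreteness, here is roughly what the finished write-up looks like once the scratchwork has been run in reverse (a sketch for the example above):

```latex
% Claim: \lim_{x \to 0} 2 = 2.
\begin{proof}
Let $\epsilon > 0$ be arbitrary.          % universal quantifier moved to the givens
Let $\delta = 3$.                         % the value found during the scratchwork
Suppose $0 < |x - 0| < \delta$.           % hypothesis of the implication in the goal
Then $|2 - 2| = 0 < \epsilon$.            % the estimate, written in reverse order
Since $\epsilon > 0$ was arbitrary, $\lim_{x \to 0} 2 = 2$.
\end{proof}
```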
The idea is, if students can actually construct a few proofs like this, maybe you will have a chance to discuss what happened. If it seems too miraculous or divorced from common sense, they simply won't listen.
After some experience has been gained (I do this looking at sequence limits instead of epsilon-delta) it is nice to give some informal yet precise descriptions of what a limit is. For example, a sequence $a_{n}$ of real numbers converges to a real number $L$ when for any $\epsilon>0$ all but finitely many terms of the sequence lie within $\epsilon$ of $L$. After they know how to construct proofs, trying to get students to conceptualize seems to work better.
Best of luck with this, Frank!
-
Back before he was an avatar of revolution/host for the Phoenix Force, and was merely "superb mathematician and teacher", Tim Gowers wrote some thoughts on understanding/explaining continuity in the epsilon-delta sense, as well as "how to solve basic analysis exercises without thinking" - this is at a higher level than the calculus being discussed, but I mention it since it has the same spirit of taking a formalist approach and trying to show students that these things can be done mechanically. dpmms.cam.ac.uk/~wtg10/mathsindex.html – Yemon Choi Feb 15 2012 at 21:40
This is great, Yemon! I feel a bit less ashamed to suggest such an approach, now. – Jon Bannon Feb 15 2012 at 21:57
@Yemon: What an excellent webpage! – Frank Thorne Feb 16 2012 at 14:20
@Jon: An interesting suggestion, thank you! My goal is to get the students comfortable with the formalism, and indeed I think one difficulty of epsilon-delta is that there is often more machinery in these proofs than "real math". – Frank Thorne Feb 16 2012 at 14:25
Hi!
One of my professors back in Iran used to say that everybody, deep down, has an intuitive understanding of the "epsilon-delta" definition of limit/continuity, based on the following everyday-life example: if you want to increase the amount of water coming out of a faucet by epsilon, you know that there is a delta such that if you turn the faucet by delta you get an epsilon change in the output. This, I guess, everybody has experienced, especially when adjusting the water temperature in the shower, where the usual relationship between the amount of output water and how much you need to turn is not linear/uniform but nevertheless always continuous.
-
Following on from this answer, contrast the volume knob and tuning knob on a radio. We expect the volume to change continuously, a small turn gives a small variation in volume, whereas a small variation of the tuning knob gives a large change in output, as we want. Of course the output is a signal, so how do you measure change in a signal? This leads to the ideas of the difference between a sup norm and an integral norm: for example, speedometers tend to use an integral norm, so that the speedo does not react to every little change. – Ronnie Brown Feb 18 2012 at 11:01
I like to relate the intuition of students that "when $x$ approaches $a$, then $f(x)$ approaches $b$" to the formal definition of limit by drawing the graph of a continuous function $f\colon \mathbb{R} \to \mathbb{R}$ and $\delta$ and $\epsilon$ intervals around $a$ and $b$ on the axes.
Then one can try to discuss and argue the order of quantifying $\epsilon - \delta$: the function comes $\epsilon$-close to $b$ if we take arguments $\delta$-close to $a$. One can also ask students about their intuition of a limit and if these are not precise enough, construct counterexamples, such as when $f$ jumps at $a$ even though it is monotonic, i.e. it "always approaches $a$" but not arbitrarily close.
I also like the explicit "game" and "worst enemy" approach already given. This especially helps in teaching students that the problem lies in small $\epsilon$'s, not large values.
-
I like the idea of using the term "neighbourhood" and the notation $f(M)$ for a function $f$ and set $M$. So we have the definition: for all neighbourhoods $N$ of $f(x)$ there is a neighbourhood $M$ of $x$ such that $f(M) \subseteq N$. Then you can draw pictures of the actual sets that are being mapped.
Part of the psychological problem with an $\varepsilon$-$\delta$ proof is that these are measures of the size of the neighbourhood rather than the actual neighbourhood. For particular proofs you need the numbers.
This also relates to the idea that the neighbourhood definition of a topology is the most intuitive, even if you need of course to bring in the equivalent definitions in terms of open or closed sets.
Motivated by the idea of "reverse chaining" in the psychology of learning, we used the idea of "fill-in proofs". Take a proof that the product of limits is a limit, rub out bits, and ask the students to fill in the missing bits using clues that you have left from the rest of the proof. So the structure of the proof is given. This is analogous to the way a professional works, get the structure first, then fill in the details.
Also these exercises are very easy to mark!
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 84, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550631642341614, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4206214
|
Physics Forums
## Rest frame of the photon
I know that the rest frame of the photon is a non inertial reference frame. In that sense obviously the physics will not be Lorentz co-variant and so on. I have the following question
Is it possible to define a coordinate transformation from an inertial reference frame to the rest frame of a particle traveling at the speed of light?
Obviously the coordinate transformation will not be a Lorentz transformation, but maybe another kind of coordinate transformation can do the trick.
My idea is that we can use any reference frame to describe the physics. The laws of nature will be Lorentz covariant in any inertial reference frame, however if I do a coordinate transformation from an inertial reference frame to a non inertial reference frame then I can see how the laws will be in this coordinate system.
Do you believe that this coordinate transformation can be found?
For example, I can imagine the rest frame of the photon as a non inertial reference frame in which all the massive free particles will travel at the speed of light. My problem is to define a coordinate transformation from this reference frame to an inertial reference frame.
Please see the FAQ http://www.physicsforums.com/showthread.php?t=511170 In GR it is possible to write a metric which describes a spacetime containing massless energy which travels at the speed of light. It is called the 'null dust' solution. I think there is no static frame field in the spacetime.
Quote by Mentz114 It is called the 'null dust' solution. I think there is no static frame field in the spacetime.
I believe this is correct, except for the trivial case where the density of the null dust is zero (which is just the Schwarzschild vacuum). See here:
http://en.wikipedia.org/wiki/Vaidya_metric
Note that the line elements (6) and (7) are formally identical to the Eddington-Finkelstein forms of the Schwarzschild line element; the only difference is that the metric parameter M is a function of the null coordinate u (for the ingoing form) or v (for the outgoing form), instead of being constant.
There isn't any rest "frame" for the photon, but as mentioned, you can find coordinates for space-time such that a constant coordinate is the worldline of a photon. They're called null coordinates.
For the 2d case, flat Minkowski spacetime has the metric
$ds^2 = dx^2 - dt^2$. Let $u = x+t$ and $v = x-t$. Then in terms of $u$ and $v$
the metric is just
$ds^2 = du\,dv = (dx + dt)(dx - dt) = dx^2 - dt^2$
and either $u$ or $v$ is constant depending on the direction of travel of a photon, which is the desired property of representing a photon's world line via a constant coordinate.
The photon spin component along any chosen z direction can have the value either +1 or -1. It does not have a zero value. This statement follows from the fact that it is not possible to choose a reference frame in which the photon's (real) polarization states can be increased to 3 (a longitudinal component), while they are two for a real photon. This follows rather directly from the fact that a photon cannot move slowly in any reference frame. The two polarization states can be easily gotten in Coulomb gauge, in which the longitudinal component is already integrated out. Also, the $A_0$ associated with it does not contain any dynamical degree of freedom, so it too can be eliminated.
Thanks. I think I can play with light cone coordinates, but I found that they lack some properties that I would like the rest frame of a particle traveling at the speed of light to have. In particular, only the particle at rest in the original reference frame will "travel" at the speed of light in the light cone coordinates. I believe that 'from the photon's point of view' all the inertial observers should travel at the speed of light. Anyway, it is a non inertial frame and I can use it as inspiration to play with. Probably it really does not make sense to talk of the rest frame of the photon (inertial or non inertial), but I think it is a fun idea. Thanks everyone for their contributions.

andrien: I agree that the polarization of the photon only has two degrees of freedom in any inertial reference frame. In a non inertial reference frame non-physical degrees of freedom can appear or physical degrees of freedom can "disappear". For that reason I believe that the argument that you gave is not enough to justify the non-existence of a rest frame of the photon. An example would be a particle that is constrained to move in a two dimensional plane in an inertial reference frame, but in a non inertial reference frame which is oscillating in the direction perpendicular to the plane we can have the impression that the particle is moving in three dimensions and not in two. Obviously this extra degree of freedom is not a physical degree of freedom. Maybe I am wrong and your argument is enough to justify that the photon cannot be at rest in any possible reference frame, but then it is not enough to justify that a spinless particle traveling at the speed of light cannot have a non inertial rest frame. However, your argument is important and is a question that I should check if I decide to keep playing with this idea. Thank you.
Here are some thoughts on trying to define a reference frame for a photon: http://physicsforums.com/showthread....778#post899778
Quote by chwie: I agree that the polarization of the photon only has two degrees of freedom in any inertial reference frame. In a non inertial reference frame non-physical degrees of freedom can appear or physical degrees of freedom can "disappear". For that reason I believe that the argument that you gave is not enough to justify the non-existence of a rest frame of the photon. An example would be a particle that is constrained to move in a two dimensional plane in an inertial reference frame, but in a non inertial reference frame which is oscillating in the direction perpendicular to the plane we can have the impression that the particle is moving in three dimensions and not in two. Obviously this extra degree of freedom is not a physical degree of freedom.
The photon's mass is zero. If its mass were not zero, then it would be possible to go to another reference frame in which it has a longitudinal component of polarization, which would contradict the physical degrees of freedom, which are two for the photon. Your argument regarding non inertial frames is not valid.
Edit: here is a reference related to it -
http://www.damtp.cam.ac.uk/user/tong/qft/six.pdf
Which part of the lecture notes of David Tong do you want me to read? I suppose that you are using some argument based on the gauge symmetry of the photon. Yes, I agree that U(1) gauge symmetry and Lorentz covariance imply that the photon does not have a mass term. You are probably thinking of the equation $\epsilon(p)\cdot p=0$. This equation holds in an inertial reference frame. There is no reason why I cannot define a non inertial reference frame in which it does not hold. In that case I will have something that will appear as a longitudinal polarization. In general, the transformations that leave this product invariant are Lorentz transformations, so it will hold in any inertial reference frame. This proves that there does not exist an inertial reference frame in which the photon has a longitudinal component. I completely agree. My question is about non inertial reference frames in which that equation does not hold. Probably you are right and it is impossible, but in principle I do not see why it should be true.

Thanks robphy. I agree with your original post. That is the reason I am looking for a non inertial reference frame. My idea is to find a patch of coordinates in which any inertial observer will appear to travel at the speed of light. That is my definition of the photon rest frame, which needs to be non inertial. Obviously an inertial reference frame is impossible, and some of the arguments that you gave justify that.
Why do you think that a non inertial frame description would be adequate? Do you think that in a non inertial frame there will be some extra degree of freedom?
My idea is that if I can find a non inertial reference frame in which stuff can move faster than the speed of light, why can't I find a non inertial reference frame in which particles that move at the speed of light are not moving? An example of a reference frame in which things can move faster than the speed of light is the earth. During the night I will see that the stars travel millions of light years in hours. Obviously the reason is that the earth is a non inertial reference frame; the stars are not moving faster than the speed of light in an inertial reference frame.

The postulates that Einstein used in special relativity were that the physics will take the same form (covariant) in any inertial reference frame and that light will travel at the speed of light in any inertial reference frame. If these two postulates define what an inertial reference frame is, then a non inertial reference frame can be defined as a reference frame in which the physics does not take the same form and/or the light does not propagate at the speed of light in vacuum. I want to find a non inertial reference frame of the second kind, in which the speed of the light is zero. (I can define a frame of this kind using the light cone coordinates, but it is not what I am looking for.)

For me the spin and the mass of a particle label the representations of the Poincare group. In an inertial reference frame I can give a satisfactory answer of what mass and spin are. Really they label how the field will transform under Poincare transformations, but they do not say anything about how the system will transform under other coordinate transformations. Finally, I believe that the degrees of freedom of a particle are defined in an inertial reference frame. In that sense the physical degrees of freedom will never change, but in a non inertial reference frame the interpretation will change, and the constraints also, because the physics will change. If you do not agree with me, that is ok. Probably I will find that I am delusional.
Quote by chwie Thanks I think I can play with light cones coordinates, but I found that it lacks some properties that I would like of the rest frame of a particle traveling at the speed of light. Particularly only the the particle at rest in the original reference frame will "travel" at the speed of light in the light cone coordinates. I believe that 'from the photon point of view' all the inertial observers should travel at the speed of light.
I'm not sure what you mean by this. Anyone with u = constant will be travelling at the speed of light, not just u = 0, in the metric I gave.
And it's still not right to talk about "inertial observers travelling at light speed" no matter what coordinates you use...
pervect: I used the definition of light cone coordinates (null coordinates) and I found that the speed in these coordinates is (1-β)/(1+β), where β is the speed in the inertial reference frame. In the case that β=0 we have a speed of unity. In the case of a photon traveling to the right, β=1, the speed is zero, and in the case of 0<β<1 the speed will be some value between zero and unity. Maybe I am doing it wrong.

We can talk about an inertial observer moving at the speed of light relative to a non inertial reference frame. The reason is that there is no reason why we cannot do this; it is just a coordinate transformation. For example, take a non inertial reference frame which is rotating with frequency ω relative to an inertial reference frame. In this non inertial reference frame the inertial observers can have any speed. The speed will be determined by the product ωR. Obviously we can choose ωR≥c and it will not represent any problem, because this reference frame is non inertial.
Quote by chwie: pervect, I used the definition of light cone coordinates (null coordinates) and I found that the speed in these coordinates is (1-β)/(1+β), where β is the speed in the inertial reference frame. In the case that β=0 we have a speed of unity. In the case of a photon traveling to the right, β=1, the speed is zero, and in the case of 0<β<1 the speed will be some value between zero and unity. Maybe I am doing it wrong.
If we take x= βt, and u=x+t and v=x-t, then we have
(u+v) = β(u-v) or (1- β)u = -(1+ β) v
I would guess this is similar (except for sign) to what you are doing, because the result looks similar. However, I wouldn't call either du/dv or dv/du a "speed". For one thing, neither u nor v represents distance, or time. "Speed" is a rather ambiguous term in any event. It also has physical implications that don't apply here. So I'd just give the derivative (whichever one you are using) a symbol, and not call it a "speed".
It remains true that u = constant represents light moving in one direction, and v = constant represents light moving in the opposite direction in these coordinates. The value of the constant doesn't matter.
pervect: The difference of sign is because I used v = t - x. Thank you for the comments.
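Here is a quick symbolic check of the two sign conventions discussed above, for a worldline x = βt through the origin (a sketch; it assumes sympy is available):

```python
# Verify (1 - beta) u = -(1 + beta) v for u = x + t, v = x - t, and that with
# the convention v' = t - x the ratio dv'/du comes out to (1 - beta)/(1 + beta).
import sympy as sp

beta, t = sp.symbols('beta t', positive=True)
x = beta * t                    # inertial worldline x = beta * t

u = x + t                       # pervect's null coordinates
v = x - t
print(sp.simplify((1 - beta) * u + (1 + beta) * v))        # -> 0

vprime = t - x                  # chwie's sign convention
print(sp.simplify(sp.diff(vprime, t) / sp.diff(u, t)))     # -> (1 - beta)/(1 + beta)
```

For β = 1 (a right-moving light ray) the second expression is zero, which is the "speed zero in null coordinates" observation above.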
Regarding the relation $\epsilon(p)\cdot p=0$: you are saying that it does not hold. The problem is the Coulomb gauge, which is non-covariant, so it might have confused you. You can take the Lorentz gauge instead, which in momentum space reads $k_\mu A^\mu=0$; now this relation is covariant. You can write it in expanded form as $k_0A_0-\vec{k}\cdot\vec{A}=0$. Now, as you can see in the first three pages of the reference, $A_0$ is not a degree of freedom: it can be expressed in terms of $\nabla\cdot\frac{\partial \vec{A}}{\partial t}$. It is not dynamical, so it can be discarded; it is possible to eliminate it in other frames also. After that you are again left with $\vec{k}\cdot\vec{A}=0$, so it again implies the transversality condition. However, it is tough to see the two degrees of freedom in the Lorentz gauge. There are other methods also, like the Gupta-Bleuler method, which introduces an indefinite metric (negative probabilities) to resolve it. But the point is that even though the Coulomb gauge is non-covariant, its results are covariant in nature. By the way, don't confuse virtual photons with real photons: virtual photons can have all 4 polarization states. I hope this will clear up some confusion.
andrien: I perfectly agree with everything that you say. I believe that we are thinking of two different problems.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215298295021057, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/19143?sort=newest
|
“adjoint” =?= “inverse of composite endofunctor is uniform bi-composition”
Understanding adjoints has always been (and continues to be) a bit of a struggle for me.
Today I stumbled upon a property of adjoint functors which seemed extremely intuitive to me. I was wondering why this property isn't mentioned more often in introductory category theory literature, and whether or not it completely characterizes adjunctions.
If two functors $F:C\to D$ and $U:D\to C$ are adjoint $F\dashv U$, then for every $f:F(Y)\to X$ in $D$ there exists an $\hat f:Y\to U(X)$ in $C$ such that
$$U(f)\circ \eta_Y = \hat f$$ $$\epsilon_X\circ F(\hat f)=f$$
If we substitute the top equation into the bottom, we get
$$\epsilon_X\circ F(U(f)\circ \eta_Y)=f$$
and by functoriality we get
$$\epsilon_X\circ F(U(f))\circ F(\eta_Y)=f$$ $$\epsilon_X\circ (F\circ U)(f)\circ F(\eta_Y)=f$$
What the last equation says is that we can recover any morphism $f$ from the action of the "round trip endofunctor" $F\circ U$ on it by pre-composing with $\epsilon_X$ and post-composing with $F(\eta_Y)$. These two morphisms are determined only by the domain and codomain of $f$ -- we only needed to know $X$ and $F(Y)$ in order to pick the two morphisms. We would have picked the same two morphisms for some $g\neq f$ as long as $g:F(Y)\to X$.
So, I believe it is correct to say that "if the domain of a morphism is within the range of a functor which has a right adjoint, then it can be recovered from the action of the composite endofunctor on it by pre-composition with some morphism and post-composition with some other morphism, where the choice of these two morphisms is completely determined by the domain and codomain of the original morphism". There is, of course, an equivalent statement for morphisms with a codomain in the range of a functor with a left adjoint.
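As a concrete illustration of the recovery equation (my own toy example, using the standard free-forgetful adjunction between sets and monoids, where $F$ builds the free monoid of words and $U$ forgets the monoid structure):

```python
# Check  epsilon_X o (F o U)(f) o F(eta_Y) = f  for the free-monoid adjunction,
# with Y = {'a','b'}, F(Y) = words over Y, and X = (int, +, 0).

def eta(y):                       # eta_Y : Y -> U(F(Y)),  y |-> the one-letter word [y]
    return [y]

def F(g):                         # F on a morphism g: apply g letter by letter to a word
    return lambda word: [g(letter) for letter in word]

def epsilon_int(word_of_ints):    # epsilon_X for X = (int, +, 0): "multiply out" the word
    return sum(word_of_ints)

# A monoid homomorphism f : F({'a','b'}) -> (int, +, 0), fixed by its values on generators.
gen_value = {'a': 1, 'b': 5}
def f(word):
    return sum(gen_value[c] for c in word)

# U(f) is just f viewed as a set map, so (F o U)(f) is F(f).
for word in [[], ['a'], ['a', 'b', 'b', 'a']]:
    recovered = epsilon_int(F(f)(F(eta)(word)))
    print(word, recovered == f(word))    # True each time
```

The two pre/post-composed maps ($F(\eta_Y)$ and $\epsilon_X$) indeed depend only on $Y$ and $X$, not on $f$.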
So, my three questions are: (1) is this correct, (2) if so, why isn't it used to explain adjunctions to beginners (I certainly would have caught on quicker!) and (3) does the condition completely characterize adjoint functors?
Thanks,
-
I don't understand why you say that "if we want to know the action of FU on f, we just pre- and post- compose with $\epsilon$ and $F(\eta)$". Doesn't your formula (which is true) really say that "f can be recovered from FU(f), by pre- and post-composing ..."? – Charles Rezk Mar 23 2010 at 20:23
Wow, thank you, yes that was a glaring error. I have modified the question and its title to correct this. Thank you! – Adam Mar 23 2010 at 20:41
The condition you wrote + its dual + the assertion that $\eta:I_C\to UF$ and $\varepsilon:FU\to I_D$ are natural totally characterize the adjunction, because an adjunction is totally characterized by the triangular identities $U\varepsilon\circ\eta U=I_U$, $\varepsilon F\circ F\eta=I_F$, see part (v) of Theorem IV.2 on page 83 of Mac Lane. To get the triangular identities from yours, just substitute $f=1_{FY}$ (so that $X=FY$), and similarly in the dual. BTW, the derivation of the triangular identities is very similar to yours, see p. 82 in Mac Lane. – unknown (google) Mar 23 2010 at 22:55
This might sound dumb, but I never understood the $F\eta$ notation. If $F$ is a functor and $\eta$ is a morphism of functors, what does $F\eta$ mean? It can't be composition because they aren't both morphisms in the same category... and viewing $\eta$ as an object-indexed family of morphisms doesn't seem to work out either because $F$ isn't an object. Does this mean reindexing? – Adam Mar 24 2010 at 0:21
$F\eta$ is the natural transformation whose component on an object $x$ is the map $F(\eta_x)$. – Reid Barton Mar 24 2010 at 2:12
1 Answer
(1) Yes. (2) Well, it doesn't give me any additional intuition. You didn't say why it helps you understand, so I can't judge what the advantage of it might be. I think this is really just a complicated way of giving the "bijection of hom-sets" condition.
(3) No, you need something more. For instance, let $r:B\to A$ be a surjection with section $s$, let $C$ have two objects $x$ and $y$ with $C(x,y)=B$, $C(y,x)=\emptyset$, and $C(x,x)=C(y,y)=1$ (only identities), let $D$ be similar using $A$ instead, and let $F:C\to D$ and $U:D\to C$ be the identity on objects and with action on arrows given by $r$ and $s$ respectively. Pick $\varepsilon$ and $\eta$ to be identities. Then every morphism in $D$ can be recovered, as you describe, but the components of $\eta$ are not natural, and the dual condition fails.
The "unknown (google)" comment above explained why if you additionally require the dual condition, plus naturality of $\eta$ and $\varepsilon$, then you do get an adjunction. (Although it's not clear to me from the condition you stated whether you wanted to require the morphism playing the role of $F(\eta)$ to actually be $F$ of something, which is also necessary for this argument to work.)
-
I think this is really just a complicated way of giving the "bijection of hom-sets" condition. -- Except that it never mentions sets. That's the advantage. The whole notion of hom-**sets** (rather than hom-objects) still seems weird and unusual to me, and I'm always afraid that some of my deep-seated intuitions about sets are going to limit my thinking. So I try to block the concept of hom-set out of my mind. The definition above is something that would make sense to a beginner long before they're ready to learn about enrichment. – Adam Oct 11 2010 at 9:16
But it still mentions elements of hom-sets, i.e. single morphisms, and therefore does not carry over to the enriched world. – Mike Shulman Oct 11 2010 at 17:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558953046798706, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/36747/dynamic-programming-problem-and-shortest-path-problem
|
# Dynamic programming problem and shortest path problem
I was wondering
1. if any dynamic programming problem can always be converted to a source-sink shortest path problem in a network with source and sink nodes given? And vice versa?
2. Is any dynamic programming problem essentially a linear integer programming problem?
3. if in a network, the shortest path length between every vertices defines a metric on the set of vertices of the network?
Thanks and regards!
-
## 1 Answer
For the first part of (1): The answer is no. The simplest example off the top of my head is finding the longest substring of ones in a 0,1 string. The typical DP solution would be to use a 1D array and store, in the $i$-th coordinate, the length of the longest such substring ending at the $i$-th character (a short sketch of this DP appears at the end of this answer).
For the vice-versa part of (1): Yes.
The not-so-direct answer for (1) and (2) is that dynamic programming should be seen as a method and not a set of problems. It just describes a general paradigm for solving problems by memoizing solutions to subproblems and building on them. In that sense, it really is some kind of extension of greedy algorithms, only that it can build on multiple subproblems instead of just one in the case of greedy algorithms.
For part (3): This holds for finite undirected graphs with positive weights. It also holds for countably infinite graphs, but does not hold in general for graphs of larger cardinality.
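Returning to the example in part (1), here is a minimal sketch of that 1D-array DP (illustrative Python):

```python
# run[i] = length of the longest block of consecutive ones ending at position i.
def longest_run_of_ones(s: str) -> int:
    run = [0] * len(s)
    for i, ch in enumerate(s):
        if ch == '1':
            run[i] = run[i - 1] + 1 if i > 0 else 1
        # else run[i] stays 0: no block of ones ends at a '0'
    return max(run, default=0)

print(longest_run_of_ones("0110111010111101"))   # -> 4
```

The answer is read off by scanning the array for its largest entry, which is the step that does not look like following shortest-path arcs from a single source to a single sink.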
-
Thanks! (1) why is the example not able to be formulated into a source-sink shortest path problem in a network? (2) Do you mean a DP problem is able to be formulated as an integer programming problem, but not always a linear one? (3) why the shortest path length may not be a metric for infinite graphs and directed graphs? – Tim May 3 '11 at 18:41
For (1) I don't really see an easy way of doing it as a source-sink SPP. For the DP solution, the answer is obtained by scanning the array for the largest number. – Cong Han May 3 '11 at 18:44
For (2) No. DP is a method for designing algorithms. Not a class of problems. – Cong Han Aug 2 '11 at 5:23
For (3). directed doesn't work. Metric needs symmetry. – Cong Han Aug 2 '11 at 5:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934573769569397, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?p=4076517
|
Physics Forums
example of onto function R->R^2
What is an example of an "onto" function f: ℝ→ℝ²?
Any combination of vectors along the x-axis will not be able to leave the x-axis to cover the entire xy plane, so I was thinking of something like $f(x) = \binom{t}{x}$ for any $t\in\mathbb{R}$, but I was wondering if there are more elegant examples...
See what I can do: Let: ##t\in\mathbb{R}## ... then I can define two functions ##x,y \in \mathbb{R}\rightarrow\mathbb{R}\; : \; x=f(t), y=g(t)## ... then I can make ##z:\mathbb{R}\rightarrow\mathbb{R}^2\; : \; \vec{z}=(x,y)## ... we would say that f(t) and g(t) is a parameterization of z(x,y).
@Aziza, what do you mean by "any $t \in \mathbb{R}$". If your $f$ is a function, then for a given input $x$, you need to specify a unique output $f(x)$. E.g. $f(1)$ can only be one point in the plane, you can't have $f(1) = (1, 0)$, and also $f(1) = (1,1)$, and also $f(1) = (1, -\pi)$, and so on. Since I can't tell whether this is a homework question or not, I'll give you something between a hint and a full answer: for a given $x \in \mathbb{R}$, split the decimal expansion of $x$ into two parts and create new real numbers out of each of those two parts (we can call these two numbers $a(x)$ and $b(x)$). Then if you've done things right, the function $f:\mathbb{R} \to \mathbb{R}^2$ defined by $f(x) = (a(x), b(x))$ will be surjective. @Simon, Aziza has asked for a surjective function $\mathbb{R} \to \mathbb{R}^2$. You've given a function $\mathbb{R} \to \mathbb{R}^2$, but nothing about it makes it onto. Note that onto is a technical term, another word for it is surjective. A function $f: A \to B$ is surjective iff $\forall b \in B, \exists a \in A : f(a) = b$.
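For anyone who wants to see the digit-splitting hint in action, here is a rough sketch (my own illustration; it only handles the fractional digits of a number in $[0,1)$, uses finite floats, and ignores the usual caveat about expansions ending in repeating 9s, so it is a picture of the idea rather than a proof of surjectivity):

```python
# De-interleave the fractional digits of x into two numbers a(x) and b(x):
# digits in odd positions go to a, digits in even positions go to b.
def split_digits(frac_digits: str) -> tuple[float, float]:
    a_digits = frac_digits[0::2]          # 1st, 3rd, 5th, ... digit of x
    b_digits = frac_digits[1::2]          # 2nd, 4th, 6th, ... digit of x
    a = float("0." + a_digits) if a_digits else 0.0
    b = float("0." + b_digits) if b_digits else 0.0
    return a, b

print(split_digits("141592653589793"))    # x = 0.141592653589793 -> (a, b)
```

Running the interleaving in the other direction shows how to hit any target pair $(a, b)$, which is the point of the hint.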
@AKG: understood, thanks.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8515518307685852, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/276718/find-the-area-of-the-pentagon-formed-in-the-plane-with-the-fifth-roots-of-unity?answertab=votes
|
# Find the area of the pentagon formed in the plane with the fifth roots of unity as its vertices
Find the area of the pentagon formed in the plane with the fifth roots of unity as its vertices.
is there any formula to solve this type of problem?
-
Do you have to use complex numbers? Can you use trigonometry? – Calvin Lin Jan 13 at 4:15
## 1 Answer
Consider the lines joining the vertices of the pentagon to the center of the circle. Each of the resulting triangles is isosceles, with side length 1 and vertex angle $\frac {2 \pi}{5}$.
Hence, the area of the pentagon is $\frac {5}{2} \sin \frac {2\pi}{5}$, which we can evaluate to be $\frac {5}{4} \sqrt{ \frac {5+ \sqrt{5}}{2} }$.
In general, the area of the n-gon is $\frac {n}{2} \sin \frac {2\pi}{n}$.
If you have to use complex numbers to approach this question, then since the cross product involves $\sin \theta$, the area of one of these triangles will be $\frac {1}{2} \left \| 1 \times \omega \right \| = \frac {1}{2} \sin \frac {2 \pi}{5}$.
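A quick numerical cross-check of these formulas, working directly with the fifth roots of unity and the shoelace formula (illustrative only):

```python
import cmath, math

n = 5
verts = [cmath.exp(2j * math.pi * k / n) for k in range(n)]   # fifth roots of unity

# Shoelace formula over the vertices taken in order around the unit circle.
area = 0.5 * abs(sum(verts[k].real * verts[(k + 1) % n].imag
                     - verts[(k + 1) % n].real * verts[k].imag
                     for k in range(n)))

print(area)                                          # ~2.377641
print(n / 2 * math.sin(2 * math.pi / n))             # same value
print(5 / 4 * math.sqrt((5 + math.sqrt(5)) / 2))     # same value
```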
-
Did you lose a $1/2$ in your general formula for an $n$-gon's area? – B.D Jan 13 at 4:32
@B.D I did. Thanks for finding it! – Calvin Lin Jan 13 at 4:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325558543205261, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/294253/pmf-of-discrete-random-variables
|
# PMF of Discrete Random Variables
Prove that $$\frac{1} {(1-p)^2} = \sum_{x=2}^\infty x^2p^{x-2} \, .$$
The professor suggested that we start by taking the derivative of $$\frac{1} {(1-p)^2} = \sum_{x=1}^\infty xp^{x-1} \, .$$
So I have tried that, and then broken the derivative up into $$\sum_{x=2}^\infty x^2p^{x-2} \, - \sum_{x=2}^\infty xp^{x-2} \, .$$ by expanding $x(x-1)$,
but I can't get any further and I am extremely confused.
-
You should consider adding some thoughts about your current approach to the problem. Without knowing how you're trying to do this, we don't have any idea what background you have or how best to help you. – Muphrid Feb 4 at 5:53
## 2 Answers
From $$\frac{1}{(1-p)^2} = \sum_{x \geq 1} x \cdot p^{x-1} \tag{1}$$ you obtain by differentiating (with respect to $p$):
$$\begin{align} \frac{2}{(1-p)^3} &= \sum_{x \geq 2} x \cdot (x-1) \cdot p^{x-2} = \sum_{x \geq 2} x^2 \cdot p^{x-2} - \underbrace{\sum_{x \geq 2} x \cdot p^{x-2}}_{\frac{1}{p} \cdot \sum_{x \geq 2} x \cdot p^{x-1}} \\ &= \sum_{x \geq 2} x^2 \cdot p^{x-2} - \frac{1}{p} \cdot \left( \sum_{x \geq 1} x \cdot p^{x-1} -1 \right) \\ &\stackrel{(1)}{=} \sum_{x \geq 2} x^2 \cdot p^{x-2} - \frac{1}{p} \cdot \left( \frac{1}{(1-p)^2} -1 \right) \end{align}$$
Hence
$$\sum_{x \geq 2} x^2 \cdot p^{x-2} = \frac{p^2-3p+4}{(1-p)^3}$$
(There has to be a mistake or typo in your claim, because if the equality $$\sum_{x \geq 2} x^2 \cdot p^{x-2} =\frac{1}{(1-p)^2}$$ holds, this would imply $$0 = \frac{1}{(1-p)^2} - \frac{1}{(1-p)^2} = \sum_{x \geq 2} x^2 \cdot p^{x-2} - \sum_{x \geq 1} x \cdot p^{x-1} = \sum_{x \geq 0} \underbrace{((x+2)^2-(x+1))}_{\geq 0} \cdot \underbrace{p^x}_{\geq 0}$$ by using (1). And this can only hold iff $$(x+2)^2-(x+1) = 0$$ for all $x \in \mathbb{N}$. And this is clearly not fulfilled.)
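A quick numerical check of the corrected identity $\sum_{x \geq 2} x^2 p^{x-2} = \frac{p^2-3p+4}{(1-p)^3}$, truncating the series for a few values of $p$ strictly between 0 and 1:

```python
# Compare a long partial sum of the series with the closed form derived above.
for p in (0.1, 0.5, 0.9):
    partial = sum(x**2 * p**(x - 2) for x in range(2, 2000))
    closed = (p**2 - 3*p + 4) / (1 - p)**3
    print(p, partial, closed)
```

For $p = 0.5$, for instance, both come out to 22, while $\frac{1}{(1-p)^2}$ would give 4, consistent with the remark that the claim as stated cannot hold.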
-
For $\,|p|<1\,$ it is well known that
$$f(p):=\frac{1}{1-p}=\sum_{x=0}^\infty p^x\Longrightarrow f'(p)=\frac{1}{(1-p)^2}=\sum_{x=1}^\infty xp^{x-1}$$
So in fact your first equality's right-hand side must begin from $\,x=1\,$ and with $\,x\,$, not $x$ squared... check this.
-
That's exactly what Kathleen wrote in the second equation. (But as you can see in my answer, I would also say that there is some mistake in the claim.) – saz Feb 7 at 13:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9721575975418091, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/47421/qed-commutation-relations-implications?answertab=votes
|
# QED Commutation Relations Implications
In Brian Hatfield's book on QFT and Strings there is the following quote:
In particular $$[A_i (x,t), E_j(y,t)] = -i \delta_{ij}\delta(x-y)$$ implies that $$[A_i(x,t),\nabla \cdot E(y,t)] = -i\partial _i \delta(x-y).$$
I'm not sure how to get between those lines. If I take the partial of the fist line I get $$[\partial_j A_i(x,t),E_j(y,t)] +[A_i(x,t),\partial_jE_j(y,t)] = -i\partial_i \delta(x-y)$$ So perhaps my question turns into: "Why is $[\partial_j A_i(x,t),E_j(y,t)] = 0$ ?" Thanks.
-
Maybe the partial is with respect to the y coordinate? – twistor59 Dec 22 '12 at 23:07
Lol, of course. Thanks. – kηives Dec 23 '12 at 2:20
## 1 Answer
There's nothing strange going on. $\partial_i$ is shorthand for $$\frac{\partial}{\partial X^i},$$ where some coordinate set $X^i$ is implied. Since $E_j = E_j(y,t)$ you 'obviously' need to differentiate with respect to $y$ (as twistor59 notes), i.e. $$\nabla \cdot E = \frac{\partial}{\partial y^i} E^i(y,t).$$ The derivative doesn't act on $A_i(x,t)$, so you're done.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9497462511062622, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/1019/when-is-the-libor-market-model-markovian/1021
|
# When is the LIBOR market model Markovian?
The question is inspired by a short passage on the LMM in Mark Joshi's book.
The LMM cannot be truly Markovian in the underlying Brownian motions due to the presence of state-dependent drifts. Nevertheless, the drifts can be approximated 'in a Markovian way' by using predictor-corrector schemes to make the rates functions of the underlying increments across a single step.
Ignoring the drifts, the LMM would be Markovian in the underlying Brownian motions if the volatility function is separable. The volatility function $\sigma_i(t)$ is called separable if it can be factored as follows $$\sigma_i(t)=v_i\times\nu(t),$$ where $v_i$ is a LIBOR specific scalar and $\nu(t)$ is a deterministic function which is the same across all LIBOR rates.
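One way to picture what separability says: on any discrete grid of times, the table of volatilities $\sigma_i(t_j)=v_i\,\nu(t_j)$ is an outer product and therefore has rank one. The sketch below (with made-up $v_i$, grid, and $\nu$) just illustrates that structure:

```python
import numpy as np

times = np.linspace(0.25, 5.0, 20)            # t_j grid in years (illustrative)
v = np.array([0.20, 0.18, 0.17, 0.15])        # LIBOR-specific scalars v_i (made up)
nu = np.exp(-0.1 * times)                     # common deterministic shape nu(t) (made up)

sigma = np.outer(v, nu)                       # sigma[i, j] = v_i * nu(t_j)
print(np.linalg.matrix_rank(sigma))           # -> 1 for a separable specification
```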
Questions.
1. The separability condition above is sufficient for the LMM to be Markovian in the Brownian motion. How far is it from being a necessary one?
2. What is the intuition behind the separability condition?
3. Are there any weaker sufficient conditions?
-
## 2 Answers
You can use a matrix-type separability condition as well. This is similar, but the equation has more flexibility. The rates are then Markovian in some combinations of the Brownian motions. See More Mathematical Finance for details.
-
Thank you for the answer and welcome to Quant.SE! – olaker♦ Dec 20 '11 at 19:57
Re 2: It's a mathematical trick. Insisting on the separability of the volatility function makes the LMM useless. Its power lies in its powerful calibration abilities. If you constrain the vol function to a separable form, you throw that ability out of the window. You might just as well use LGM then, and it will be more intuitive and faster.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9029867053031921, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/82858/fast-multiplication-of-constant-symmetric-positive-definite-matrix-and-vector/82882
|
## Fast multiplication of constant symmetric positive-definite matrix and vector.
Consider the matrix $H=H^T$, $H>0$, $H \in R^{n \times n}$, and the vector $v \in R^n$. In a numerical algorithm, I need to compute the product $b = Hv$. Right now I am following the naive approach: $b_i = \sum_{j=1}^{n} h_{ij} v_j, i=1,...,n$. Is there a faster way to compute this product? $H$ is non-sparse and constant (i.e. eigenvectors, eigenvalues, etc. of $H$ are available).
-
Thank you everyone. In some of the computing platforms on which the algorithm will run (e.g. microcontroller without FPU), there is no BLAS implementation – unknown (google) Dec 7 2011 at 16:00
## 4 Answers
If I understand the question right, by "constant" it is meant that $H$ is a fixed, but arbitrary positive definite matrix.
In general, I don't think that you can compute the matrix-vector product $Hv$ faster than $O(n^2)$. But if $H$ has structure (Toeplitz, Circulant, Strictly diagonally dominant, etc.), or if you are willing to settle for approximate answers, you can compute the product faster ("randomized linear algebra" is a good search term).
A very naive approach is the following: Suppose that $v$ is some vector, and instead of using $H$, you use $H_k$, the top-$k$ rank approximation to $H$. Then, $H_kv$ can be computed in time $O(nk)$. The error of this computation is $\|Hv-H_kv\| \le \|H-H_k\|\times\|v\|$, which is fairly easy to characterize.
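A rough sketch of that rank-$k$ idea, using the eigendecomposition that the question says is already available (numpy-based illustration; the test matrix is made up, and the quality of the approximation depends entirely on how fast the spectrum decays):

```python
import numpy as np

def topk_factors(H, k):
    """One-time precomputation: top-k eigenpairs of the symmetric matrix H."""
    w, V = np.linalg.eigh(H)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx], w[idx]                  # n-by-k factor and k eigenvalues

def approx_matvec(Vk, wk, v):
    """H_k v = Vk diag(wk) Vk^T v, i.e. two thin products costing O(nk)."""
    return Vk @ (wk * (Vk.T @ v))

rng = np.random.default_rng(0)
n = 300
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = (Q * 2.0 ** -np.arange(n)) @ Q.T          # SPD with a rapidly decaying spectrum
v = rng.standard_normal(n)

Vk, wk = topk_factors(H, 30)
err = np.linalg.norm(H @ v - approx_matvec(Vk, wk, v)) / np.linalg.norm(H @ v)
print(err)                                    # tiny here; larger for flat spectra
```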
-
$H$ is fixed and has no special structure. Thanks for the search term. – unknown (google) Dec 7 2011 at 15:56
In practice no, unless $H$ has some additional structure that you haven't told us about (low rank blocks, displacement structure, or similar stuff).
There are some results on faster-than-$O(n^3)$ matrix multiplication (by the way, the exponent has recently been lowered to $O(n^{2.373})$), but as far as I know none of them is actually viable in practice.
Grab the fastest BLAS implementation that you can find for your machine, and use it.
-
Federico: isn't the OP asking for BLAS2? – S. Sra Dec 7 2011 at 11:35
@Suvrit: I have always heard and used $\text{BLAS}=\{\text{BLAS level 1}, \text{BLAS level 2}, \text{BLAS level 3}\}$. Or am I misunderstanding what you're saying? – Federico Poloni Dec 7 2011 at 11:40
Actually, as far as I know the Strassen algorithm IS viable in practice for reasonably large sized matrices (how large is "reasonably large" depends on how vectorized your FPU is, historically 100x100 was big enough). This will help the OP, since he can perform $n$ of his multiplications in time $O(n^{2.7})$, so $O(n^{1.7})$ per multiplication. – Igor Rivin Dec 7 2011 at 11:46
@Federico: I meant that the OP is asking for a matrix-vector multiplication, not matrix-matrix. But as Igor also notes, as do you, if the OP requires $Hv$ for several $v$'s then, yes, it makes sense to start thinking about matrix-mult. algos. – S. Sra Dec 7 2011 at 12:48
In practice, on typical desktop computers and server class machines using the x86-64 architecture, matrix-vector multiplication is limited more by memory bandwidth than floating point operations. This happens because there are no opportunities in matrix-vector multiplication for bringing data in from memory to cache and reusing it before flushing it out of the cache. Getting a matrix-vector multiply to run at full memory bandwidth is generally quite easy.
As others have pointed out, you might be able to go faster if the matrix has specialized structure (e.g. if it's a Toeplitz matrix you can do the multiplications in $O(n \log n)$ time).
-
You can use the symmetry of your matrix to cut the memory traffic in half, but it takes careful coding to get that efficiency in practice. – Brian Borchers Dec 7 2011 at 15:55
In fact, $O(n^2)$ arithmetic operations is not avoidable, in general. A general result by Winograd (see "Algebraic complexity theory" by Buergisser, Clausen, and Shokrollahi, Sect. 13.2) shows that for a generic $m\times n$ matrix (all the entries are algebraically independent) and a generic vector one will need $O(mn)$ multiplications and additions (there is a precise formula too, but $O(mn)$ would do for our purposes). PSD matrices are symmetric, so directly it won't apply to $H$. However, you can cut $H$ into blocks: $H=\begin{pmatrix} H_{11}&H_{12}\\ H_{12}^\top&H_{22}\end{pmatrix}$. Then $H_{12}$ is generic, in general, and, writing $v$ as a block vector $v=(v^1,v^2)$ one has $b=(b^1,b^2)=(H_{11}v^1+H_{12}v^2, H_{12}^\top v^1+H_{22}v^2)$. Now Winograd's result is applicable to $H_{12}$, so you still get $O(n^2)$, as you cannot avoid computing $H_{12}v^2$ when you try to get $b$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364172220230103, "perplexity_flag": "middle"}
|
http://learningandphysics.wordpress.com/2012/04/19/longitudinal-standing-waves-in-a-spring/
|
Physics, Modeling Instruction, Educational Technology, and other stuff I find interesting
# Longitudinal Standing Waves in a Spring
Posted on April 19, 2012
In the spring of 2011 I had a pretty amazing student teacher, Mat. One of the cool things he did was to pillage his college lab equipment (my alma mater as well, St. Olaf College) to bring in a function generator that drives a speaker, producing standing waves on a string. I found out towards the end of the year that Vernier makes equipment that, though not as high end as what he brought in, does the job for about \$300 (I needed a power amplifier and speaker accessory, as I already had a Labquest and Loggerpro, either of which can act as a function generator). I had taught waves for years, but lacked this equipment which, through his example, I found made standing waves much more tangible for students.
So this year I started standing waves just as Mat did, by producing standing waves on a string, shown below, and deriving the pattern that yields that the nth harmonic is n times the fundamental frequency (n=positive integers), or
$f_{n}=nf_1$
3rd harmonic of a standing wave on a string
A node for a standing longitudinal wave on a spring
Mat had found a cool video which showed standing waves on a spring, so naturally, I wanted to replicate this for myself. I managed to do so without too much effort, and it’s pretty sweet. Nodes show up as single coils that don’t move, and both nodes and antinodes can be identified by sticking a small piece of paper in the spring. Antinodes move the paper up and down by a couple of centimeters, while at the nodes the paper remains stationary. This is better shown in the video I made to show the apparatus and how it works, linked here as well as embedded at the bottom of the post.
I figured that since the spring was fixed at one end but driven at the other, it would act like a node-antinode (NA) standing wave, similar to a pipe closed at one end and open at the other. This follows the same pattern as above except that n can only be odd; that is, only the odd harmonics are produced. Additionally, I was not certain without theory how the fundamental frequencies of the two would relate. Though I could find nodes (which was awesome!), I failed for a while to come up with a nice pattern for the resonant frequencies that mimicked NA.
A node formed near the end of the spring
Then springs started falling off, which was weird and happened only in some resonant cases. Next I noticed that while most of the time the nodes were not evenly spaced (as would be expected with an antinode at the driver), occasionally I found a node right at the driver, as shown at right, which I figured meant that it was part of the NN pattern.
I started wondering what kind of patterns I may actually be producing and if there was theory to back them up. Could I have a node in some cases at the driver, while in other cases have an antinode? How would the fundamentals of each pattern be related?
I had looked around for a while to find a way to calculate the theoretical fundamental frequency for waves on a spring. After much searching I finally found this article, which states that the speed v of a longitudinal wave on a spring of mass $m_{s}$ with spring constant k and length L is
$v=L\sqrt{\frac{k}{m_{s}}}\text{ \ \ (eq 1)}$
All references I found to this equation state that it could then be used with the following equation to find the resonant frequencies of longitudinal waves on a spring;
$f_{n}=\frac{nv}{2L} \text{ for }n=1,2,3...\text{ \ \ (eqn 2)}$
which means that they assume that there is a node at both ends of the spring, such that it behaves like a standing wave on a string. Substituting equation 1 into equation 2 yields
$f_{n}=\frac{nv}{2L}=\frac{n\left[L\sqrt{\frac{k}{m_{s}}}\right]}{2L} \Rightarrow f_{n}=\frac{n}{2}\sqrt{\frac{k}{m_{s}}}\text{ for }n=1,2,3...\text{ \ \ (eqn 3)}$
Thus once the value of $f_{1}=\frac{1}{2}\sqrt{\frac{k}{m_{s}}}$ is established, the frequency of the nth harmonic for NN standing waves is simply
$f_{n}=nf_{1}\text{ for } n=1,2,3... \text{ \ (eqn 4)}$
This would be great, except that it didn’t fit the values that I was getting! Additionally, I noticed that I would get other standing waves in-between the ones that I thought were the harmonics I was looking for. So then I took the equation for the harmonic frequencies for a standing wave with a node at one end and an antinode at the other,
$f_{n}=\frac{nv}{4L} \text{ for }n=1,3,5...\text{ \ \ (eqn 5)}$
and substituted equation 1;
$f_{n}=\frac{nv}{4L}=\frac{n\left[L\sqrt{\frac{k}{m_{s}}}\right]}{4L} \Rightarrow f_{n}=\frac{n}{4}\sqrt{\frac{k}{m_{s}}}\text{ for }n=1,3,5...\text{ \ \ (eqn 6)}$
For my spring with mass 11.6 grams and experimentally determined spring constant 1.14 N/m, I should get a fundamental at 5 Hz for NN standing waves and 2.5 Hz for NA standing waves. The fundamentals are almost impossible, in my experience, to see due to the lack of nodes (the NN pattern should have a node at the bottom, but I had trouble getting a passable node for the fundamental), and thus I started confirming with n=2. My predictions and data are shown below;
As you can see, the node-antinode predictions work out quite well, but the node-node predictions are systematically low. I suspect this has to do with the fact that the node at the bottom of the spring is not all the way at the bottom of the spring, so the length of the spring needs to be adjusted, much as end effects change the effective length of a pipe that is open at the end. I have to find some time to investigate this more formally, however.
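For anyone who wants to check the predicted values themselves, here is a minimal C sketch (my own illustration, not part of the original apparatus or post) that evaluates eqn 3 and eqn 6 for the spring constant and spring mass quoted above:
````
#include <stdio.h>
#include <math.h>

int main(void) {
    double k    = 1.14;          /* measured spring constant, N/m */
    double m_s  = 0.0116;        /* spring mass, kg (11.6 grams) */
    double root = sqrt(k / m_s); /* the common factor sqrt(k/m_s) */

    /* Node-node pattern, eqn 3: f_n = (n/2) sqrt(k/m_s), n = 1, 2, 3, ... */
    printf("node-node harmonics:\n");
    for (int n = 1; n <= 4; n++)
        printf("  n = %d: %.2f Hz\n", n, n * root / 2.0);

    /* Node-antinode pattern, eqn 6: f_n = (n/4) sqrt(k/m_s), n = 1, 3, 5, ... */
    printf("node-antinode harmonics:\n");
    for (int n = 1; n <= 7; n += 2)
        printf("  n = %d: %.2f Hz\n", n, n * root / 4.0);

    return 0;
}
````
With these inputs the two fundamentals come out at roughly 5 Hz and 2.5 Hz, matching the figures quoted above.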
One interesting aspect of spring standing waves is that the frequencies that generate standing waves do not depend on the length of the spring. This can be shown by changing the length of the spring and observing that the standing wave remains. Note that the above mentioned end effect can show up because the L term in equation 1 (which is the total length of the spring) would not completely cancel with the L term in equation 2 (where L is the effective length of the standing wave, node to node) if those two lengths differ.
One obvious application of the dual pattern is organs. Organs utilize both open-open (node-node) and open-closed (node-antinode) pipes, and the relationships between their frequencies can easily be demonstrated (and calculated, though as an analogy since v is calculated differently with a spring) using this apparatus. Plus it’s really cool.
One last fun fact: it turns out that node-node AND node-antinode patterns can also be produced on a string with transverse waves; more on that in a later post. For now, check out the video of the longitudinal standing waves on a spring, below.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519307613372803, "perplexity_flag": "middle"}
|
http://ulissesaraujo.wordpress.com/2009/04/01/cryptol-the-language-of-cryptography/
|
# Ulisses Costa Blog
## Cryptol the language of cryptography
1 04 2009
Pedro Pereira and I are working on a new project in the Masters. The second half of the Masters consists of a single project suggested by a company. Several companies have formed partnerships with the formal methods Masters, including Critical Software, SIG and Galois. We chose Galois because we also work in the area of cryptography and we already knew some of the work done by people at this company.
The project suggested by Galois was to study Cryptol as a specification language for cryptographic algorithms. The cipher we used for this study is SNOW 3G (see the SNOW website); later on I will talk about the specification of this cipher. In this post I am only interested in showing the language.
I’m going to show you some details about the language. This post is not intended to be an exhaustive explanation of Cryptol; if you are looking for that, you can go directly to the manuals. This post only relates my experience and what I like most about the language.
## Overview
Cryptol is a high-level language that is geared towards low-level problems. It is a domain-specific language for designing and implementing cryptographic algorithms.
This language makes it much easier to get a correct implementation of a cipher, because it implements type inference, so we can say that a big part of the correctness checking is done by the language itself. This correctness is also helped by the architecture of the language – it is functional. We don’t have side effects – a function only returns something inside its codomain.
In Cryptol we have the philosophy that everything is a sequence. This is very useful because we are working with low-level data (arrays of bits), so we use sequences to represent those arrays. We can have nested sequences to get a more structured representation of the data. For example, we can simply transform a 32-bit sequence into a sequence of four 1-byte words.
The size of these sequences can be finite or infinite, as we are going to see later in this post. Because Cryptol is a high-level language we can also write polymorphic functions; most of the primitive functions are polymorphic. The way we navigate through sequences is by using recursion or sequence comprehensions, and with these two techniques we can implement recurrences.
If you are a Haskell programmer you just need the next section to learn Cryptol. The language looks so much like Haskell that even the philosophy seems to have a lot in common.
## Types in Cryptol
The type $[32]$ means that you have a sequence of size 32 bits. All the types in Cryptol are size oriented. The basic unit is $Bit$, which you can use to represent $Bool$. To represent an infinite sequence we use the reserved word $inf$, and we write $[inf]$ for its type.
If you want to generate an infinite sequence, we use the syntactic sugar for sequences, like this: $[1~..]$. Cryptol will infer the type of this sequence as
$[1~..]~:~[inf][1]$
That means this sequence has infinitely many positions of 1-bit words. The type inference mechanism will always optimize the size it needs to represent the information.
So, it infers the type of $[100~..]$ as:
$[100~..]~:~[inf][7]$
Because it “knows” that it needs only 7 bits to represent the decimal $100$. But if you need more, you can force the type of your function.
We can also have polymorphism in our types; if we have:
$f~:~[a]b~\rightarrow~[a]b$
This means that the function $f$ is polymorphic over $b$, because we say that its domain is a sequence of size $a$ of type $b$, and its codomain as well. We could also see $f~:~[a][b]c$, meaning that $f$ is a constant: a sequence of $a$ sequences, each of size $b$ and type $c$.
So, let's talk about some primitive functions in Cryptol and their types. The $tail$ function has the following type in Cryptol:
$tail~:~\{a~b\}~[a+1]b~\rightarrow~[a]b$
As we can see, Cryptol is so size oriented that we can use arithmetic operators in types. We can probably infer what this function does just from its type: $tail$ works for all $a$ and $b$ such that if we have one sequence of size $a+1$ of type $b$ it returns one sequence of size $a$ of the same type. In fact this function removes the first element of a sequence.
Because of this size-oriented philosophy, the behaviour of a lot of functions that change the size of sequences can be read just from their types.
As you can see in the following list of Cryptol primitive functions:
$drop~:~\{ a~b~c \}~( fin~a ,~a~\geq~0)~\Rightarrow~(a ,[ a + b ]~c )~\rightarrow~[ b ]~c$
$take~:~\{ a~b~c \}~( fin~a ,~b~\geq~0)~\Rightarrow~(a ,[ a + b ]~c )~\rightarrow~[ a ]~c$
$join~:~\{ a~b~c \}~[ a ][ b ] c~\rightarrow~[ a * b ]~c$
$split~:~\{ a~b~c \}~[ a * b ] c~\rightarrow~[ a ][ b ]~c$
$tail~:~\{ a~b \}~[ a +1] b~\rightarrow~[ a ]~b$
## Recursion and Recurrence
Cryptol supports recursion, just like a lot of functional languages do.
Take the usual mathematical definition of the Fibonacci function: its implementation in Cryptol is exactly the same as that definition.
```fib : [inf]32 -> [inf]32;
fib n = if n == 0 then 0 else if n == 1 then 1 else fib (n-1) + fib (n-2);```
Cryptol uses recursion to let us iterate through sequences.
But if you prefer, you can implement the Fibonacci function in a more idiomatic functional style in Cryptol:
```fib : [inf]32 -> [inf]32;
fib n = fibs @ n;
where {
fibs : [inf]32;
fibs = [0 1] # [| x + y || x <- drop (1,fibs) || y <- fibs |];
};```
Here, as you can see, we define an infinite list $fibs$ of all the Fibonacci numbers by using $fibs$ itself inside the sequence comprehension; this is called a recurrence, and you can use that too in Cryptol.
## Cryptol vs C
I’m going to show you a small part of the implementation of SNOW 3G, first in Cryptol and then in C. This is the function called $MUL_{\alpha}$:
```MULa : [8] -> [32];
MULa(c) = join ( reverse [
( MULxPOW(c, 23 :[32], 0xA9) )
( MULxPOW(c, 245:[32], 0xA9) )
( MULxPOW(c, 48 :[32], 0xA9) )
( MULxPOW(c, 239:[32], 0xA9) ) ] );
```
```/* The function MUL alpha.
Input c: 8-bit input.
Output : 32-bit output.
See section 3.4.2 for details.
\*/
u32 MULalpha(u8 c) {
return
((((u32)MULxPOW(c,23, 0xa9)) << 24 ) |
(((u32)MULxPOW(c, 245,0xa9)) << 16 ) |
(((u32)MULxPOW(c, 48,0xa9)) << 8 ) |
(((u32)MULxPOW(c, 239,0xa9)))) ;
}
```
You can see that in Cryptol we just say that we want to work with a 32-bit word, and we don’t need to do any shifts on the parts of the word. We just join them together. We reverse the sequence because Cryptol stores words in little-endian order, and we want to keep the definition just like the specification.
This is a very simple function, so the result in C is not that different. But with a more complex function, writing it in C would quickly become a nightmare.
## Conclusion
Well, the conclusion is that Cryptol is a language that really helps with writing low-level algorithms. With Cryptol the specification is formal and easier to read than in other languages. Another value of Cryptol is that the code can be converted to other languages, such as VHDL and C.
If you’re interested, take a look at the presentation that we did.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215491414070129, "perplexity_flag": "middle"}
|
http://stats.stackexchange.com/questions/38305/terminology-for-different-types-of-epidemiological-variable
|
Terminology for different types of epidemiological variable
If I have a data set with $n$ patients and I have a categorical variable $X$ for each patient, I want to drop all patients whose $x$ is a particular value.
Question 1: what is the term (preferably the epidemiological term) for variable $X$? (Is it an exclusion criteria variable [I've never seen this term before, I'm just guessing], or is there a commonly used term?)
Next, I want to subset the resulting data set into two data sets according to the values of a dichotomous variable $Y$ for each patient such that all patients for which $y = 0$ will go in data set A whereas all patients for which $y = 1$ will go in data set B. I will then treat data sets A and B separately and do statistical tests within each. Following this, I would perform comparisons between A and B.
Question 2: what is the term (again preferably the epidemiological term) for variable $Y$? (I was thinking something along the lines of subsetting variable, but again, I've never seen this term.)
-
2 Answers
Let us assume we're talking about a particular categorical variable. Let's say a "residence" variable that indicates if someone lives in a private home, an apartment, a dorm or group living space, a nursing home, a prison, or other.
For your first example, I don't know that I'd call the variable anything. I'd probably end up just saying what value of the variable I ended up excluding. For example: "Subjects who were incarcerated in correctional institutions were excluded from the study."
In the second example, depending on what you're doing you might say you're "stratifying by Y". That being said, I'd never be really all that comfortable with splitting them into two entirely separate data sets and analyzing one utterly without the context of the other. Unless it's just a way to partition things that really should be two separate studies - but in that case you don't need to talk about the variable, as you're really just running two separate studies whose data you happened to get in the same file.
-
In the first instance, I'd run tests within A and B separately. However, ultimately, I'd want to compare the results of A with the results of B. So, perhaps stratifying variable? I just need a single, understandable term to summarize my study design in tabular format. – jetistat001 Sep 30 '12 at 8:23
Yeah, I'd just say "stratified by Y". – EpiGrad Sep 30 '12 at 8:24
For the first question, there is no particular term for X. Exclusion criteria is a term used to refer to rules used to exclude patients from being enrolled in the trial. There may be an analogy here to the process of excluding data when a mathematical condition exists but it is a different situation and I have never heard the term exclusion criteria used in that case.
In my experience with clinical trials the analysis you are describing in the second question is most commonly called subgroup analysis. I imagine the epidemiologist use that term also.
-
The only reason I wouldn't call it a subgroup analysis is the suggestion that A & B are treated entirely separately, rather than B being a subset of A. – EpiGrad Oct 1 '12 at 3:37
The term subgroup analysis is used to define an analysis of any subset of the original data set. – Michael Chernick Oct 1 '12 at 10:16
I'm not saying its wrong, I'm saying I'd find it odd. – EpiGrad Oct 1 '12 at 10:18
Why odd? Each analysis involves a subgroup of a full larger data set. – Michael Chernick Oct 1 '12 at 10:38
Generally speaking, as I mentioned, I see "subgroup" analysis used to look at a particular group of interest as part of a greater whole. Partitioning the data into a number of mutually exclusive subgroups and comparing them, without any indication of a "whole study" analysis, feels far more like stratification than a subgroup analysis. – EpiGrad Oct 1 '12 at 10:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9511569142341614, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/192433/derivative-of-neural-network-function
|
# Derivative of Neural Network function
I would like to code this neural net activation function, using the C language:
$$f(x) = 1.7159 \tanh( (2/3) x )$$
and will also need to code its derivative. I've read that the derivative of $\tanh(x)$ is $\operatorname{sech}^2(x)$, but since C doesn't have a hyperbolic secant function I will need to use $\cosh$, i.e. the derivative of $\tanh(x)$ is $1/\cosh^2(x)$, I think.
Since my knowledge of calculus is very rusty, my best attempt for the derivative of the above function is:
$$1/\cosh^2((2/3)x)$$
Is this correct?
-
## 1 Answer
The derivative is $\frac{2 \cdot 1.7159}{3} \cdot \frac{1}{\cosh^2 \frac{2x}{3}}$ by the chain rule. Note that it can also be written as $\frac{2 \cdot 1.7159}{3} \left(1- \tanh^2 \frac{2x}{3}\right)$, which might make slightly more sense as you then only need to compute $\tanh$ (handy if you want to find $f(x)$ and $f'(x)$ at the same time). All the useful relations can be found on Wikipedia.
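For completeness, here is a minimal C sketch of both the activation and its derivative following the formulas above (the function names are just illustrative):

````
#include <math.h>

/* f(x) = 1.7159 * tanh(2x/3) */
double activation(double x) {
    return 1.7159 * tanh(2.0 * x / 3.0);
}

/* f'(x) = (2 * 1.7159 / 3) * (1 - tanh^2(2x/3));
   reusing tanh avoids the missing sech() and shares work with activation(). */
double activation_derivative(double x) {
    double t = tanh(2.0 * x / 3.0);
    return (2.0 * 1.7159 / 3.0) * (1.0 - t * t);
}
````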
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.97069251537323, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/133276/normalize-x-to-0-to-10-scale-with-asymptotes-at-either-end?answertab=oldest
|
# Normalize $X$ to $0$ to $10$ scale with asymptotes at either end
I am trying to find a scaling function that mimics the gas gauge in a car. I would like to map a value to a $0$ to $10$ scale, on which I have two known points. For example:
$X_1 = 2$, $Y_1 = 2.5$, $X_2 = 4$, $Y_2 = 7.5$, $X_3 =$ __
The known points, $X_1$ and $X_2$ should always correspond to $Y$ values of $2.5$ and $7.5.$ A simple linear equation is not sufficient, since it's possible for the $X_3$ value to generate a $Y$ that is greater than $10$, or less than $0.$
Can you think of a suitable function that matches the known points, but will approach a limit at $0$ and $10$? Thanks for any guidance!
-
Hi, welcome to Math.SE. I recently edited your post - feel free to click the bottom of your post to see the revision history. When TeXing/using numbers or variables in your post, you can denote them with the money sign (shift+4) at the front and end of the text you want TeXed. Take a peek around at other posts to learn the syntax for things if you don't know! – Joe Apr 18 '12 at 3:02
## 2 Answers
You can use the arctangent function, which has horizontal asymptotes as its argument goes to $\pm \infty$. As it ranges from $\frac {-\pi}2$ to $\frac \pi 2$ we need to rescale to get your range of $(0,10)$, so we want $10\left(\frac 1\pi \arctan X_{scaled} + \frac 12\right)$. Now all we have to do is figure out how to scale $X$. This is not too hard as $X_{scaled}=\pm 1$ gives $2.5, 7.5$, so $X_{scaled}=\frac {2(X-(X_1+X_2)/2)}{X_2-X_1}$. The final answer is $Y=10\left(\frac 1\pi \arctan \frac {2(X-(X_1+X_2)/2)}{X_2-X_1} + \frac 12\right)$
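In case a code sketch helps, here is a minimal C version of this mapping (the function name and the use of `acos(-1.0)` for $\pi$ are just illustrative choices):

````
#include <math.h>

/* Maps x onto (0, 10) so that x1 -> 2.5 and x2 -> 7.5, with horizontal
   asymptotes at 0 and 10 as x goes to minus/plus infinity. */
double gauge(double x, double x1, double x2) {
    const double pi = acos(-1.0);
    double x_scaled = 2.0 * (x - (x1 + x2) / 2.0) / (x2 - x1);
    return 10.0 * (atan(x_scaled) / pi + 0.5);
}
````

For the example values $X_1=2$, $X_2=4$, this returns $2.5$ at $X=2$ and $7.5$ at $X=4$.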
-
Another option is to use a function of the form $$\frac{10}{1+ae^{-bx}}$$ which is bounded by 0 and 10 as required.
With the requirements of $(2,2.5),(4,7.5)$ we have
$$a e^{-2b} =3 \qquad \text{ and } ae^{-4b}=\frac{1}{3}$$ Dividing to eliminate the $a$, we have $e^{2b}=9, b =\frac{\ln(9)}{2}$ and $a = 27$. This reduces to
$$\frac{10}{1+27(3^{-x})}$$
This has the benefit of being fairly linear through your data points as shown by the dashed line in the figure.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9596190452575684, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/11131/determine-if-solution-to-linear-system-exists/11152
|
Determine if solution to linear system exists
I'm trying to determine only if a solution to a linear system of equations exists. I have been using `LinearSolve`, which works fine, but it solves the system as well. Is there another more efficient method for only checking the existence of a solution?
-
2
For a one-off problem where nothing special is known about the system beforehand, actually solving the equation gives you access to the most efficient methods. In MMA, `LinearSolve` is very fast and (for general-purpose work involving non-square matrices) provides solutions an order of magnitude faster than other methods using (say) `MatrixRank`, `RowReduce`, or `Minors`. As an example of how extra info can help, if it's known the coefficient matrix is orthogonal, then you already know a solution exists. – whuber Sep 26 '12 at 19:07
There is some important discussion in the comments to my answer. – Vitaliy Kaurov Sep 27 '12 at 8:24
3 Answers
==== Update ====
Please consider important discussion in the comments.
==== Original answer ====
If the matrix m has determinant zero, then there may be either no vector, or an infinite number of vectors x which satisfy m.x==b for a particular b. This occurs when the linear equations embodied in m are not independent. If you are interested only in well-defined systems, then, generally, confirming that you have a non-zero determinant is faster:
````m = RandomReal[1, 1000 {1, 1}];
b = RandomReal[1, 1000];
````
Some timing tests:
````Mean@Table[LinearSolve[m, b]; // AbsoluteTiming, {30}][[All, 1]]
````
0.0712654
````Mean@Table[Det[m]; // AbsoluteTiming, {30}][[All, 1]]
````
0.0571327
-
2
I suspect the performance trade may fall the other way depending on the size and character of the system. The most efficient methods for solving large systems usually do not involve directly computing the determinant. (LinearSolve automatically picks an efficient method..) – george2079 Sep 26 '12 at 17:47
@george2079 the benchmark i posted is pretty general and the system is pretty large. Of course, some specific systems may show a different result. – Vitaliy Kaurov Sep 26 '12 at 17:53
3
This has two problems. For approximate matrices it is not numerically sound. Far safer is to check singular values for zeros below some tolerance. Else you can get a false positive, that is, a claim of solvability when the matrix is singular or nearly so. The other problem is that (to be cont'd) – Daniel Lichtblau Sep 27 '12 at 1:34
1
... in the exact case, e.g. integers, Det computation can be slower than actual solving. Example:In[22]:= n = 2^10; mm = RandomInteger[{-1, 1}, {n, n}]; vec = RandomInteger[{-1, 1}, n]; In[27]:= AbsoluteTiming[sol = LinearSolve[mm, vec];] Out[27]= {11.3440000, Null} In[28]:= AbsoluteTiming[Det[mm] == 0] Out[28]= {53.7370000, False} – Daniel Lichtblau Sep 27 '12 at 1:34
1
A third problem is that computing a determinant works only when there are exactly as many equations as variables. There is a generalization (implementable via `Minors`), but it is likely to be relatively inefficient (there can be a lot of minors to compute and store). – whuber Sep 27 '12 at 1:35
The upshot of Vitaliy's note to check for a zero determinant is that
1. A matrix with inexact entries that is supposed to be singular (e.g. because it is rank-deficient) might not necessarily give a determinant that is exactly zero, due to roundoff.
2. A tiny determinant does not necessarily imply that the matrix is "nearly" singular. (Conversely, just because a matrix does not have a tiny determinant doesn't mean that `LinearSolve[]` won't have a problem handling it.) See for instance this answer I wrote at scicomp.SE.
Thus, to safely determine if a matrix is singular, you can do any of two things:
1. Check if the output of `NullSpace[]` is an empty list. If its output on your matrix is `{}`, the matrix is nonsingular; otherwise, the number of null vectors it produces (the nullity) gives an indication of how rank-deficient it is.
2. Use the undocumented function `LinearAlgebra`MatrixConditionNumber[]`. Checking for singularity is as easy as seeing if its output on your matrix is $\infty$, in which case, your matrix is singular. As a bonus, if the value returned is huge, but not necessarily infinite, you still have a good warning sign that `LinearSolve[]` might treat your matrix as singular even if it isn't. See any good numerical linear algebra book (e.g. Golub/Van Loan) for details.
-
1
P.S. The nice thing about `NullSpace[]` is that it can flexibly deal with both exact and inexact matrices; IIRC it uses Gaussian elimination in the exact case, and the (safer) singular value decomposition in the inexact case. – J. M.♦ Sep 26 '12 at 23:10
Odd. If you get a solution, you know that a solution exists, isn't it? Anyway, what you can do is suppress the output by adding a `;` to your input. Then
````LinearSolve[{{4, 8}, {9, 2}}, {x, y}];
````
returns nothing, while
````LinearSolve[{{4, 8}, {9, 2}, {0, 0}}, {x, y, z}];
````
returns
````LinearSolve::nosol: Linear equation encountered that has no solution.
````
-
`LinearSolve` carries out all of the steps of solving the system when all I need is a yes/no to whether or not a solution exists. So will `LinearSolve` be the most efficient way of doing this regardless? – A. R. S. Sep 26 '12 at 17:16
1
@A.R.S. - My guess is that the steps to check if there's a solution are the same as calculating the solution. (disclaimer: I'm not a mathematician, so you may wish confirmation from the Real Guys :-)) – stevenvh Sep 26 '12 at 17:19
@A.R.S., what mathematical "magic" do you think might be done to determine whether a linear system has a solution that would be substantially different from actually attempting to find solution? Except in special cases, as others have noted, in each situation some sequence of transformations will need to be done (whether row reduction or something more sophisticated like a matrix decomposition). – murray Sep 27 '12 at 14:59
@murray - Maybe he's thinking about something like primality testing, where you can say a number is composite in a second, but may need a couple of months to find the factors. – stevenvh Sep 27 '12 at 15:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8802106976509094, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/18364/sketching-graphs
|
# Sketching graphs
How are graphs plotted or sketched?
If you have graph-plotting software, for example Mathematica or MATLAB, and you want to see what the graph of $e^{2x}$ looks like, how do you plot/sketch such a graph?
If you proceed by using a finite number of points, and "extrapolating" and "interpolating", then how do you know that that method is reliable? There are infinitely many smooth(or non-smooth) graphs that can be drawn through a finite number of points.
-
1
Are you asking how to use the software or what the software does? The first question seems off-topic. The answer to the second question is that convergence of PL approximations is a consequence of continuity, and we generally assume that the functions we're interested in are continuous. (This may sometimes be false, e.g. near a singularity, and even then one has to worry about numerical instability when solving differential equations, for example. But for plotting a function like e^{2x} these issues don't arise.) – Qiaochu Yuan Jan 20 '11 at 23:44
what the software does, and why hand sketched graphs are reliable. – picakhu Jan 20 '11 at 23:48
4
Not all software packages check to see that their sketches are reliable. Sometimes the sketches generated by software can be very bad -- frequently it's a poor choice of scale, or poor perspective, or the software doesn't know where the "interesting stuff" is happening. Sometimes the software doesn't notice peculiarities of the function -- vertical asymptotes can cause trouble. "Good software" tends to use adaptive methods that find reasonable bounds on things like $|f(x)|$ and $|f'(x)|$ before attempting to sketch, but it really depends. – Ryan Budney Jan 20 '11 at 23:58
This is not a simple question. Even advanced plotting software like Mathematica has its issues with some nasty oscillating function graphs, so there is no method yet that always works. However if your function is smooth enough you can make a reasonable interpolation with polynomials and use this to draw the function. – Listing Jan 21 '11 at 0:03
## 1 Answer
Basically (keyword: "basically"), it works like this:
````BEGIN
    DECLARE x
    SET x TO x1 + step
    WHILE x IS LESS THAN OR EQUAL TO x2
        DRAW LINE BETWEEN f(x - step) AND f(x)   // "linear interpolation"
        INCREASE x BY step
    END WHILE
END
````
Where step is some small number... Something like 0.01, depending on how fast you want it. Letting $step \to 0^{+}$ would be "perfectly smooth", but we don't have the resources for such measures, so we're stuck with anything between $10^{-50}$ and $10^{50}$.
They won't be "infinitely smooth". That would require infinite amounts of RAM and CPU. It only has to "look" smooth. It's only limited by your RAM/CPU, time (unless you feel like waiting a few trillion years), and most importantly, your screen resolution.
More complex plotters can determine the vertical asymptotes, but I don't need to get into that.
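If you want to see it in actual code, here is a minimal C sketch (my own illustration, not taken from any particular plotting package) that samples $f(x) = e^{2x}$ at evenly spaced points; a plotter would then join consecutive samples with straight line segments:

````
#include <stdio.h>
#include <math.h>

int main(void) {
    double x1 = -2.0, x2 = 2.0, step = 0.01;

    /* Print (x, f(x)) pairs; feeding these into any plotting tool and
       joining consecutive points with line segments gives the "sketch". */
    for (double x = x1; x <= x2; x += step)
        printf("%g\t%g\n", x, exp(2.0 * x));

    return 0;
}
````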
-
I understand this, but when is it reliable? – picakhu Jan 21 '11 at 3:41
See Ryan's comment above - a good graphing package will attempt to heuristically estimate the modulus of continuity. This is not possible (non-recursive) in the black-box model, but if the function is given explicitly enough, then by bounding the derivative you can bound the modulus of continuity and so draw a meaningful graph. – Yuval Filmus Jan 21 '11 at 4:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334626197814941, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/10635/why-are-the-characters-of-the-symmetric-group-integer-valued/10638
|
## Why are the characters of the symmetric group integer-valued?
I remember one of my professors mentioning this fact during a class I took a while back, but when I searched my notes (and my textbook) I couldn't find any mention of it, let alone the proof.
My best guess is that it has something to do with Galois theory, since it's enough to prove that the characters are rational - maybe we have to find some way to have the symmetric group act on the Galois group of a representation or something. It would be nice if an idea along these lines worked, because then we could probably generalize to draw conclusions about the field generated by the characters of any group. Is this the case?
-
What exactly do you mean by "the characters of the symmetric group are integers"? They're functions on the conjugacy classes to $\mathbb{C}$. They're even indexed by partitions of integers (Well, irreducible ones are) but I'm not seeing right off in what way they can be called integers. – Charles Siegel Jan 3 2010 at 23:49
1
I meant that their values are integers. – zeb Jan 3 2010 at 23:53
You should have said "Z-valued" or "integer-valued" instead of "integers". – Chandan Singh Dalawat Jan 4 2010 at 6:46
## 5 Answers
If $g$ is an element of order $m$ in a group $G$, and $V$ a complex representation of $G$, then $\chi_V(g)$ lies in $F=\mathbb{Q}(\zeta_m)$. Since the Galois group of $F/\mathbb{Q}$ is $(\mathbb{Z}/m)^\times$, for any $k$ relatively prime to $m$ the elements $\chi_V(g)$ and $\chi_V(g^k)$ differ by the action of the appropriate element of the Galois group.
If $G$ is a symmetric group and $g$ an element as above, then $g$ and $g^k$ are conjugate: they have the same cycle decomposition. So $\chi_V(g)=\chi_V(g^k)$ whenever $(k,m)=1$, and thus $\chi_V(g)\in \mathbb{Q}$.
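For a concrete instance of the argument: take $g=(1\,2\,3\,4\,5)$ in $S_5$, so $m=5$ and $\chi_V(g)\in\mathbb{Q}(\zeta_5)$. The powers $g, g^2, g^3, g^4$ are all $5$-cycles, hence conjugate, so $\chi_V(g)=\chi_V(g^k)=\sigma_k(\chi_V(g))$ for every $\sigma_k\in\mathrm{Gal}(\mathbb{Q}(\zeta_5)/\mathbb{Q})$. Thus $\chi_V(g)\in \mathbb{Q}$, and since character values are algebraic integers, $\chi_V(g)\in\mathbb{Z}$.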
-
1
Simpler then I expected. I guess the general fact for arbitrary groups is that if $g^k$ is conjugate to $g$, then $\chi(g)$ is in the fixed field of $\zeta \rightarrow \zeta^k$. – zeb Jan 4 2010 at 0:42
I just want to emphasize that this question points at the rationality theory of representations and characters that is exposed so beautifully in Chapters 12 and 13 of Serre's book Linear Representations of Finite Groups.
In particular, one has the following facts.
[Section 13.1, Corollary 1]: The following are equivalent:
(i) Every character of $G$ is $\mathbb{Q}$-valued.
(ii) Every character of $G$ is $\mathbb{Z}$-valued.
(iii) Every conjugacy class of $G$ is rational: for every $g \in G$ and positive integer $k$ prime to the order of $g$, $g^k$ is conjugate to $g$.
As noted above, since raising an element of the symmetric group $S_n$ to a power prime to its order does not change the cycle decomposition, condition (iii) holds and the implication (iii) $\implies$ (ii) answers the question. [The proof is the basic Galois-theoretic argument given in some other answers. The implication (ii) $\implies$ (iii) is deeper in that it uses the irreduciblity of the cyclotomic polynomials.]
Some others have said that the shortest or simplest proof arises from knowing that all of the irreducible representations of $S_n$ can be explicitly constructed and therefore seen to be realizable over $\mathbb{Q}$. I respectfully disagree. This is a nontrivial theorem of Young which Serre refers to but does not prove in his book (Example 1, p. 103).
Moreover, Serre explains that the condition of rationality of characters is in general weaker than rationality of representations: there are obstructions here in the Brauer group of $\mathbb{Q}$! Namely, by Maschke's Theorem the group ring $\mathbb{Q}[G]$ is semisimple, say a product of simple $\mathbb{Q}$-algebras $A_i$ which are in bijective correspondence with the irreducible $\mathbb{Q}$-representations $V_i$. By Schur's Lemma, $D_i = End_G(V_i)$ is a division algebra, and one has $A_i \cong M_{n_i}(D_i)$. Then:
[Section 12.2, Corollary]: The following are equivalent:
(i) Each $D_i$ is commutative.
(ii) Every $\mathbb{C}$-representation of $G$ is rational over the abelian number field generated by its character values.
Thus just knowing that the character table is $\mathbb{Z}$-valued is not enough. The standard example [Exercise 12.3] is the quaternion group $G =$ {$\pm 1, \pm i, \pm j, \pm k$} for which
$\mathbb{Q}[G] \cong \mathbb{Q}^4 \oplus \mathbb{H}$,
where $\mathbb{H}$ is a division quaternion algebra over $\mathbb{Q}$, ramified at $2$ and $\infty$. It corresponds to an irreducible $2$-dimensional $\mathbb{C}$-representation with rational character but which cannot be realized over $\mathbb{Q}$.
-
The quickest answer is because all of the irreducible representations of the symmetric group can be constructed over the field of rational numbers. See the wikipedia article on Young symmetrizers for example http://en.wikipedia.org/wiki/Young_symmetrizer
More generally, one can say that representations of any finite Weyl group can be constructed over the rational numbers. This is explained in one of Springer's papers: http://www.ams.org/mathscinet-getitem?mr=491988
Another reason is that one write down an explicit combinatorial formula for the values of these characters. This is the Murnaghan-Nakayama rule and can be found in many sources. One such source is Stanley's Enumerative Combinatorics volume 2, Section 7.17, and Section 7.18 for its connection to the symmetric group.
-
1
This points to an unusual property of symmetric groups and other Weyl groups which is much stronger than the fact that their characters take values in `$\mathbb{Z}$`: the representing matrices can actually be written over `$\mathbb{Q}$`. Of course, the price of this stronger statement is doing more work than just showing that character values (always algebraic integers) are rational. – Jim Humphreys Nov 21 2010 at 18:13
One way to prove that you get integers is to prove that the corresponding simple modules are defined over $\mathbb Z$ (this is of course much stronger than their having characters with values in $\mathbb Z$), and this is what you get by constructing them 'combinatorially'. This is done in G. D. James's book on the representation theory of symmetric groups, for example. There he even constructs actual matrices giving the action of elements of $S_n$ on the simple modules---the so called Young orthogonal form.
-
3
It has always seemed to be almost scary that we know so much of the symmetric groups that we can write down the representing matrices of its elements in all simple modules! – Mariano Suárez-Alvarez Jan 4 2010 at 0:02
The characters of any representation are always algebraic integers since they are sums of roots of unity. Over the symmetric group, every representation is defined over $\mathbb{Q}$, and the integers are integrally closed. Indeed, the irreducibles can be constructed by using Young projectors that use only rational numbers.
(Another way to see that every representation is defined over $\mathbb{Q}$ is that all conjugacy classes are rational; this is what Charles indicated in his answer.)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336663484573364, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/67162-transpose-forumla.html
|
# Thread: Transpose formula
1. ## Transpose formula.
I don't know if this is the correct topic but I'm new here so sorry if this isn't supposed to be posted here.
So anyways I've been stuck on these two questions for a while now so hopefully you lot can help me out here because I really need it.
Question 1
Te^2 = M^2 + T^2 <--- Make M the subject. I would be able to do this one but because the numbers are squared it's confused me.
Question 2
K = sq root ((a^2 + b^2)/12) <----- Make a the subject.
Please help I need it! It would also help if you showed me your working to help me with these problems in the future.
2. Originally Posted by DaBigBadBid
I don't know if this is the correct topic but i#m new here so sorry if this isn't supposed to be posted here.
So anyways iv'e been stuck on these two questions for a while now so hopefully you lot can help me out here because I really need it.
Question 1
Te^2 = M^2 + T^2 <--- Make M the subject. I would be able to do this one but because the numbers are squared it's confused me.
Question 2
K = sq root ((a^2 + b^2)/12) <----- Make a the subect.
Please help I need it! It would also help if you showed me your working to help me with these problems in the future.
I'm not quite sure what you mean by 'transpose'. Are T and M just normal algebraic variables? Or are they matrices?
3. Originally Posted by DaBigBadBid
I don't know if this is the correct topic but i#m new here so sorry if this isn't supposed to be posted here.
So anyways iv'e been stuck on these two questions for a while now so hopefully you lot can help me out here because I really need it.
Question 1
Te^2 = M^2 + T^2 <--- Make M the subject. I would be able to do this one but because the numbers are squared it's confused me.
That's a quadratic equation so you could use the quadratic formula. If, as Mush suggested, these are intended to be matrices, you can still use the quadratic formula although taking the square root of a matrix is a bit complicated.
Question 2
K = sq root ((a^2 + b^2)/12) <----- Make a the subect.
Please help I need it! It would also help if you showed me your working to help me with these problems in the future.
Square both sides to get rid of the square root: $K^2= (a^2+ b^2)/12$, so $a^2 + b^2 = 12K^2$. Now $a^2= 12K^2- b^2$ so $a= \sqrt{12K^2- b^2}$.
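Question 1 works the same way once you isolate $M^2$ (reading "Te^2" as a single squared quantity $T_e^2$): subtract $T^2$ from both sides of $T_e^2 = M^2 + T^2$ to get $M^2 = T_e^2 - T^2$, so $M = \pm\sqrt{T_e^2 - T^2}$.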
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537429809570312, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Correlated_equilibrium
|
# Correlated equilibrium
| Correlated equilibrium (a solution concept in game theory) | |
|-------------------------------------------------------------|---|
| Relationships | Superset of Nash equilibrium |
| Proposed by | Robert Aumann |
| Example | Chicken |
In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann (1974). The idea is that each player chooses his/her action according to his/her observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy (assuming the others don't deviate), the distribution is called a correlated equilibrium.
## Formal definition
An $N$-player strategic game $(N,A_i,u_i)$ is characterized by an action set $A_i$ and utility function $u_i$ for each player $i$. When player $i$ chooses strategy $a_i \in A_i$ and the remaining players choose a strategy profile described by the $(N-1)$-tuple $a_{-i}$, then player $i$'s utility is $u_i(a_i,a_{-i})$.
A "strategy modification" for player $i$ is a function $\displaystyle \phi : A_i \to A_i$. That is, $\displaystyle \phi$ tells player $i$ to modify his behavior by playing action $\displaystyle \phi(a_i)$ when instructed to play $\displaystyle a_i$.
Let $(\Omega, \pi)$ be a countable probability space. For each player $i$, let $P_i$ be his information partition, $q_i$ be $i$'s posterior, and let $s_i:\Omega\rightarrow A_i$ be player $i$'s strategy, assigning the same value to states in the same cell of $i$'s information partition. Then $((\Omega, \pi),P_i)$ is a correlated equilibrium of the strategic game $(N,A_i,u_i)$ if for every player $i$ and for every strategy modification $\phi$:
$\sum_{\omega \in \Omega} q_i(\omega)\,u_i(s_i(\omega), s_{-i}(\omega)) \geq \sum_{\omega \in \Omega} q_i(\omega)\,u_i(\phi(s_i(\omega)), s_{-i}(\omega))$
In other words, $((\Omega, \pi),P_i)$ is a correlated equilibrium if no player can improve his expected utility via a strategy modification.
## An example
| | Dare | Chicken out |
|-------------|------|-------------|
| Dare | 0, 0 | 7, 2 |
| Chicken out | 2, 7 | 6, 6 |

*A game of Chicken*
Consider the game of chicken pictured to the right. In this game two individuals are challenging each other to a contest where each can either dare or chicken out. If one is going to Dare, it is better for the other to chicken out. But if one is going to chicken out it is better for the other to Dare. This leads to an interesting situation where each wants to dare, but only if the other might chicken out.
In this game, there are three Nash equilibria. The two pure strategy Nash equilibria are (D, C) and (C, D). There is also a mixed strategy equilibrium where each player Dares with probability 1/3.
Now consider a third party (or some natural event) that draws one of three cards labeled: (C, C), (D, C), and (C, D), with the same probability, i.e. probability 1/3 for each card. After drawing the card the third party informs the players of the strategy assigned to them on the card (but not the strategy assigned to their opponent). Suppose a player is assigned D, he would not want to deviate supposing the other player played their assigned strategy since he will get 7 (the highest payoff possible). Suppose a player is assigned C. Then the other player will play C with probability 1/2 and D with probability 1/2. The expected utility of Daring is 0(1/2) + 7(1/2) = 3.5 and the expected utility of chickening out is 2(1/2) + 6(1/2) = 4. So, the player would prefer to Chicken out.
Since neither player has an incentive to deviate, this is a correlated equilibrium. Interestingly, the expected payoff for this equilibrium is 7(1/3) + 2(1/3) + 6(1/3) = 5 which is higher than the expected payoff of the mixed strategy Nash equilibrium.
## Learning correlated equilibria
One of the advantages of correlated equilibria is that they are computationally less expensive than are Nash equilibria. This can be captured by the fact that computing a correlated equilibrium only requires solving a linear program whereas solving a Nash equilibrium requires finding its fixed point completely.[1] Another way of seeing this is that it is possible for two players to respond to each other's historical plays of a game and end up converging to a correlated equilibrium.[2]
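To make the linear-programming formulation concrete, here is a sketch of the constraints for a two-player normal-form game (an illustration, not taken from the cited references). A probability distribution $p$ over action profiles is a correlated equilibrium exactly when, for every player $i$ and every pair of actions $a_i, a_i' \in A_i$,

$$\sum_{a_{-i} \in A_{-i}} p(a_i, a_{-i})\left[u_i(a_i, a_{-i}) - u_i(a_i', a_{-i})\right] \ge 0,$$

together with $p(a) \ge 0$ for all $a$ and $\sum_a p(a) = 1$. Every constraint is linear in the unknowns $p(a)$, so feasibility can be checked with a linear program. The distribution in the Chicken example above, probability 1/3 each on (D, C), (C, D) and (C, C), satisfies all of these inequalities.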
## References
1. Paul W. Goldberg and Christos H. Papadimitriou, "Reducibility Among Equilibrium Problems", ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, 2005.
2. Foster, Dean P and Rakesh V. Vohra, "Calibrated Learning and Correlated Equilibrium" Games and Economic Behaviour (1996)
• Aumann, Robert (1974) Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics 1:67-96.
• Aumann, Robert (1987) Correlated Equilibrium as an Expression of Bayesian Rationality. Econometrica 55(1):1-18.
• Fudenberg, Drew and Jean Tirole (1991) Game Theory, MIT Press, 1991, ISBN 0-262-06141-4
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary Introduction, San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-59829-593-1 . An 88-page mathematical introduction; see Section 3.5. Free online at many universities.
• Osborne, Martin J. and Ariel Rubinstein (1994). A Course in Game Theory, MIT Press. ISBN 0-262-65040-1 (a modern introduction at the graduate level)
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, New York: Cambridge University Press, ISBN 978-0-521-89943-7 . A comprehensive reference from a computational perspective; see Sections 3.4.5 and 4.6. Downloadable free online.
• Éva Tardos (2004) Class notes from Algorithmic game theory (note an important typo) [1]
• Iskander Karibzhanov. MATLAB code to plot the set of correlated equilibria in a two player normal form game
• Noam Nisan (2005) Lecture notes from the course Topics on the border of Economics and Computation (lowercase u should be replaced by u_i) [2]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9035037159919739, "perplexity_flag": "middle"}
|
http://catbear.wordpress.com/category/m208-pure-mathematics/
|
# TMA Machine
The adventures of a mathematics-loving Open University student
## Homomorphisms, group actions, and maths fatigue
Filed under: M208: Pure mathematics — 4 Comments
July 25, 2010
So it turns out that GTB2 (Homomorphisms) is quite a tough one as well! Wonderful stuff, just very taxing, especially in Section 4. I'm not sure why, but I had the worst trouble with exercises involving groups of complex numbers. It's bizarre, because I don't remember having much of a problem with complex numbers generally in the past, but having to find the kernels and images of isomorphisms from $\mathbb{C}$ to $\mathbb{C}$ seems to really trip me up. And even worse, the associated TMA question was full of that stuff! Hopefully there won't be as many $\mathbb{C}$ related questions on the exam…
GTB3, the unit about group actions, was quite gruelling too. I struggled like crazy with the exercises about group actions on 3D objects like cubes; I just seem to have no knack for visualising and mentally "rotating" the things! And that bit in the DVD segment about rings of coloured beads and chessboards had my head spinning. I still find group theory one of the most interesting topics in M208, and in maths in general, but I just wish I was a bit better at it!
Weirdly, I’ll be really quite glad when M208 is finished, which is something I absolutely couldn’t have imagined myself saying six months ago. Even though I’ve had slightly heavier course loads in the past (in terms of points/credits), I feel like this year has demanded the most actual effort, and to be honest I feel intellectually knackered! So once M208 is over and done with, I’ve decided to take a year or two out to recharge my batteries. Ideally I’d like to pick up my studies again once the maths courses have been shifted over to autumn starts (assuming that still goes ahead), as that kind of timeframe would be much more convenient for me. Or I’ll just buy the M381/M336 textbooks from the OUW shop and study them at my own pace. Probably the latter since it’s the cheapest option, given how financially tight things are likely to be for the next few years!
## Conjugacy
Filed under: M208: Pure mathematics
June 24, 2010
Wow, GTB1 Conjugacy was a lot more gruelling than I expected! I really thought that I’d sail through this unit, given how much I enjoyed Group Theory Block A, but I actually thought the middle sections of the unit contained some of the toughest material I’ve encountered so far. Perhaps I’m just very out of practice, but it really did seem like a step up in complexity – and in hindsight, I’m quite happy about that, because now that I’ve finished the unit I feel like I’ve had a little taste of more intense and esoteric group theory work. And I’m still interested in doing M336: Groups and geometry at some point in the future, so this unit can’t have been that bad!
I struggled a bit with the associated TMA questions, though. I tend to find anything involving visualising the symmetries of three-dimensional shapes quite difficult, and without giving too much away about that particular question, I can safely say that TMA05 gave me quite a workout in that respect. I also fell foul of my usual “misreading the question, and consequently trying to solve a much harder problem than it actually is” issue – I spent about an hour bashing my head against a particular part of Question 1, only to realise that I’d omitted one important word from my reading of the question, which meant that I could prove the required result in about 10 minutes. Arrgh! I hope I can manage a more careful reading of the exam questions than I seem to do for TMA questions!
So now I’m making a start on GTB2 Homomorphisms, which seems a bit more straightforward than GTB1 – at least so far! Who knows what lurks in the later sections…
## Continuity
Filed under: M208: Pure mathematics — 3 Comments
May 24, 2010
I finally finished Analysis Block A today, and oddly enough I’m a bit sad to see it go. Unit AA4: Continuity was a lot more fun than I expected, and I particularly enjoyed the section about the Intermediate Value Theorem. The bit in the DVD programme about antipodal points blew my mind – I was convinced that it was impossible to prove that you can always find two antipodal points on the Earth’s equator that have the same temperature, right up until they explained why it must be true. I even made Alex come over and watch that section; it impressed me that much! (The similar examples involving balancing a table on bumpy ground, and equally distributing sugar on pancakes, were lots of fun too.)
TMA04 is wrapped up now, and I’m proud to say that this will be my first ever LaTeX-produced assignment! Well, I should say LyX-produced, because no matter how much I tried to get to grips with LaTeX editors like TeXnicCenter, I just couldn’t get the hang of it, so I ended up resorting to a more WYSIWYG-style program instead.
I really enjoyed using LyX to type up this assignment, so I reckon I’ll be sticking with this approach in future (though I wouldn’t rule out going back to TeXnicCenter or a similar editor – perhaps when I’ve gotten thoroughly used to LyX itself). And I’d definitely recommend LyX for anyone who’s interested in moving away from word-processing assignments; I think LyX has just enough WYSIWYG elements to make it a relatively easy program to learn to use.
So now, at last, I can finally get on with Group Theory Block B – it’s been a long time, so let’s hope I haven’t forgotten everything from Block A!
## Sequences & Series
Filed under: M208: Pure mathematics
May 11, 2010
I found myself a bit underwhelmed by Unit AA2, to be honest. It was interesting, and I really enjoyed the part of the associated DVD program about approximating pi by taking exterior and interior polygons around a circle, but overall the unit didn’t quite grab me in the same way as the chapter in MS221 about sequences.
Unit AA3 was much more satisfying, and in a way it felt like AA2 was really just an introduction to the ideas we needed for AA3. For me, working with limits feels a lot more rewarding when it’s part of an attempt to prove that an infinite series converges/diverges. And the behaviour of infinite series is quite an interesting topic in itself – especially counter-intuitive results like the fact that $\sum_{n=1}^{\infty } 1/n^2$ is convergent but $\sum_{n=1}^{\infty } 1/n$ is divergent. I really do love results that make me go “but.. but how??” like that.
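A quick numerical look at the partial sums makes the contrast vivid. This is just a throwaway Python sketch (nothing to do with the course materials): the $1/n^2$ sums settle down near $\pi^2/6 \approx 1.645$, while the $1/n$ sums keep creeping upwards without bound.

```python
# Partial sums of sum 1/n^2 (convergent) versus sum 1/n (divergent, but very slowly).
def partial_sum(f, terms):
    """Sum f(n) for n = 1, ..., terms."""
    return sum(f(n) for n in range(1, terms + 1))

for terms in (10, 1_000, 100_000):
    s_converge = partial_sum(lambda n: 1.0 / n**2, terms)  # heads towards pi^2/6 ~ 1.6449
    s_diverge = partial_sum(lambda n: 1.0 / n, terms)      # grows roughly like ln(terms)
    print(f"{terms:>7} terms: sum 1/n^2 = {s_converge:.5f}, sum 1/n = {s_diverge:.3f}")
```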
Still, even though AA3 was more enjoyable than AA2, I think the Analysis Block A has been a bit of a let-down compared to the first half of the course. I’m quite looking forward to unit AA4, as it’s about continuity, and I’m quite eager to find out how on earth continuity can be formally defined (since the only definition I’ve ever come across is the informal “you can draw the graph without picking your pen up” one), but my main aim at the moment is to make it through Analysis Block A so that I can enjoy the sweet, sweet goodness of Group Theory Block B. There are only three units of it, but I’m going to savour every page!
## Numbers
Filed under: M208: Pure mathematics
April 15, 2010
I finally finished the TMA question associated with Unit AA1: Numbers today, and it’s quite a relief! I feel like I’ve been working on AA1 for absolutely ages. It’s not a bad unit, by any means, but for some reason my motivation levels have been a lot lower than usual lately, so I’ve barely been putting any time into M208 for the last few weeks.
The trickiest bit of AA1, for me, was the whole Proving Inequalities section. I love proof by induction, I really do, but I seemed to have a great deal of trouble with all the exercises involving it in AA1. I’ve also realised that if the exercise doesn’t explicitly state which method of proof I should use, I generally opt for entirely the wrong choice. I spent ages agonising over a particular problem that I was trying to solve using induction, and then realised when I finally relented and re-read the appropriate bit of the unit, that it was actually possible to give a direct proof fairly easily. I’m pretty good at over-complicating things, it seems!
I got TMA02 back through the post this week, which is probably what gave me the impetus I needed to finally finish this unit and the TMA question about it. I managed 92% on TMA02, which I’m pretty much okay with – I wouldn’t have been surprised if it had come back much lower than that, since I generally feel like a bit of an imposter on this course.
Anyway, now it’s time to make a start on Unit AA2: Sequences. I remember really enjoying the bits about sequences in MS221, so hopefully this unit won’t take me another three weeks to work through!
## Eigenvectors
Filed under: M208: Pure mathematics — 1 Comment
March 28, 2010
Hyperboloid by fdecomite
I finally finished the Linear Algebra block this week, which I’m very happy about. It’s not that I didn’t enjoy it, but I feel like I’ve been working through it for years, not months!
The final unit was about eigenvectors, with a big section about using eigenvectors to recognise the various kinds of non-degenerate conics and quadric surfaces from their equations. It was a bit taxing, but I really enjoyed that section. I find it weirdly satisfying to go through the big drawn-out process of writing the equation in matrix form, diagonalising the matrix, changing coordinate systems, etc, because when you’ve finished all the detective work you get to shout “It’s an elliptic paraboloid!” or whatever it turns out to be. (I promise I won’t actually shout this in the exam.)
Hyperbolic paraboloids by fish2000
I think quadric surfaces are just lovely to look at and think about, quite apart from the fun of identifying them via eigenvectors. I especially like hyperboloids, like the one pictured above, and hyperbolic paraboloids like the ones pictured to the right. Apparently Pringles crisps are also in the shape of hyperbolic paraboloids, so they’re definitely my favourite quadrics! If I was a tutor, I’d make sure to bring several tubes of Pringles to the Linear Algebra day school – I’m sure being bribed with crisps would make even the toughest bit of matrix algebra more pleasant!
Pringles: a legitimate part of any linear algebra study session.
## Linear transformations
Filed under: M208: Pure mathematics — 2 Comments
March 16, 2010
Unit LA4 has been the most challenging one yet for me. I’ve been working through it on and off for a few weeks now, and it’s been heavy going! Still, the topic itself is very interesting, much more so than the material on linear transformations in MS221.
I think the main reason for that is the fact MS221 concentrates on linear transformations of the plane, so I’m used to thinking of linear transformations as just being rotations, reflections, translations. Now, I love group theory and I love working with symmetries, but M208 has shown me that linear transformations are much more than that.
I mean, we’re all used to linear transformations that map points in $\mathbb{R}^2$ to other points in $\mathbb{R}^2$, that’s basically what we got in MS221. And it’s fairly straightforward to imagine mapping points in $\mathbb{R}^3$, say, to points in $\mathbb{R}^2$ – like projecting the 2D shadow of a 3D object, or the 3D shadow of a 4D hypercube. But mapping a point in $\mathbb{R}^n$ to a polynomial in $P_n$? That’s crazy!
And then there’s the wonderful Dimension Theorem:
$\textup{Let } t:V \rightarrow W \textup{ be a linear transformation. Then}$
$\dim \textup{Im}(t) + \dim \ker (t) = \dim V.$
Very simple to state, and very useful if you know two out of the three dimensions it refers to. Perhaps I’m easily amused, but I love the fact that if the image of t and the domain of t have the same dimension, then we know that the kernel of t must be the set containing only the zero vector. And if the kernel only contains the zero vector, then we also know that t is one-one! Awesome!
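To make that concrete, here’s a tiny example of my own (not one taken from the unit): take the projection $t:\mathbb{R}^3 \rightarrow \mathbb{R}^2$ given by $t(x,y,z)=(x,y)$. Then
$\textup{Im}(t)=\mathbb{R}^2, \qquad \ker(t)=\{(0,0,z): z \in \mathbb{R}\},$
so $\dim \textup{Im}(t) + \dim \ker(t) = 2 + 1 = 3 = \dim \mathbb{R}^3$, just as the theorem promises. And because the kernel here is bigger than $\{\mathbf{0}\}$, the theorem tells us straight away that $t$ can’t be one-one.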
Given how much trouble I had wrapping my head around the various results and strategies in this unit, I was expecting the associated TMA question to be quite gruelling, but it turned out to be fairly manageable. Not easy, but not as fiendish as they could have made it, I think. Or perhaps its fiendishness is subtle and will only reveal itself once I get my marked assignment back…
Anyway, this week I’ll be moving on to Unit LA5, the final linear algebra unit. It’s called Eigenvectors, but I’ve had a peek at the related TMA question, and that seems to be about quadrics – I wonder what the relationship between eigenvectors and classifying quadrics is. Hopefully it won’t take me another month to work through this unit and find out!
## Vector spaces
Filed under: M208: Pure mathematics — 3 Comments
February 23, 2010
Longest. Unit. Ever. Or at least it feels like it, because I’ve been working through Unit LA3 on and off for about a month now! I was out of action with a cold for a couple of weeks in the middle, and when I came back to it I found that whatever understanding of vector spaces I had in the first place had completely disappeared. Perhaps I sneezed out the relevant brain cells…
I think the thing I had trouble with the most was spanning sets – particularly proving that a particular set spans a given vector space. I found the Vector Spaces chapter of Paul Dawkins’ excellent Linear Algebra class notes really helpful (though of course you’ve got to beware of the different notation used – I don’t want to incur the wrath of my tutor by using non-M208 notation!).
Linear independence took a bit to sink in, but I think I’ve just about got the hang of it, and bizarrely enough I found the later sections about orthogonal and orthonormal bases much easier than the earlier stuff. I’m quite intrigued by the remark at the end of the unit about orthogonality and polynomials – the idea of two polynomials being orthogonal to each other is very hard for me to get my head around, but I’d be interested in doing some more work on it. I wonder if it will turn up in any of the level 3 pure maths courses, or whether it’s more of an “applied” topic. The unit also mentioned that orthogonal polynomials are important in mathematical physics, so perhaps Alex will end up encountering them in one of his future courses.
In other news, I got my marked TMA01 Part 1 back last week, which came in at a nice 91%. I just hope I can manage the same kind of mark for Part 2. I didn’t get a great deal of feedback on Part 1 (which is understandable, since it’s only two questions), but my tutor did make a vague suggestion that I would be better off not word-processing my assignments in future. Unfortunately for her, I’ve already got TMA02 word-processed and printed out, ready to go! To be honest, I much prefer word-processing maths assignments – I hand-wrote all my TMAs for MST121, and had to scan or photocopy each one if I wanted a back-up copy in case the original got lost in the post. Very tedious indeed! So I’m not keen to go back to hand-writing assignments any time soon – they’ll have to pry my copy of MathType Lite from my cold, dead hands!
## Linear equations and matrices
Filed under: M208: Pure mathematics
February 1, 2010
Now this is more like it! I wasn’t very impressed with unit LA1, but LA2 was much more my cup of tea. I enjoy working with matrices, especially using row reduction to solve systems of simultaneous equations. I love being able to quickly find a solution for what initially looks like an intimidating monster of a system, just by using the row reduction strategy. Unfortunately there’s lots of room for silly arithmetic errors when doing row-reduction, so I’m finding that I need to double-, triple- and quadruple-check my work at the moment.
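One handy way to catch those slips is to check the final answer by computer. Here’s a purely illustrative Python/NumPy sketch (nothing to do with the course software) that solves a small made-up system, so a hand row-reduction can be compared against it:

```python
import numpy as np

# A small made-up system:  x + 2y - z = 3,   2x - y + 3z = 9,   -x + y + 2z = 2
A = np.array([[ 1.0,  2.0, -1.0],
              [ 2.0, -1.0,  3.0],
              [-1.0,  1.0,  2.0]])
b = np.array([3.0, 9.0, 2.0])

x = np.linalg.solve(A, b)     # unique solution, since A is invertible
print(x)                      # the solution vector (x, y, z)
print(np.allclose(A @ x, b))  # True: the computed solution satisfies all three equations
```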
I’m not sure why it never occurred to me before, but during LA2 I found myself suddenly wondering who had invented matrix methods for solving simultaneous equations, and I was really surprised to find that the technique has been known for around 2000 years! That’s pretty mindblowing, I think.
I was a bit disappointed with the shortness of the question on LA2 in TMA03 – the questions on LA1 were worth 20 marks altogether, and the question on LA3 is worth 25 marks, but the LA2 question is a relatively modest 10 marks. I was looking forward to a bit more of a substantial workout, but I guess that the matrix-related material is mostly just revision of topics from MS221 and MST121, so I can’t really complain that they didn’t spend enough time assessing it.
The next unit, LA3, is about vector spaces – which I know absolutely nothing about, so I’m really looking forward to it! Who knows, it could even be a topic that becomes one of my favourite areas of maths, like group theory suddenly did towards the end of MS221. That’s one of the things I love about studying, and particularly about studying maths; there’s so much wonderful stuff out there that I currently don’t even know exists, just waiting to be explored!
## Vectors and conics
Filed under: M208: Pure mathematics — 1 Comment
January 23, 2010
It looks like the M208 honeymoon is finally over – I actually didn’t enjoy Unit LA1 very much at all. It was a bit of a strange unit, made up of an introduction to coordinate geometry, a couple of sections on vectors, and then a bit about conics. Perhaps I’m missing something, but there didn’t really seem to be much connection between the vector bits and the conics stuff, so it felt a bit weird to suddenly go from learning about dot products to messing about with parabolas.
I found it quite hard to get back into working with conics, even though they were covered in almost as much detail in MS221 last year; in fact, oddly enough, I had less trouble getting used to vectors again, and the last time I studied those was in 2007! I also made a complete mess of all the “draw the intersection of these two planes in $\mathbb{R}^3$” exercises – you’d think that drawing what are essentially two overlapping parallelograms would be fairly straightforward, but my sketches end up looking like Cubist interpretations of the planes. Hopefully there won’t be a sketching-planes-in-$\mathbb{R}^3$ question in the exam…
Anyway, I’m looking forward to Unit LA2 – it’s about matrices, so hopefully it will be more my kind of thing!
http://en.wikipedia.org/wiki/Combination
# Combination
"Combin" redirects here. For the mountain massif, see Grand Combin.
For other uses, see Combination (disambiguation).
In mathematics a combination is a way of selecting several things out of a larger group, where (unlike permutations) order does not matter. In smaller cases it is possible to count the number of combinations. For example given three fruit, say an apple, orange and pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally a k-combination of a set S is a subset of k distinct elements of S. If the set has n elements the number of k-combinations is equal to the binomial coefficient
$\binom nk = \frac{n(n-1)\ldots(n-k+1)}{k(k-1)\dots1},$
which can be written using factorials as $\frac{n!}{k!(n-k)!}$ whenever $k\leq n$, and which is zero when $k>n$. The set of all k-combinations of a set S is sometimes denoted by $\binom Sk\,$.
Combinations can refer to the combination of n things taken k at a time without or with repetitions.[1] In the above example repetitions were not allowed. If however it was possible to have two of any one kind of fruit there would be 3 more combinations: one with two apples, one with two oranges, and one with two pears.
With large sets, it becomes necessary to use more sophisticated mathematics to find the number of combinations. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
Knuth gives a thorough treatment of this topic in The Art of Computer Programming.[2]
## Number of k-combinations
3-element subsets of a 5-element set
Main article: Binomial coefficient
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a variation such as $C^n_k$, ${}_nC_k$, ${}^nC_k$ or even $C_n^k$ (the latter form is standard in French, Russian, and Polish texts[citation needed]). The same number however occurs in many other mathematical contexts, where it is denoted by $\tbinom nk$ (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define $\tbinom nk$ for all natural numbers k at once by the relation
$\textstyle(1+X)^n=\sum_{k\geq0}\binom nk X^k,$
from which it is clear that $\tbinom n0=\tbinom nn=1$ and $\tbinom nk=0$ for k > n. To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables $X_s$ labeled by the elements s of S, and expand the product over all elements of S:
$\textstyle\prod_{s\in S}(1+X_s);$
it has $2^n$ distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables $X_s$. Now setting all of the $X_s$ equal to the unlabeled variable X, so that the product becomes $(1 + X)^n$, the term for each k-combination from S becomes $X^k$, so that the coefficient of that power in the result equals the number of such k-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to $(1 + X)^n$, one can use (in addition to the basic cases already given) the recursion relation
$\binom nk=\binom{n-1}{k-1}+\binom{n-1}k,\text{ for }0<k<n,$
which follows from $(1 + X)^n = (1 + X)^{n-1}(1 + X)$; this leads to the construction of Pascal's triangle.
For determining an individual binomial coefficient, it is more practical to use the formula
$\binom nk = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}.$
The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored.
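As an illustration of this product formula (a short Python sketch, not part of the original article), the coefficient can be computed as a falling product of k factors divided by k!:

```python
from math import comb, factorial

def binomial(n, k):
    """C(n, k) computed as n(n-1)...(n-k+1) / k!, following the formula above."""
    if k < 0 or k > n:
        return 0
    numerator = 1
    for i in range(k):            # k factors: n, n-1, ..., n-k+1
        numerator *= n - i
    return numerator // factorial(k)   # the division is exact

print(binomial(52, 5))                 # 2598960, the number of 5-card poker hands
print(binomial(52, 5) == comb(52, 5))  # True: agrees with the standard library
```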
When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
$\binom nk = \binom n{n-k},\text{ for }0 \le k \le n.$
This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
$\binom nk = \frac{n!}{k!(n-k)!},$
where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly inferior as a method of computation to that formula.
The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
$\binom nk = \binom n{k-1} \frac {n-k+1}k,\text{ for }k>0$,
$\binom nk = \binom {n-1}k \frac n{n-k},\text{ for }{k<n}$,
$\binom nk = \binom {n-1}{k-1} \frac nk,\text{ for }n,k>0$.
Together with the basic cases $\tbinom n0=1=\tbinom nn$, these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size n − k.
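In code, the first of these relations gives a cheap way to generate a whole row of Pascal's triangle, one multiplication and one exact division per entry (again only an illustrative Python sketch):

```python
def pascal_row(n):
    """Row n of Pascal's triangle, using C(n, k) = C(n, k-1) * (n - k + 1) / k."""
    row = [1]                                   # C(n, 0) = 1
    for k in range(1, n + 1):
        row.append(row[-1] * (n - k + 1) // k)  # exact integer division at every step
    return row

print(pascal_row(5))   # [1, 5, 10, 10, 5, 1]
print(pascal_row(6))   # [1, 6, 15, 20, 15, 6, 1]
```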
### Example of counting combinations
As a concrete example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
${52 \choose 5} = \frac{52\times51\times50\times49\times48}{5\times4\times3\times2\times1} = \frac{311,875,200}{120} = 2,598,960.$
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
$\begin{alignat}{2}{52 \choose 5} &= \frac{52!}{5!47!} \\ &= \frac{52\times51\times50\times49\times48\times\cancel{47!}}{5\times4\times3\times2\times\cancel{1}\times\cancel{47!}} \\ &= \frac{52\times51\times50\times49\times48}{5\times4\times3\times2} \\ &= \frac{(26\times\cancel{2})\times(17\times\cancel{3})\times(10\times\cancel{5})\times49\times(12\times\cancel{4})}{\cancel{5}\times\cancel{4}\times\cancel{3}\times\cancel{2}} \\ &= {26\times17\times10\times49\times12} \\&= 2,598,960.\end{alignat}$
Another alternative computation, almost equivalent to the first, is based on writing
${n \choose k} = \frac { ( n - 0 ) }1 \times \frac { ( n - 1 ) }2 \times \frac { ( n - 2 ) }3 \times \cdots \times \frac { ( n - (k - 1) ) }k,$
which gives
${52 \choose 5} = \frac{52}1 \times \frac{51}2 \times \frac{50}3 \times \frac{49}4 \times \frac{48}5 = 2,598,960.$
When evaluated as 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason that all divisions are without remainder is that the intermediate results they produce are themselves binomial coefficients.
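That observation translates directly into code; the following Python sketch (an illustration, not a reference implementation) performs the multiplications and divisions in that order and prints the intermediate values, each of which is itself a binomial coefficient:

```python
def choose_stepwise(n, k):
    """Evaluate C(n, k) as n/1 * (n-1)/2 * ... * (n-k+1)/k using only integers."""
    result = 1
    for i in range(1, k + 1):
        result = result * (n - i + 1) // i      # exact: the running value is C(n, i)
        print(f"after step {i}: {result}")
    return result

choose_stepwise(52, 5)
# after step 1: 52
# after step 2: 1326
# after step 3: 22100
# after step 4: 270725
# after step 5: 2598960
```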
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
$\begin{align} {52 \choose 5} &= \frac{n!}{k!(n-k)!} = \frac{52!}{5!(52-5)!} = \frac{52!}{5!47!} \\ &= \tfrac{80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000}{120\times258,623,241,511,168,180,642,964,355,153,611,979,969,197,632,389,120,000,000,000} \\ &= 2,598,960. \end{align}$
### Enumerating k-combinations
One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of $\tbinom nk$ integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = {1,2, ...,n}, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.[3][4]
There are many ways to enumerate k-combinations. One way is to visit all the binary numbers less than $2^n$ and choose those numbers having exactly k nonzero bits. The positions of the 1 bits in such a number give a specific k-combination of the set {1,...,n}.[5]
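A direct, if inefficient for large n, Python rendering of that bit-counting idea (offered purely as an illustration):

```python
def k_combinations_by_bitmask(n, k):
    """Enumerate the k-combinations of {1, ..., n} by scanning the bitmasks 0 .. 2^n - 1."""
    combos = []
    for mask in range(2 ** n):
        if bin(mask).count("1") == k:           # keep masks with exactly k set bits
            combos.append([i + 1 for i in range(n) if mask & (1 << i)])
    return combos

print(k_combinations_by_bitmask(4, 2))
# [[1, 2], [1, 3], [2, 3], [1, 4], [2, 4], [3, 4]]   (C(4, 2) = 6 combinations)
```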
## Number of combinations with repetition
See also: Multiset coefficient
Bijection between 3-subsets of a 7-set (left)
and 3-multisets with elements from a 5-set (right)
So this illustrates that $\textstyle {7 \choose 3} = \left(\!\!{5 \choose 3}\!\!\right)$.
A k-combination with repetitions, or k-multicombination, or multiset of size k from a set S is given by a sequence of k not necessarily distinct elements of S, where order is not taken into account: two sequences of which one can be obtained from the other by permuting the terms define the same multiset. In other words, it is a way of sampling k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). If S has n elements, the number of such k-multicombinations is also given by a binomial coefficient, namely one expressed with a rising rather than a falling product of factors:
$\left(\!\!\!\binom{n}{k}\!\!\!\right)=\binom{n+k-1}{k} = \frac{(n+k-1)!}{(n-1)!\,k!}={n(n+1)(n+2)\cdots(n+k-1)\over k!},$
or, setting r = n and f = r + (k-1), so that r = f - (k-1),
$\left(\!\!\!\binom{r}{k}\!\!\!\right)=\left(\!\!\!\binom{k+1}{r-1}\!\!\!\right)=\frac{(r+k-1)!}{(r-1)!\,k!} =\frac{f!}{(f-k)!\,k!}=\binom{f}{f-k}=\binom{f}{k},$
$=\frac{1\cdot2\cdot3\cdots(r-1)\;\cdot r\!\cdots(f-1)f} {1\cdot2\cdot3\cdots(r-1)\;\times1\cdot2\cdot3\cdots k\;}= {r(r+1)\cdots(f-1)f\over1\cdot2\cdot3\cdots k};$
(the case where both r and k are zero is special; the correct value 1 (for the empty 0-multicombination) is given by left hand side $\tbinom{-1}0$, but not by the right hand side $\tbinom{-1}{-1}$). This follows from a clever representation of such combinations with just two symbols (see Stars and bars (combinatorics)).
### Example of counting multicombinations
For example, if you have ten types of donuts (n = 10) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose can be calculated as
$\left(\!\!\!\binom{10}{3}\!\!\!\right) = \binom{10+3-1}3 = \binom{12}{3} = \frac{10\times11\times12}{1\times2\times3} = 220.$
The analogy with the k-combination case can be stressed by writing the numerator as a rising power
$\binom{n + k - 1}{k} = \frac{n(n+1)\cdots(n+k-1)}{k!}.$
There is an easy way to understand the above result. Label the elements of S with numbers 0, 1, ..., n − 1, and choose a k-combination from the set of numbers { 1, 2, ..., n + k − 1 } (so that there are n − 1 unchosen numbers). Now change this k-combination into a k-multicombination of S by replacing every (chosen) number x in the k-combination by the element of S labeled by the number of unchosen numbers less than x. This is always a number in the range of the labels, and it is easy to see that every k-multicombination of S is obtained for one choice of a k-combination.
A concrete example may be helpful. Suppose there are 4 types of fruits (apple, orange, pear, banana) at a grocery store, and you want to buy 12 pieces of fruit. So n = 4 and k = 12. Use label 0 for apples, 1 for oranges, 2 for pears, and 3 for bananas. A selection of 12 fruits can be translated into a selection of 12 distinct numbers in the range 1,...,15 by selecting as many consecutive numbers starting from 1 as there are apples in the selection, then skip a number, continue choosing as many consecutive numbers as there are oranges selected, again skip a number, then again for pears, skip one again, and finally choose the remaining numbers (as many as there are bananas selected). For instance for 2 apples, 7 oranges, 0 pears and 3 bananas, the numbers chosen will be 1, 2, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15. To recover the fruits, the numbers 1, 2 (not preceded by any unchosen numbers) are replaced by apples, the numbers 4, 5, ..., 10 (preceded by one unchosen number: 3) by oranges, and the numbers 13, 14, 15 (preceded by three unchosen numbers: 3, 11, and 12) by bananas; there are no chosen numbers preceded by exactly 2 unchosen numbers, and therefore no pears in the selection. The total number of possible selections is
$\binom{4+12-1}{12} = \left(\!\!\!\binom{4}{12}\!\!\!\right) = \binom{15}{12} = \left(\!\!\!\binom{13}{3}\!\!\!\right) = \binom{15}{3} = \frac{13\times14\times15}{1\times2\times3} = 455.$
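The translation just described is mechanical enough to code; the Python sketch below (illustrative only, with the fruit example hard-coded) maps a multiset of choices to the corresponding set of distinct numbers and double-checks the count of 455 by brute force:

```python
from math import comb
from itertools import combinations_with_replacement

def multiset_to_combination(counts):
    """Encode a multiset (a list of counts, one per type) as distinct numbers in
    {1, ..., n + k - 1}: consecutive chosen numbers per type, one skipped number between types."""
    chosen, next_number = [], 1
    for i, c in enumerate(counts):
        chosen.extend(range(next_number, next_number + c))   # c consecutive chosen numbers
        next_number += c
        if i < len(counts) - 1:
            next_number += 1                                  # skip one separator number
    return chosen

print(multiset_to_combination([2, 7, 0, 3]))
# [1, 2, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15]   (the fruit example above)

n, k = 4, 12
print(comb(n + k - 1, k))                                           # 455
print(sum(1 for _ in combinations_with_replacement(range(n), k)))   # 455, by brute force
```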
## Number of k-combinations for all k
The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is $2^n$. In terms of combinations, $\sum_{0\leq{k}\leq{n}}\binom nk = 2^n$, which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to $2^n - 1$, where each digit position is an item from the set of n.
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set:
$| \{ \{\} ; \{1\} ; \{2\} ; \{3\} ; \{1, 2\} ; \{1, 3\} ; \{2, 3\} ; \{1, 2, 3\} \}| = 2^3 = 8$
Representing these subsets (in the same order) as base 2 numbers:
0 - 000
1 - 001
2 - 010
4 - 100
3 - 011
5 - 101
6 - 110
7 - 111
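The same correspondence takes only a couple of lines of Python (a throwaway sketch; it walks the bitmasks in numerical order 0 to 7):

```python
def subsets_by_binary(n):
    """All subsets of {1, ..., n}, reading each number 0 .. 2^n - 1 as a bitmask."""
    return [{i + 1 for i in range(n) if mask & (1 << i)} for mask in range(2 ** n)]

for mask, subset in enumerate(subsets_by_binary(3)):
    print(f"{mask} - {mask:03b} - {sorted(subset)}")   # 8 lines, confirming 2^3 = 8 subsets
```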
## Probability: sampling a random combination
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of $\frac{k-\mathrm{\#\,samples\ chosen}}{n-\mathrm{\#\,samples\ visited}}$.
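This is the classic selection-sampling idea (often attributed to Knuth); the Python sketch below is only an illustration of it, not code taken from a reference:

```python
import random

def random_combination(population, k):
    """Pick a uniformly random k-combination in a single pass: keep each element with
    probability (k - number chosen so far) / (n - number visited so far)."""
    n, chosen = len(population), []
    for visited, item in enumerate(population):
        remaining_slots = k - len(chosen)
        remaining_items = n - visited
        if random.random() < remaining_slots / remaining_items:
            chosen.append(item)
    return chosen

print(random_combination(list(range(1, 53)), 5))   # e.g. a random 5-card hand from a 52-card deck
```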
## References
1. Erwin Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, INC, 1999
2. Donald E. Knuth, The Art of Computer Programming, Addison-Wesley.
http://mathoverflow.net/questions/99991/atiyah-bott-shapiro-orientation
## Atiyah-Bott-Shapiro Orientation
Dear community,
there are so-called orientation maps $a:MSpin\to ko$ and $b:MSpin^c \to k$, "defined" in ABS's paper "Clifford modules". Unfortunately I am not familiar with representation theory.
Let $c:MSpin \to MSpin^c$ resp. $d:ko \to k$ denote the obvious maps given by considering a spin- as a spin$^c$-manifold resp. complexification.
Is it true that $b\circ c=d\circ a$?
-
## 1 Answer
A point in the $n$th space of $MSpin$ is an $n$-dimensional manifold equipped with a spin structure. In other words, it is a manifold equipped with a bundle of bimodules between the Clifford algebra of $\mathbb R^n$ and the Clifford algebra of $TM$. That bundle of bimodules is called the spinor bundle of $M$.
Similarly, a point in the $n$th space of $MSpin^c$ is an $n$-dimensional manifold equipped with a bundle of bimodules between the Clifford algebra of $\mathbb C^n$ and the Clifford algebra of $TM\otimes_{\mathbb R}\mathbb C$.
The map $MSpin\to MSpin^c$ is given by complexifying the bimodule.
A point in the $n$th space of $KO$ (allow me to take non-connective $K$-theory - the result for connective $K$-theory then follows readily) is given by a real Hilbert space equipped with an action of $Cliff(\mathbb R^n)$, and an odd skew-adjoint Clifford-linear Fredholm operator.
Similarly, a point in the $n$th space of $K$ is given by a complex Hilbert space with an action of $Cliff(\mathbb C^n)$, and an odd skew-adjoint Clifford-linear Fredholm operator.
The map $KO\to K$ is again complexification.
The ABS orientation (which is not constructed in ABS) sends a spin manifold $M$, now also equipped with a metric, to the Hilbert space of $L^2$ sections of the spinor bundle, equipped with the obvious $Cliff(\mathbb R^n)$-action. The Fredholm operator is the Dirac operator constructed from (the connection associated to) the metric and the $Cliff(TM)$-action.
The spin-c version is identical. It is then obvious by construction that the diagram you asked about is commutative.
I should say that the above argument is completely hand-wavy...
I actually don't know in which paper/textbook the ABS orientation is defined as a map of spectra (and I would like to know -- so if someone knows, please tell me). My guess is that, regardless of the approach taken, once you see the definition, it is completely obvious that the diagram is commutative.
-
1
In A symmetric ring spectrum representing KO-theory, Topology 40 (2001), 299-308, Michael Joachim says that he gives an explicit model of $MSpin\to KO$ in his thesis. I have not seen his thesis; but you might want to ask him about it. – Charles Rezk Jun 20 at 16:09
http://physics.stackexchange.com/questions/8745/does-the-casimir-effect-allow-to-change-the-lifetime-of-a-radiating-atom?answertab=votes
# Does the Casimir effect allow to change the lifetime of a radiating atom?
Is it true that a spontaneously light-emitting atom changes its lifetime if it is put between two parallel plates that are so near that they attract each other through the Casimir effect?
Thus: does the Casimir effect, and the changes in the vacuum state it induces, influence spontaneous emission?
If so, who measured this effect for the first time? Where can I read about it?
-
## 2 Answers
This is true. The simple explanation is this: For calculating the decay rate of an excited state, you use Fermi's Golden Rule, which involves the matrix element $$|\langle f | V | i \rangle|^2$$ where $f$ and $i$ denote the final and initial state, respectively.
Since the final state contains the electron in its ground state together with a photon created by this decay, the nature of your cavity determines what the matrix element will be: for example, if your cavity forbids standing waves of the emission frequency, decay is suppressed. The study of these effects goes under the name of Cavity QED.
I would not say that this is due to the Casimir effect. Rather, this effect and the Casimir effect are both due to the boundary conditions created by your plates.
I don't exactly know who first studied this. I suggest consulting a review article such as this one.
-
The presence of something else next to an excited atom influences the lifetime of the excited state. Any such presence is described by some additional interaction energy.
In the case of cavity QED, you can get suppression of the radiation rate: $e^{-\gamma_1 t}$ with a smaller $\gamma_1$, due to suppression of the corresponding part of the electromagnetic spectrum of the whole system.
In the case of a neighboring similar atom in the ground state, you can obtain a decrease of the lifetime due to an additional independent channel of excitation deactivation, for example a resonance transition. So the resulting decay rate will be $e^{-(\gamma_1+\gamma_2) t}$.
-
http://nrich.maths.org/5018
### Geoboards
This practical challenge invites you to investigate the different squares you can make on a square geoboard or pegboard.
### Polydron
This activity investigates how you might make squares and pentominoes from Polydron.
### Multilink Cubes
If you had 36 cubes, what different cuboids could you make?
# Intersection Sums Sudoku
by Henry Kwok
### The Rules of "Intersection Sums Sudoku"
Like the standard Sudoku, this Sudoku variant consists of a grid of nine rows and nine columns subdivided into nine 3$\times$3 subgrids. Like the standard Sudoku, it has two basic rules:
1. Each column, each row, and each box (3$\times$3 subgrid) must have the numbers 1 to 9.
2. No column, row or box can have two squares with the same number.
Like other Sudokus published by NRICH, this puzzle can be solved with the help of the numbers in the top parts of certain squares. These numbers are the sums of the digits in all the squares horizontally and vertically adjacent to the square.
#### A Short Demonstration
The square in the bottom left corner of this Sudoku contains the number 3. 3 is the sum of the digits in the two adjacent squares, which therefore must contain the digits 1 and 2.
In the beginning, we do not know whether we should put 1 or 2 in the square (8,1) or in the square (9,2). If we put 1 in the square (9,2) and 2 in the square (8,1), we have to put 3 in the square (8,3) and 2 in the square (9,4) because of the small clue-number 6 in the square (9,3). If we put 2 in the square (9,2) and 1 in the square (8,1), we still have to put 3 in the square (8,3) and 1 in the square (9,4). We find that 3 will go to the square (8,3) regardless of where we put the rest of the numbers.
At least the answer for one square is confirmed. That's not too bad after all. Sooner or later, we shall be able to obtain the answers for the squares (8,1), (9,2) and (9,4) as we try to solve the rest of the puzzle.
http://mathoverflow.net/questions/24506/ultrafilters-arising-from-keisler-shelah-ultrapower-characterisation-of-elementar/24554
## Ultrafilters arising from Keisler-Shelah ultrapower characterisation of elementary equivalence
In model theory, two structures $\mathfrak{A}, \mathfrak{B}$ of identical signature $\Sigma$ are said to be elementarily equivalent ($\mathfrak{A} \equiv \mathfrak{B}$) if they satisfy exactly the same first-order sentences w.r.t. $\Sigma$. An astounding theorem giving an algebraic characterisation of this notion is the so-called Keisler-Shelah isomorphism theorem, proved originally by Keisler (assuming GCH) and then by Shelah (avoiding GCH), which we state in its modern strengthening (saying that only a single ultrafilter is needed):
$\mathfrak{A} \equiv \mathfrak{B} \ \iff \ \exists \mathcal{U} \text{ s.t. } (\Pi_{i\in\mathcal{I}} \ \mathfrak{A})/\mathcal{U} \cong (\Pi_{i\in\mathcal{I}} \ \mathfrak{B})/\mathcal{U},$
where $\mathcal{U}$ is a non-principal ultrafilter on, say, $\mathcal{I} = \mathbb{N}$. That is, two structures are elementarily equivalent iff they have isomorphic ultrapowers.
My question is the following (admittedly rather vague): Does anyone know of constructions in which an ultrafilter is chosen by an appeal to this characterisation and then used for other means? An example of what I have in mind would be something like this (using the fact that any two real closed fields are elementarily equivalent w.r.t. the language of ordered rings): In order to perform some construction $C$ I "choose" a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ by specifying it as a witness to the following isomorphism induced by Keisler-Shelah:
$\mathbb{R}^\mathbb{N}/\mathcal{U} \cong \mathbb{R}_{alg}^\mathbb{N}/\mathcal{U},$
where $\mathbb{R}_{alg}$ is the field of real algebraic numbers. So the construction $C$ should be dependent upon the fact that $\mathcal{U}$ is a non-principal ultrafilter bearing witness to the Keisler-Shelah isomorphism between some ultrapower of the reals and the algebraic reals, resp.
Also, a follow-up question: Let's say I'd like to "solve" the above isomorphism for $\mathcal{U}$. Are there interesting things in general known about the solution space, e.g., the set of all non-principal ultrafilters bearing witness to the Keisler-Shelah isomorphism for two fixed elementarily equivalent structures such as $\mathbb{R}$ and $\mathbb{R}_{alg}$? What machinery is useful in investigating this?
-
## 2 Answers
Under the Continuum Hypothesis, your solution space is all nonprincipal ultrafilters. This is because under CH, the ultrapower $M^N/U$ of a mathematical structure $M$ of size at most continuum does not actually depend on the (nonprincipal) ultrafilter $U$. One can see this by using the fact that the ultrapower will be saturated, and so one can run a back-and-forth argument to achieve the isomorphism. In particular, it follows under CH that any $U$ will witness your desired isomorphism for $R^N/U\cong (R_{alg})^N/U$. (See Corollary 6.1.2 in Chang-Keisler's book Model Theory.)
A similar fact holds for larger cardinals and larger structures under GCH, but here, one needs an additional assumption on the ultrafilter. Namely, Theorem 6.1.9 in Chang-Keisler asserts that if $2^\alpha=\alpha^+$ and $A$ and $B$ are two structures of size at most $\alpha^+$, then they are elementarily equivalent if and only if $\Pi_DA\cong\Pi_D B$ for any $\alpha^+$-good incomplete ultrafilter $D$ on $\alpha$. The proof uses the same saturation idea, and this establishes the Keisler-Shelah theorem in the case that GCH holds.
Chang-Keisler states (page 393-394) that it is open whether the assertion of Theorem 6.1.9 stated above holds under $\neg CH$.
-
This is so helpful, thanks so much! – Grant Olney Passmore May 13 2010 at 20:55
Thanks for accepting. Meanwhile, I'd still like to hear how flexible the choice of U is when CH fails... – Joel David Hamkins May 13 2010 at 21:03
I'm very interested in the not-CH case as well, though the question seems like it might be difficult. If we do eventually get a nice answer for the set of solutions in the context of not-CH, what is your feeling on the right machinery for examining structure in the solution space? Is the Rudin-Blass ordering on ultrafilters here the standard machinery? – Grant Olney Passmore May 13 2010 at 21:56
1
I've realized that the saturation+back-and-forth argument also gives you a quick proof of your instance of the Keisler-Shelah theorem in the case that CH holds, since the two ultrapowers will be elementary equivalent and saturated of size continuum. When CH fails, I'm not sure what happens, but the class of ultrafilters is intensely studied. The Rudin-Keisler order is equivalent to commutative diagrams factoring the ultrapower maps, so this may interest you. – Joel David Hamkins May 13 2010 at 22:37
Ah beautiful! And thank you for suggesting the Rudin-Keisler order + pointing out that equivalence. Again, this is very helpful. – Grant Olney Passmore May 13 2010 at 23:45
As you might expect, things are consistently much more interesting if $CH$ fails. This has been explored by Shelah in a fascinating series of papers "Vive la difference I - III". For example, it is consistent that there is a nonprincipal ultrafilter $\mathcal{U}$ on $\omega$ such that if $(R_{n})$ and $(S_{n})$ are sequences of discrete rank 1 valuation rings having countable residue fields, then any isomorphism $\varphi: \prod_{\mathcal{U}}R_{n} \to \prod_{\mathcal{U}}S_{n}$ is an ultraproduct of isomorphisms $f_{n}: R_{n} \to S_{n}$. In particular, $\mathcal{U}$-almost all $R_{n}$ are isomorphic to the corresponding $S_{n}$ and so the Ax-Kochen isomorphism theorem doesn't hold with respect to $\mathcal{U}$.
If you are only interested in ultraproducts of fixed structures $A$, $B$, then I should mention that it is also consistent that there exists an ultrafilter $\mathcal{A}$ on $\omega$ such that if $A$ and $B$ are countable structures which satisfy the strong independence property, then the corresponding $\mathcal{A}$-ultraproducts are isomorphic iff $A \cong B$.
-
This is fascinating and extremely helpful. Thanks so much! – Grant Olney Passmore May 13 2010 at 23:44
I didn't know about the connection between these questions and the strong independence property! Is this "recent" (well, mod the fact you wrote this answer two years ago) or is this part of the Vive la différance I-III series? – Andrés Villaveces Mar 16 2012 at 4:35
Yes, it is part of the Vive la différance I-III series. – Simon Thomas Mar 16 2012 at 13:24
http://mathoverflow.net/questions/67295?sort=oldest
## Question on nef and big divisors
Suppose $L$ is an effective divisor and $H$ is ample (on a smooth 3-fold) such that $L+H$ is nef.
How does one show that $L+H$ is big (i.e. $(L+H)^3 > 0$)?
This was claimed in a paper, without proof. So I assume it should be well-known. I am not sure if the restriction on dimension is necessary or not.
-
## 2 Answers
This is indeed true. See for example Lemma 2.60 in Kollar-Mori, Birational Geometry of Algebraic Varieties. In particular, it is shown that a Cartier divisor $D$ is big if and only if $mD \sim A + E$ for some ample divisor $A$ and effective divisor $E$. This is also proven in Corollary 2.2.7 in Lazarsfeld's Positivity in Algebraic Geometry I.
Anyway, they make no assumptions on the dimension or singularities of the ambient variety; you also don't need the nef assumption.
-
2 basic comments:
1) $L+H$ big is not equivalent to $(L+H)^3>0$
example: $\mathbb P^2$ blown up at a point, $H=$ any ample and $L=mE$ where $E=$exceptional curve and $m\gg 0$.
2) this is the proof (as mentioned above, it can be found in books and it doesn't require $H+L$ nef):
$h^0(X,a(L+H))\ge h^0(X,aH)\ge \frac { a^n}{n!} H^n + \mathcal O (a^{n-1})$
where $n=\dim X$ and $a\gg 0$ (simple consequence of Riemann-Roch and Serre vanishing).
$\Rightarrow L+H$ is big
-
5
But the OP assumes that $D=L+H$ is nef, and in that case bigness is equivalent to $D^3>0$. – J.C. Ottem Jun 9 2011 at 12:01
1
Good comment... – Mohammad F.Tehrani Jun 9 2011 at 14:05
http://math.stackexchange.com/questions/63635/compute-cos0
# Compute $\cos'(0)$
I know that $\cos'(0)$ is $0$, but my work follows:
$$\begin{align}\cos'(0) &= \lim_{\Delta x\to 0}\frac{f(0+\Delta x) - f(0)}{\Delta x}\\ &= \frac{\cos(0+\Delta x) - \cos(0)}{\Delta x}\\ &= \frac{\cos(0) - \cos(0)}{0} = \frac{0}{0}\end{align}$$
Where have I gone wrong? What can I do to show that it equals 0?
(Note: No derivative rules are allowed in my calculus class yet, just difference quotients)
-
1
It should say $f(0 + \Delta x) - f(0)$ in the first line. You want to show that the difference in function values over a small interval $\Delta x$, divided by the width of this interval (i.e. the "average" increase in $f$ on this interval) converges to some value $L = f'(0)$ as $\Delta x \to 0$. Also, to get something useful, you should first write out the formula with $\Delta x$, and only when you get something clean enough, let $\Delta x \to 0$. You lost the limit too soon and did not simplify anything. – TMM Sep 11 '11 at 19:08
Thanks for clearing that up, I copied it down incorrectly. Even with 0 instead of $\Delta x$, I still end up with $\frac{0}{0}$ – BKaylor Sep 11 '11 at 19:11
1
What else did you do wrong? You "dropped" the lim in front, with no reason. Then you set $\Delta x$ equal to $0$ with no reason. But you will still need to know some non-trivial property of trig functions. – GEdgar Sep 11 '11 at 19:12
## 2 Answers
You can't just substitute in 0 for the limit - that loses all important information and leaves you with an indeterminate form. You must be wittier - usually, with $\cos$ and $\sin$ and their derivatives (even with difference quotients), one ends up using the following identities.
$$\cos A - \cos B = -2 \sin {\frac{1}{2} (A + B)} \sin {\frac{1}{2} (A - B)}$$
$$\cos(x) = \sin(\frac{\pi}{2} - x)$$
Or you could use the difference of $\sin$ angles too, depending on how you tackle the proof. The key is to not get rid of the $\Delta x$ too early - evaluate the limit only when you come across a form that you know how to evaluate.
I think that gives you a next step, right?
-
@BKaylor One may also apply the double-angle formulas: $\cos(2\theta) = 1-2 \sin^2 \theta$. (Of course, this can also be derived from the formulas given in this answer.) – Srivatsan Sep 11 '11 at 19:24
1
The main things I'm supposed to use are algebraic limit manipulations ( like lim f(x) + g(x) = lim f(x) + lim g(x) ) And "special" trig limits that we've worked out in class. We've been told that we don't need to prove the two limits $\lim_{x\to 0}\frac{\sin(x)}{x}$ and $\lim_{x\to 0}\frac{1- \cos(x)}{x}$, just to assume they equal 1 and 0 respectively. I think the proof $\lim_{\Delta x\to 0}\frac{[\lim_{\Delta x\to 0} \cos(0 + \Delta x)] - \cos(\Delta x)}{\Delta x} = 0$ Should work... thanks for pointing me in the right direction! – BKaylor Sep 11 '11 at 19:40
– Srivatsan Sep 11 '11 at 19:42
@Sri: Ah, you're right. I wasn't sure how you'd go from e.g. $\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)$ and others to $\cos(2x) = 1 - 2\sin^2(x)$, but it's quite easy after all ;) – TMM Sep 11 '11 at 19:55
@BKaylor: One limit is enough, no need for two! So you can use $\cos'(0) = \lim_{\Delta x \to 0} \frac{\cos(0+\Delta x) - \cos(0)}{\Delta x} = \lim_{\Delta x \to 0} \frac{\cos(\Delta x) - 1}{\Delta x} = 0$ and you are done. – TMM Sep 11 '11 at 19:57
Because it is easier to type, I will write $h$ instead of $\Delta x$. We want to find $$\lim_{h \to 0} \frac{\cos h -1}{h}.$$ Multiply "top" and "bottom" by $\cos h+1$. We want $$\lim_{h \to 0} \frac{(\cos h -1)(\cos h+1)}{h(\cos h+1)}.$$ On top we now have $-\cos^2 h +1$, that is, $-\sin^2 h$. So we want $$\lim_{h \to 0} \frac{-\sin^2 h}{h(\cos h+1)}\qquad\text{that is,}\qquad \lim_{h \to 0} \left(\frac{\sin h}{h}\cdot \frac{-\sin h}{\cos h+1}\right).$$
Finally, let $h \to 0$. We are allowed to use the fact that $\lim_{h\to 0}\frac{\sin h}{h}=1$. And it is clear that $\lim_{h\to 0}\frac{-\sin h}{\cos h +1}=0$. So our limit is $0$.
Comment: The idea used above is related to the process of "rationalizing the numerator," which you may have seen already, for example in finding the derivative of $\sqrt{x}$ at $x=3$ from the definition of the derivative.
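For a quick numerical sanity check of that limit (not part of the original answer, just an illustration in Python), one can watch $(\cos h - 1)/h$ shrink as $h$ does:

```python
import math

# (cos h - 1) / h should tend to 0 as h -> 0, matching cos'(0) = 0.
for h in (0.1, 0.01, 0.001, 0.0001):
    print(h, (math.cos(h) - 1) / h)   # the values behave like -h/2, heading to 0
```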
-
http://math.stackexchange.com/questions/213392/using-functions-in-maple
# Using functions in Maple
How do approach this problem?
Write a function points which takes a list L of non-negative integers and returns a list of pairs $[x, y]$ where $1 \le x \le$ the length of $L$ and $1 \le y \le L[x]$.
-
1
Why ask in a math forum instead of in a programming forum? – GEdgar Oct 15 '12 at 1:54
## 1 Answer
This is easily done as a nested loop, i.e. one loop within another, where one or both of the end-points of the inner loop will depend upon the current value of the outer loop.
You can use a pair of for-do loops, but then you have to be creative about where to store the list [x,y] that gets generated each time through the inner loop (or be inefficient in memory and repeatedly concatenate with a list of previous pairs). It's easier and efficient to accumulate instead an expression sequence of all the [x,y] pairs using a pair of nested calls to the seq command rather than a pair of for-do loops.
The inner loop can determine y, since that loop has to run from 1 to L[x] where a particular x would be known. The outer loop can determine x, since the restrictions on x don't involve y.
The length of a list L is obtained with the command nops(L).
Experiment with a pair of nested seq calls like the following, to get an idea of how they can form a longer sequence of results. Your goal is to nest such a pair in the right order and to find the particular ranges i=a..b and j=c..d where some of the a,b,c,d of the inner seq may depend upon i or j of the outer seq.
seq( seq( [i,j], i=4..j ), j=1..6 );
Notice that if the right end-point of a range is less than the left end-point then that range is empty: it produces nothing. This comes in handy. For example, the following produces NULL.
seq( i, i=4..2 );
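The answer intentionally stops short of the complete Maple code. Purely as an illustration of the same nested-loop idea (written in Python rather than Maple, with a hypothetical function name matching the exercise), the structure is:

```python
def points(L):
    # For each 1-based index x of L, pair it with every y from 1 to L[x].
    # An empty inner range (when L[x] == 0) contributes nothing, exactly like a
    # seq whose right end-point is smaller than its left end-point.
    return [[x, y] for x in range(1, len(L) + 1) for y in range(1, L[x - 1] + 1)]

print(points([2, 0, 3]))   # [[1, 1], [1, 2], [3, 1], [3, 2], [3, 3]]
```

Translating this back into a pair of nested seq calls, with nops(L) for the length, gives the Maple solution.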
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8942129611968994, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/renormalization
|
# Tagged Questions
Renormalization is an ensemble of techniques which serves to treat the infinities which appear in quantum field theory or statistical mechanics.
1answer
59 views
### Beta-function non-zero at classical level?
In Jaume Gomis's lecture 5 on CFT at Perimeter Institute, he says (at 27:40 minute mark) that the beta function, classically, of the $m^2$ parameter in massive $\lambda \phi^4$ theory is $\beta(m^2)$ ...
0answers
67 views
### Setting of renormalization scale in field theory calculations
In dimensional regularization an arbitrary mass parameter $\mu$ must be introduced in going to $4-\epsilon$ dimensions. I am trying to understand to what extent this parameter can be eliminated from ...
1answer
71 views
### Can Divergences in Nonrenormalizable Theories Always Be Absorbed by (An Infinite Number of) Counterterms?
For example, consider the $\phi^3$ theory in $d=8$, with Lagrangian: $\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{1}{2}m^{2}\phi^{2}-\frac{1}{3!}\lambda_{3}\phi^{3}$. In 8 ...
1answer
63 views
### Renormalizibility by power counting
When testing a theory for its renormalizability, in practice one always calculates the mass dimension of the coupling constants $g_i$. If $[g_i]>0$ for any $i$ the theory is not renormalizable. I ...
1answer
58 views
### Under what conditions are the renormalization group equations “reversible”?
As I understand it, the renormalization group is only a semi-group because the coarse graining part of a renormalization step consisting of Summing / integrating over the small scales (coarse ...
2answers
126 views
### Renormalization condition: why must be the residue of the propagator be 1
In on-shell scheme, one of the renormalization conditions is that the propagator, say, a scalar theory $$\frac{1}{p^2+m^2-\Sigma(p^2)-i\epsilon}$$ must have a unit residue at the pole of ...
0answers
22 views
### What is the relationship between complex time singularities and UV fixed points?
In this paper it is described how the turbulent kinetic energy spectrum and the flatness (a measure for intermittency) are governed by the position of the (dominant) singularities of the solutions of ...
2answers
77 views
### What constant varies in the fine structure constant?
Using the renormalization group approach, coupling constants are "running". If we apply this to the fine structure (coupling) constant, we do know that, e.g., at energies around the Z mass, $\alpha$ ...
1answer
86 views
### physical importance of regularization in QFT?
The standard lore in QFT is that one must work with renormalised fields, mass, interaction etc. So we must work with "physical" or renormalised quantities and all our ignorance with respect to its ...
0answers
48 views
### Does the number of left handed chiral quark superfields always equal half the number of quark flavours?
In Weinberg's "The Quantum Theory of Fields Vol III" page 267 we're told that $n_f = 2N_f$. Where $n_f$ are the number of flavours and $N_f$ is the number of left chiral quark superfields (or the ...
1answer
80 views
### Infinite Energy of Point Charges (in the context of classical field theories)
In the context of classical physics,is there any renormalization method to avoid infinite energy of point charges?
1answer
76 views
### Degree of divergence of a Feynman diagram
I am studying the degrees of divergence of Feynman diagrams. I feel that I miss something but I don't really understand what. Please apologize if this question is silly. Anyway. As an introduction to ...
0answers
94 views
### Beta function of pure $SU(N_\text{c})$ Yang-Mills theory
What is the dependence of the beta function of pure $SU(N_\text{c})$ Yang-Mills theory on the number of colors? I guess ...
1answer
137 views
### One-loop $\phi^4$ theory in $d = 3$
I'm trying to calculate the 1 loop correction to the propagator in massless $\phi^4$ theory, in $d = 3$, just for fun. The diagram just looks like a straight line with a circle touching tangently to ...
1answer
148 views
### Why is Einstein gravity not renormalizable at two loops or more?
(I found this related Phys.SE post: Why is GR renormalizable to one loop?) I want to know explicitly how it comes that Einstein-Hilbert action in 3+1 dimensions is not renormalizable at two loops or ...
1answer
113 views
### What does it mean to integrate out fields from a theory?
I've done a fair bit of reading on this subject and I'm still confused about the basic principle of integrating out fields in QFT. When we have a function of 2 fields a and b, f(a,b), and we integrate ...
2answers
93 views
### Dimensional Regularization involving $\epsilon^{\mu\nu\alpha\beta}$
Is it possible to dimensionally regularize an amplitude which contains the totally antisymmetric Levi-Civita tensor $\epsilon^{\mu\nu\alpha\beta}$? I don't know if it's possible to define ...
1answer
100 views
### Why is $R^2$ gravity not unitary?
I have often heard that $R^2$ gravity (as studied by Stelle) is renormalisable but not unitary. My question is: what is it that causes the theory to suffer from problems with unitarity? My naive ...
1answer
94 views
### Symmetries in Wilsonian RG (2)
This question is related to the paper http://arxiv.org/abs/1204.5221 and is a continuation of the previous question Symmetries in Wilsonian RG In the liked paper why do the equalities in equation ...
0answers
75 views
### Dimensional regularization and IR divergences and scale invariance
I want to know if dimensional regularization has any issues if the theory has IR divergences or is scale invariant. Does dimensional regularization see "all" kinds of divergences? I mean - what ...
1answer
75 views
### What states are satisfying an entropic area law and why do they satisfy it? More specificly why do matrix product states satisfy it?
I am currently reading some papers concerning the question why the density matrix renormalization group (DMRG) method is working well for simulating one dimensional systems and bad for higher ...
0answers
41 views
### Divergent sum in lightcone quantization of bosonic string theory
I had the following question regarding lightcone quantization of bosonic strings - The normal ordering requirement of quantization gives us this infinite sum $\sum_{n=1}^\infty n$. This is regularized ...
1answer
260 views
### Symmetries in Wilsonian RG
I wanted to know if there is a theorem that in writing a Lagrangian if one missed out a term which preserves the (Lie?) symmetry of the other terms and is also marginal then that will necessarily be ...
1answer
83 views
### Radial quantization and infrared divergences
I am reading Ginspard lectures "Applied CFT" http://arxiv.org/abs/hep-th/9108028 which is not my first material on the subject. He tries to motivates radial quantization on the reason that ...
0answers
142 views
### Regulator-scheme-independence in QFT
Are there general conditions (preservation of symmetries for example) under which after regularization and renormalization in a given renormalizable QFT, results obtained for physical quantities are ...
1answer
96 views
### Soft Mass and Physical Mass in Softly-broken SUSY
In softly broken SUSY, the bare mass parameters may be specified at e.g. the GUT scale, and then we can run these down to another scale using RGEs, similar in form to the RGEs for gauge couplings, ...
0answers
109 views
### Zeta regularization gone bad
This may sound as a mathematical question, but it should be very familiar to physicists. I am trying to perform an expansion of the function $$f(x) = \sum_{n=1}^{\infty} \frac{K_2(nx)}{n^2 x^2},$$ for ...
1answer
147 views
### Renormalization: Why is only a finite number of counter-terms allowed?
I have a question please about renormalization in QFT. Why a renormalizable theory requires only a finite number of counter-terms?
2answers
91 views
### Can't find the mass scale; calculation using the modified minimal subtraction scheme and dimensional regularisation
I am taking a course on quantum field theory where there is some confusion regarding the renormalisation scheme we are using (and a corresponding one in my mind). Apparently the lecturer meant MS-bar ...
0answers
77 views
### Difference between a Fixed Point and a Limit Point in implementations of the Renormalization Group (RNG) in Large Eddy Simulation (LES) model
In the introduction of this paper, it is explained that and how the application of a dynamic subrid scale model for turbulence into a large eddy simulation (LES) model corresponds to doing one ...
1answer
184 views
### Why do irrelevant operators require infinitely many counterterms?
As far as I understand it, in the Wilsonian picture of renormalization, we view a theory as having some fixed cutoff and bare couplings, and integrate out high-momentum modes to understand what ...
1answer
105 views
### Renormalization Group and Ising with d=1 and D=1
I have a question about the results of RG on Ising model. I know it's possible to obtain two couple of relations $K'(K)$, $q(K')$ $K(K')$, $q(K)$ between the coupling costants. My problem arise ...
1answer
150 views
### Can modern twistor methods to calculate scattering amplitudes be applied to renormalization group calculations?
As explained for example in this article by Prof. Strassler, modern twistor methods to calculate scattering amplitudes have already been proven immensely helpful to calculate the standard model ...
0answers
72 views
### Drawing the RG flow diagram
In real-space renormalization group how does one find the complete RG flow exactly, (not schematically)? I understand it needs to be done on a computer. For example, I have the ising model on a ...
3answers
476 views
### Why is gravity so hard to unify with the other 3 fundamental forces?
Electricity and magnetism was unified in the 19th century, and unification of electromagnetism with the weak force followed suit, bringing into play the electroweak force. I've been told that ...
0answers
93 views
### Does the Standard Model have a Landau pole?
I have seen the statement that the Standard Model has a Landau pole, or at least it its believed that it does at $\sim 10^{34}$ GeV. Has this actually been proven (at least in perturbation theory, as ...
3answers
98 views
### Effective operator in four-fermion interaction
In one book, I have got the following lines which I found myself unable to understand what is effective operator? The paragraph is given below: The weak interaction describes nuclear beta decay, ...
0answers
58 views
### Dimensional transmutation in Gross-Neveu vs others
Firstly I don't know how generic is dimensional transmutation and if it has any general model independent definition. Is dimensional transmutation in Gross-Neveau somehow fundamentally different ...
0answers
67 views
### Exact Beta Functions in Statistical Mechanics
I'm looking for analytically solvable models in statistical mechanics (classical or quantum) or related areas such as solid state physics in which the beta function for a certain renormalization ...
1answer
157 views
### Defining a CFT using beta-functions
Won't it be correct to define a CFT as a QFT such that the beta-function of all the couplings vanish? But couldn't it be possible that the beta-function of a dimensionful coupling vanishes but it ...
1answer
67 views
### What is the physical meaning of this simplification to calculate the effective coupling constants for a Gaussian model with quartic interactions?
To calculate the effective coupling constants $u'_2(q)$ and $u'_4(q)$ of the effective Hamiltonian eq (4.9) of this paper, $H' = -\frac{1}{2}\int\limits_q u'_2(q)\sigma'_q\sigma'_{-q} - \dots$
0answers
65 views
### A question on charge renormalization in QED
Let us work with charge renormalization in QED. Consider 2-point photon correlation function $\Pi_2(q^2)$ at one loop level. We normalize the coupling constant at $q^2=0$ (point of normalization). ...
0answers
32 views
### Good reference for renormalization [duplicate]
Possible Duplicate: Are there books on Regularization at an Introductory level? I am looking for a good introduction to renormalization (group). I have read several books about it but never ...
0answers
60 views
### CP-violation in weak and strong sectors
There is a possible CP-violating term in the strong sector of the standard model proportional to $\theta_\text{QCD}$. In the absence of this term, the strong interactions are CP-invariant. In the ...
0answers
94 views
### Confused by renormalization [duplicate]
Possible Duplicate: Suggested reading for renormalization (not only in QFT) I'm trying to learn QFT. I don't quite understand why renormalization works. If you are calculating a Feynman ...
0answers
98 views
### Nonpertubative renormalization in quantum field theory versus statistical physics
I am trying to work my head around how renormalization works for quantum field theory. Most treatments cover perturbative renormalization theory and I am fine with this approach. But it is not the ...
1answer
106 views
### Divergence in Supergravity
I'm not familiar with supergravity so here's my question: I've heard in talks that if one finds divergence for five-loop 4-graviton scattering amplitudes in five dimensions this translates to a ...
0answers
49 views
### Are irrelevant terms in the Kahler potential always irrelevant, even at strong coupling?
I've been reading about the duality cascade in Strassler's TASI '03 lectures (hep-th/0505153). He reminds us of the non-renormalization theorem theorem for the superpotential so that the beta ...
0answers
179 views
### Renormalization group evolution equations and ill-posed problems
There is a class of observables in QFT (event shapes, parton density functions, light-cone distribution amplitudes) whose the renormalization-group (RG) evolution takes the form of an ...
2answers
171 views
### What's the difference between divergences that can be corrected and those that can't
I'm confused by renormalization . If a lagrangian has a term with negative mass dimension , why can't the divergences be absorbed into lagrangian coefficients? What's the difference between ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8992881178855896, "perplexity_flag": "middle"}
|
http://maths.dept.shef.ac.uk/maths/sem_week.html?id=1765
|
School of Mathematics and Statistics (SoMaS)
Seminars this week
SP2RC seminar. Speaker: Dr Hannah Schunker (Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany). Home page: https://www.mps.mpg.de/homes/schunker/Elegant/Welcome.html
Prospects for inferring the subsurface structure of sunspots using helioseismology
Friday, 17 May at 13:00
LT 9
Abstract
One goal of local helioseismology is to elicit three-dimensional information about the subsurface structure of sunspots. The physical quantities include sound-speed perturbations and magnetic fields. This information can be used to constrain sunspot models. Helioseismology involves solving both the forward and inverse problem. Traditionally, the inverse problem is solved assuming a linear relationship between the magnitude of the subsurface perturbation and the effect on the waves. We use three-dimensional numerical MHD simulations of wave propagation to explore the seismic effect of various perturbations to a reference sunspot model. These perturbations include modifications to the Wilson depression, subsurface sound-speed enhancements, and subsurface magnetic field changes. We comment on the possibility of measuring such perturbations on the Sun, and on the validity of using linear inversions.
Pure Maths Postgraduate Seminar. Speaker: Tao Lu (Sheffield).
Invariants of Lie Algebras
Tuesday, 21 May at 13:00
Hicks Room J11
Abstract
The determination of invariants of Lie algebras is motivated by the important role played by these elements in Physics and in representation theory. For semisimple Lie algebras, the invariants were determined long ago. But for non-semisimple Lie algebras, invariants have only been determined when the dimensions are low. Let g be a finite dimensional Lie algebra. Our interest is the centre of the enveloping algebra U(g), which has a close relation with the invariant subalgebra of the symmetric algebra S(g).
In this talk I will give the definition of Lie algebras, enveloping algebras, and the invariants. Then we recall the classical results for semisimple Lie algebras, and introduce the Duflo isomorphism, giving a relationship between the invariants of S(g) and U(g). Finally I will introduce a method to compute the invariants by giving examples, and give some comments and criteria on the polynomiality of the centre of U(g).
Applied Maths Colloquium. Speaker: Alan Zinober (Sheffield). Home page: http://a-zinober.staff.shef.ac.uk/
A Brief History of Optimal Variational Problems and Some Recent Research
Wednesday, 22 May at 14:00
LT10
Abstract
The Calculus of Variations was initiated in the 17th Century and forms a basic foundation of modern optimal (maximising or minimising) variational problems, nowadays often called optimal control. An introduction to the Calculus of Variations with some sample examples will be presented. This will include the Euler-Lagrange and Hamiltonian formulation together with the associated final boundary value conditions. A numerical shooting method can be used to solve the resulting Two Point Boundary Value Problem (TPBVP), a set of differential equations. There are many interesting applications including the optimal spending of capital, reservoir control, maintenance and replacement policy of vehicles and machinery, optimal delivery of medicines, drug bust strategies, study for examinations and optimal presentation of a lecture like this one.

A new non-classical class of variational problems has been motivated by research on the non-linear revenue problem in the field of economics. This class of problem can be set up as a maximising problem in the Calculus of Variations (CoV) or Optimal Control. However, the state value at the final fixed time, $y(T)$, is a priori unknown and the integrand to be maximised is also a function of the unknown $y(T)$. This is a non-standard CoV problem that has not been studied before. New final value costate boundary conditions will be presented for this CoV problem and some results will be shown.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8526575565338135, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/119176/about-writing-of-mathematical-papers-closed
|
## About writing of mathematical papers [closed]
Isn't it better to use a mathematical language for writing mathematical papers? For example, to use the logic of mathematical symbols more than additional comments. Is it feasible?
-
7
Do you want your papers to be read by humans or by robots? – ayanta Jan 17 at 15:17
6
I'm not sure this question is a good fit for MO. As a general rule, I think that if one can explain underlying concepts without resorting to a lot of symbolism, then one should do so. Think of a pleasant discussion with a mathematician on a hike up a mountain, without benefit of pencil and paper, just talking. After all, we are humans, not computers. – Todd Trimble Jan 17 at 15:21
6
The $\Rightarrow$ symbol does not mean the same as "then", "so", "hence" or "therefore". It means that the thing on the right is a consequence of the thing on the left. It does not carry the vital point which is that you believe both statements are true. – James Cranch Jan 17 at 15:44
2
I would use $\exists$ if I were writing in some setting where the expression using the symbol is being viewed as a mathematical object in its own right, as in logic. Otherwise I would avoid it in favor of "There exists". – Adam Epstein Jan 17 at 15:46
4
It is in fact the opposite. You should favor a clear explanation in words over symbols. Use symbols only when it's not possible or too complicated to express what you want to say precisely enough in words. – Deane Yang Jan 17 at 15:59
## 1 Answer
The answer to your question is "No".
In Halmos's "How to write mathematics" (which you can find by googling), he refers to using symbols rather than words as "writing in code". It might make the writer's task easier but that is irrelevant, because it makes the reader's life considerably harder.
Halmos's article is a very good guide.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432993531227112, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/electromagnetic-radiation+general-relativity
|
# Tagged Questions
4answers
313 views
### Can a photon get emitted without a receiver?
It is generally agreed upon that electromagnetic waves from an emitter do not have to connect to a receiver, but how can we be sure this is a fact? The problem is that we can never observe non ...
1answer
618 views
### Why is light described by a null geodesic?
I'm trying to wrap my head around how geodesics describe trajectories at the moment. I get that for events to be causally connected, they must be connected by a timelike curve, so free objects must ...
2answers
609 views
### Does a charged particle accelerating in a gravitational field radiate?
A charged particle undergoing an acceleration radiates photons. Let's consider a charge in a freely falling frame of reference. In such a frame, the local gravitational field is necessarily zero, ...
1answer
198 views
### Experimental proof of gravitational redshift of light
Has the gravitational red shift been proven for electromagnetic waves only or also for a single photon?
2answers
262 views
### Effect of gravitation on light
Einstein predicted that the gravitational force can act on light. This was verified in one solar eclipse that light from a star near to the sun's disc bent due to Sun's gravity as predicted. Since ...
4answers
545 views
### Redshifting of Light and the expansion of the universe
So I have learned in class that light can get red-shifted as it travels through space. As I understand it, space itself expands and stretches out the wavelength of the light. This results in the light ...
3answers
1k views
### Do two beams of light attract each other in general theory of relativity?
In general relativity, light is subject to gravitational pull. Does light generate gravitational pull, and do two beams of light attract each other?
3answers
362 views
### Light bending by black holes
In the center of our milky way, it is assumed that a black hole exists with a mass of $\approx 4\times 10^6$ times our sun's mass. How much light bending (in degrees) would arise for stars that are in ...
1answer
399 views
### Low frequency electromagnetic waves in General Relativity
I am becoming familiar with the Geometric Optics approximation in General Relativity which (to summarise) says that EM waves follow null geodesics under the geometric optics approximation. In the ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9243927597999573, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Basis_functions
|
# Basis function
In mathematics, a basis function is an element of a particular basis for a function space. Every continuous function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors.
In numerical analysis and approximation theory, basis functions are also called blending functions, because of their use in interpolation: In this application, a mixture of the basis functions provides an interpolating function (with the "blend" depending on the evaluation of the basis functions at the data points).
## Examples
### Polynomial bases
The collection of quadratic polynomials with real coefficients has $\{1, t, t^2\}$ as a basis. Every quadratic polynomial can be written as $a\cdot 1 + bt + ct^2$, that is, as a linear combination of the basis functions $1$, $t$, and $t^2$. The set $\{\tfrac12(t-1)(t-2),\ -t(t-2),\ \tfrac12 t(t-1)\}$ is another basis for quadratic polynomials, called the Lagrange basis.
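As a small numerical illustration (added here, not part of the article; the nodes $0,1,2$ are read off from the Lagrange polynomials above), the following Python sketch checks that a quadratic is recovered exactly from its coordinates in the Lagrange basis, which are simply its values at the nodes:

```python
import numpy as np

# p(t) = 3 - 2t + t^2, written in the monomial basis {1, t, t^2}
p = lambda t: 3.0 - 2.0 * t + t**2

# Lagrange basis for the nodes 0, 1, 2 (the second basis listed above)
L0 = lambda t: 0.5 * (t - 1) * (t - 2)
L1 = lambda t: -t * (t - 2)
L2 = lambda t: 0.5 * t * (t - 1)

# coordinates of p in the Lagrange basis: its values at the nodes
c0, c1, c2 = p(0), p(1), p(2)

t = np.linspace(-1.0, 3.0, 9)
print(np.allclose(c0 * L0(t) + c1 * L1(t) + c2 * L2(t), p(t)))   # True
```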
### Fourier basis
Sines and cosines form an (orthonormal) Schauder basis for square-integrable functions. As a particular example, the collection:
$\{\sqrt{2}\sin(n\pi x) \; | \; n\in\mathbb{N} \} \cup \{\sqrt{2} \cos(n\pi x) \; | \; n\in\mathbb{N} \} \cup\{1\}$
forms a basis for $L^2(0,1)$.
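As an illustration (added here, and restricted to the sine part of the collection, which is orthonormal on its own), one can check a few $L^2(0,1)$ inner products numerically: $\int_0^1 2\sin(n\pi x)\sin(m\pi x)\,dx$ is $1$ for $n=m$ and $0$ otherwise.

```python
import numpy as np
from scipy.integrate import quad

for n in range(1, 4):
    for m in range(1, 4):
        # inner product of sqrt(2) sin(n pi x) and sqrt(2) sin(m pi x) on (0, 1)
        val, _ = quad(lambda x: 2 * np.sin(n * np.pi * x) * np.sin(m * np.pi * x), 0, 1)
        print(n, m, round(val, 12))
```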
## References
• Ito, Kiyoshi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. p. 1141. ISBN 0-262-59020-4.
## See also
Basis (linear algebra) (Hamel basis) Schauder basis (in a Banach space) Dual basis Biorthogonal system (Markushevich basis)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8306498527526855, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/260226/yes-or-no-real-analysis-continuity-compactness?answertab=votes
|
# Yes or No, Real Analysis, continuity, compactness
Am I correct about the statements below?
1. The limsup and liminf of the sequence $n^2$ (meaning $1,4,9,16,\dots$) are equal. T
2. Every bounded sequence has at most one convergent subsequence. F
3. Are the following characteristic functions Riemann integrable on the interval $[0,1]$?
• $\chi_{\left[0,\frac12\right]}$ yes
• $\chi_{\Bbb Q}$ no
• $\chi_C$, where $C$ is the Cantor set yes
• $\chi_{\Bbb R-\Bbb Q}$ no
• $\chi_{\left\{\frac1n:n\in\Bbb N\right\}}$ no
4. No continuous function $f:\Bbb R\to\Bbb R$ can have a minimum value. (False)
5. Let $I_1\supset I_2\supset I_3\supset\dots$ be a nested sequence of closed intervals in $\Bbb R$ whose lengths form a decreasing sequence converging to $0$. Choose points $a_n\in I_n$ for each $n$. Then the sequence $a_n$ converges, (I think it’s true)
6. Consider a function $f:\Bbb R\to\Bbb R$. Which of the following statements are true?
• If $f$ is continuous, then it maps every compact set onto a compact set? yes
• If $f$ maps every compact set onto a compact set, then it is continuous. no
• If $f$ is continuous, then it maps every connected set onto a connected set? yes
• Is it true that if $f$ maps every connected set onto a connected set, then it is continuous. no
• Is it true that if $f$ is continuous, then it maps every open set onto an open set? yes
• If $f$ maps every open set onto an open set, then it is continuous. yes
(The original image from which this is copied is here.)
-
2
Copying a page from a book is not the way questions are asked on this site! – Fabian Dec 16 '12 at 20:24
Sorry, there are questions I got partially correct on a online quiz. It didn't specify which ones are correct or wrong. So I type in word. Put it as a image and ask here. – Victoria J. Dec 16 '12 at 20:28
1
Please don’t vandalize the question. – Brian M. Scott Dec 16 '12 at 20:44
@Fabian: A close look at the image made it obvious that this was from an online exercise of some sort; you can even see where Enter was hit after the answers were typed in. It would have been more readable had Moriah copied it out, but it clearly isn’t a case of copying a page from a book. It’s a perfectly good question. – Brian M. Scott Dec 16 '12 at 20:46
## 1 Answer
The last two parts of (6) are wrong. First, the constant function $f(x)=0$ is continuous, but the only open set that it maps to an open set is $\varnothing$. For the other example, define an equivalence relation $\sim$ on $\Bbb R$ by $x\sim y$ iff $x-y\in\Bbb Q$. For each $x\in\Bbb R$ the $\sim$-equivalence class of $x$ is $x+\Bbb Q=\{x+q:q\in\Bbb Q\}$, which is clearly dense in $\Bbb R$. Since each $\sim$-equivalence class is countable, there are $|\Bbb R|$ of them, so there is a bijection $\varphi$ from $\Bbb R/\sim=\{x+\Bbb Q:x\in\Bbb R\}$, the set of $\sim$-equivalence classes, to $\Bbb R$. Now define
$$f:\Bbb R\to\Bbb R:x\mapsto\varphi(x+\Bbb Q)\;.$$
Every open interval in $\Bbb R$ contains a member of each $\sim$-equivalence class, so $f$ maps each open interval of $\Bbb R$ onto $\Bbb R$, which is an open set. Thus, $f$ takes open sets to open sets, but $f$ is certainly not continuous.
The last part of (3) is also wrong: $\chi_{\left\{\frac1n:n\in\Bbb N\right\}}$ is bounded and has only countably many points of discontinuity, so it’s Riemann integrable.
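To make the last point concrete, here is a small numerical illustration (added here, not part of the answer): the lower Darboux sums of $\chi_{\left\{\frac1n:n\in\Bbb N\right\}}$ on $[0,1]$ are all $0$, while the upper Darboux sums over uniform partitions shrink to $0$, so the function is Riemann integrable with integral $0$.

```python
def upper_darboux_sum(N):
    """Upper Darboux sum of the indicator of S = {1/n : n = 1, 2, ...} over the
    uniform partition of [0, 1] into N subintervals; the lower sum is always 0."""
    marked = {0}                # every 1/n with n > N lies in the first subinterval [0, 1/N]
    for n in range(1, N + 1):
        k = N // n              # floor((1/n) * N)
        if k < N:
            marked.add(k)       # 1/n lies in [k/N, (k+1)/N]
        if N % n == 0 and k >= 1:
            marked.add(k - 1)   # 1/n is a partition point, hence also the right
                                # endpoint of the previous subinterval
    return len(marked) / N

for N in (10, 100, 1000, 10000, 100000):
    print(N, upper_darboux_sum(N))   # decreases to 0 (on the order of 1/sqrt(N))
```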
-
Thanks a lot!! I am a new user so I constantly mess things up. And the misunderstanding make me feel sad. Then my reputation drop so I can't even vote for a great answer. So I erased them to avoid getting a negative reputation...I really appreciate your help and effort! Thank you again! – Victoria J. Dec 16 '12 at 21:19
@Moriah: You’re welcome. One tip: in general people like it better if you type in the question, rather than link to an image (unless, of course, the question requires some sort of picture to make sense). I really don’t understand why so many people downvoted the question, though; if they’d actually looked at it, they’d have seen that it was a perfectly good one. – Brian M. Scott Dec 16 '12 at 21:22
I got what you said about the last two parts of (6). But I have one last question about last part of (3). I remember one theorem said that one function is Riemann integrable as long as its discontinuity forms a set of measure zero. We did't discuss Measure theory in detail in class. However, I used that theorem and I thought the interval [0,1]- the sequence 1/n is not countable. So it must not form a measure zero set. Then I got wrong. But where exactly did I get wrong? – Victoria J. Dec 16 '12 at 21:23
1
@Moriah: The set $\left\{\frac1n:n\in\Bbb N\right\}$ is countable, and the set of discontinuities is $\{0\}\cup\left\{\frac1n:n\in\Bbb N\right\}$, which is also countable, so both are sets of measure $0$. – Brian M. Scott Dec 16 '12 at 21:25
M.Scott: ! Thanks a lot. I am not quite clear about "the set of discontinuity" before – Victoria J. Dec 16 '12 at 21:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470462203025818, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/tags/navier-stokes/info
|
# Tag info
## About navier-stokes
The Navier-Stokes equations describe fluid flows in continuum mechanics.
## When to Use the Tag and aims of this description
Use the tag navier-stokes when asking questions about fluid flows as modeled by the Navier-Stokes equations. Hopefully, this description may give a basis for unified notation in discussions. Perhaps it will even help people to formulate questions in a clearer way.
## Introduction
The Navier-Stokes equations model fluid flows based on the continuum-mechanics hypothesis: the molecular nature of matter is ignored. This motivates the use of differential equations to express basic mechanical principles. When reasoning in terms of "particles" in this context, one should understand "a small amount" of matter (much larger than the size of molecules, but still small enough for them to be "infinitesimal" with respect to the differential mathematics involved), not molecules.
In what follows, bold symbols denote vectors (2D or 3D) and the usual differential operators are used without explanation. The following quantities are used throughout:
• $\boldsymbol{u}$: the velocity field,
• $p$: the pressure field,
• $\rho$: the density of the fluid,
• $E$: the total energy of the flow,
• $\boldsymbol{q}$: the heat flux,
• $\sigma$: the Cauchy stress tensor, expressing the internal forces that neighbouring particles of fluid exert upon each other,
• $\mu$: the dynamic viscosity (assumed constant throughout),
• $\boldsymbol{f}$: external (volumic) forcing term (for example, the acceleration of gravity $\boldsymbol{g}$).
## General formulation
We need to respect three principles:
• Mass conservation: no matter is created nor destroyed,
• The rate of change of momentum of a fluid particle is equal to the force applied to it (Newton's second law)
• Energy conservation: it is neither created nor destroyed.
In all generality, these may be expressed as:
Mass conservation: $\partial_t \rho + \nabla \cdot (\rho \boldsymbol{u}) = 0$
Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla) \boldsymbol{u}) = \nabla \cdot \sigma + \rho \boldsymbol{f}$
Energy conservation: $\partial_t E + \nabla \cdot (\boldsymbol{u} E + \sigma \boldsymbol{u} - \boldsymbol{q}) = 0$
## Incompressible fluids
In a context where we ignore heat phenomena (isothermal fluid), assume constant density of the fluid as well as a linear relation between stress and rate of strain, $\sigma = -p \mathbb{I} + \mu ( \nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^T)$ (Newtonian fluid), the important conservation laws are those for mass and momentum (i.e., Newton's second law), which are given by
Mass conservation: $\nabla \cdot \boldsymbol{u} = 0$
Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u}) = -\nabla p + \mu \Delta \boldsymbol{u} + \rho \boldsymbol{f}$
The pressure acts as a means to enforce the incompressibility (divergence-free) condition represented by the mass conservation equation; it does not have the same meaning as in the compressible case. Energy is not conserved in this context, it is dissipated by the viscous nature of the fluid (internal friction) and lost as heat (which we don't "track" in this context).
These are perhaps the most commonly encountered form of the Navier-Stokes equations. They are famously the subject of a Clay Mathematics Institute Millennium Prize.
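As an illustration of these equations (added here, not part of the tag description), the classical two-dimensional Taylor-Green vortex is an exact solution of the incompressible system with $\rho = 1$ and no forcing; the SymPy sketch below verifies the divergence-free condition and the $x$-component of the momentum balance, printing $0$ for both residuals.

```python
import sympy as sp

x, y, t, nu = sp.symbols('x y t nu')

# classical 2D Taylor-Green vortex (rho = 1, no forcing)
u = sp.cos(x) * sp.sin(y) * sp.exp(-2 * nu * t)
v = -sp.sin(x) * sp.cos(y) * sp.exp(-2 * nu * t)
p = -sp.Rational(1, 4) * (sp.cos(2 * x) + sp.cos(2 * y)) * sp.exp(-4 * nu * t)

# mass conservation: div u = 0
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))

# x-momentum residual: u_t + u u_x + v u_y + p_x - nu (u_xx + u_yy)
residual = (sp.diff(u, t) + u * sp.diff(u, x) + v * sp.diff(u, y)
            + sp.diff(p, x) - nu * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))
print(sp.simplify(residual))
```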
## Compressible fluids
When the fluid is compressible, the density becomes a field to be solved for, and we need an additional equation for the system, provided by the energy conservation principle. The exact form depends on the nature of the fluid, more precisely on its thermodynamic behaviour.
The more general formulation reads:
Mass conservation: $\partial_t \rho + \nabla \cdot (\rho \boldsymbol{u}) = 0$
Balance of momentum: $\rho (\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u}) = -\nabla p + \mu \Delta \boldsymbol{u} + (\mu/3 + \mu^v)\nabla(\nabla\cdot \boldsymbol{u})+ \rho \boldsymbol{f}$
where $\mu^v$ is the bulk viscosity coefficient.
Conservation of energy may be expressed in various ways depending on the fluid, a general discussion would require to delve into thermodynamic considerations which are outside the scope of this article. [Simple example?]
## Remarks
As in all physical problems, to obtain a unique and physically reasonable solution one must know the initial conditions and the conditions at all boundaries. An example is the no-slip boundary condition, which requires the fluid to adhere to the boundary: $\boldsymbol{u}\vert_{\text{boundary}} = \boldsymbol{v}_{\text{boundary}}$
The equations above are expressed in physical dimensions. It is possible to rescale time and space and normalize the velocity and pressure fields in a number of ways to get rid of the physical constants or to make a specific new one appear. The best known formulation involves the Reynolds number: $$\boldsymbol{u}(\boldsymbol{x},t), p(\boldsymbol{x},t) \mapsto U~\boldsymbol{u}(\boldsymbol{x}/L,~t~U/L),\rho U^2~p (\boldsymbol{x}/L,~t~U/L)$$ with $L, U$ a reference length and velocity (of a moving body, for example). This leads to writing the balance of momentum equation for incompressible flow (neglecting the forcing term) as: $$\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot \nabla)\boldsymbol{u} = -\nabla p + \frac{1}{\mathrm{Re}} \Delta \boldsymbol{u}$$ where $$\mathrm{Re} = \frac{\rho L U}{\mu}$$ is the Reynolds number, expressing the ratio of inertial to viscous effects. The higher the number, the bigger the influence of inertial effects. In the infinite Reynolds number limit, we recover the Euler equations. High Reynolds numbers flows tend to exhibit turbulence.
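A trivial worked example of the last formula (with illustrative values assumed here, not taken from the text): for water, with $\rho \approx 1000\ \mathrm{kg/m^3}$ and $\mu \approx 10^{-3}\ \mathrm{Pa\,s}$, flowing past a body of size $L = 0.1\ \mathrm{m}$ at $U = 2\ \mathrm{m/s}$, the Reynolds number is of order $10^5$, so inertial effects dominate.

```python
def reynolds_number(rho, L, U, mu):
    """Re = rho * L * U / mu: the ratio of inertial to viscous effects."""
    return rho * L * U / mu

# illustrative values: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa s), L = 0.1 m, U = 2 m/s
print(reynolds_number(1000.0, 0.1, 2.0, 1.0e-3))   # roughly 2e5: inertia-dominated flow
```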
## Prerequisites to Navier-Stokes
Phys: Newtonian Mechanics; Classical Mechanics; Continuum Mechanics; ...
Math: Partial Differential Equations (PDE); ...
## Recommended books
Batchelor, G.K., An introduction to fluid dynamics, Cambridge University Press (1967)
Chorin, A.J. and Marsden, J.E., A mathematical introduction to fluid mechanics, Springer (1993)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8857804536819458, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/90558/is-prod-mathbbr-mathbbr-mathbbr-mathbbr?answertab=active
|
# Is $\prod_{\mathbb{R}}\mathbb{R} = \mathbb{R}^\mathbb{R}$?
(If the title is unclear, I'm looking at infinite cartesian product of $\mathbb{R}$ indexed by $\mathbb{R}$.)
I thought that I had reasoned this rather well, as follows:
$\mathbb{R}^\mathbb{R} = \{f\mid f:\mathbb{R}\rightarrow\mathbb{R}\}$. Note that this includes functions whose range is not $\mathbb{R}$, $\mathbb{R}$ here is just the codomain. Now consider an element $\textbf{x} \in \prod_{\mathbb{R}}\mathbb{R}$. This is an "uncountably infinite sequence," so to speak, so it does represent a function, where the index indicates the $x$ coordinate and the value at that index indicates the $y$ coordinate. (Consider a constant $x_i = 1 \;\forall\; i\in\mathbb{R}$, a straight line.) But this does not account for functions with vertical asymptotes, or functions which are not surjective (e.g. $f(x) = x^2$), so we conclude that $\prod_{\mathbb{R}}\mathbb{R} \subsetneq \mathbb{R}^\mathbb{R}$.
If we union in $\{+\infty, -\infty\}$, then we can get functions with vertical asymptotes, and if we allow coordinates $x_i = \emptyset$, then we get surjective functions as well. (If that even makes sense semantically).
This seems to make sense to me, and I find it a satisfying answer. However, I was reading this wikipedia article which states that:
An uncountable product of metric spaces need not be metrizable. For example, $\mathbb{R}^\mathbb{R}$ is not first-countable and thus isn't metrizable.
Which implicitly states that $\prod_{\mathbb{R}}\mathbb{R} = \mathbb{R}^\mathbb{R}$. So what am I missing? I feel like it must be something rather obvious.
-
Why would $\prod_\mathbb R\mathbb R$ be metrizable? – Asaf Karagila Dec 11 '11 at 20:15
2
"But this does not account for functions with vertical asymptotes, or functions which are not surjective." Why? (Neither construction accounts for functions with vertical asymptotes, and both account for functions which are not surjective.) – Qiaochu Yuan Dec 11 '11 at 20:15
5
It seems the problem may be in your definition of uncountable products. By definition, $\prod_\mathbb{R}\mathbb{R}=\mathbb{R}^\mathbb{R}$. – M Turgeon Dec 11 '11 at 20:18
1
You do understand that $X^Y$ means "all functions with domain $Y$ and image contained in $X$", right? It does not mean "all function with domain contained in $Y$ and image $X$", which seems to be what you are imagining in your argument. – Arturo Magidin Dec 11 '11 at 20:22
Something Qiaochu alluded to: functions with vertical asymptotes are not functions on $\mathbb{R}$. The domain is strictly smaller. – M Turgeon Dec 11 '11 at 20:22
## 1 Answer
Your argument about the product starts well, but then it goes off the rails. Why do you say that "it doesn't account for functions that are not surjective"? Or "functions with vertical asymptotes"? It certainly doesn't include functions that are not defined on all of $\mathbb{R}$, but then neither does $\mathbb{R}^{\mathbb{R}}$. As for surjectivity, nothing in the definition implies surjectivity: the "constant tuple" that is $1$ everywhere is an element of $\prod \mathbb{R}$; what makes you think this is surjective?
Remember the very definition of a direct product: if $\{X_i\}_{i\in I}$ is a family of sets, then $$\prod_{i\in I}X_i = \Bigl\{ f\colon I\to \cup X_i \,\Bigm|\, f(i)\in X_i\text{ for each }i\in I\Bigr\}.$$ And if $X$ and $Y$ are sets, then $$X^Y = \{f\colon Y\to X\}$$ that is, all functions with domain all of $Y$ and with $f(y)\in X$ for all $y\in Y$.
So $$\prod_{r\in\mathbb{R}}\mathbb{R} = \Bigl\{ f\colon \mathbb{R}\to\mathbb{R}\,\Bigm|\, f(i)\in\mathbb{R}\text{ for each }i\in\mathbb{R}\Bigr\} = \mathbb{R}^{\mathbb{R}}.$$
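A finite toy version of this identification may help (an illustration added here, not part of the answer): for finite sets, an element of $\prod_{y\in Y}X$ is a choice of one element of $X$ for every index $y\in Y$, which is exactly the same data as a function $Y\to X$, and both collections have $|X|^{|Y|}$ elements.

```python
from itertools import product

X = (0, 1)            # the "factor"
Y = ('a', 'b', 'c')   # the index set

# an element of prod_{y in Y} X: one entry from X for every index in Y ...
tuples = list(product(X, repeat=len(Y)))
# ... which is the same data as a function Y -> X (encoded here as a dict):
functions = [dict(zip(Y, t)) for t in tuples]

print(len(tuples), len(functions), len(X) ** len(Y))   # 8 8 8
print(functions[3])                                    # {'a': 0, 'b': 1, 'c': 1}
```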
-
2
What is it that he should "also remember?" – analysisj Dec 11 '11 at 20:26
3
"Also, Arturo, remember to proof-read and delete partial sentences that you began and then decided would be better elsewhere..." – Arturo Magidin Dec 11 '11 at 20:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9487894177436829, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/18572-stability-solution.html
|
# Thread:
1. ## Stability of a solution
Dear all,
I have a question on the stability of a solution.
Let a(t), b(t) and c(t) be continuous functions of t over the interval $[0,\infty)$. Assume (x,y) = $(\phi(t), \psi(t))$ is a solution of the system
$\dot{x} = -a^2(t)y + b(t), \dot{y} = a^2(t)x + c(t)$
Show that this solution is stable.
I rearranged the system to get
$\frac{d}{dt}\binom{x}{y} = \binom{-a^2(t)y + b(t)}{a^2(t)x + c(t)} = \left( \begin{array}{cc} 0 & -a^2(t) \\ a^2(t) & 0 \end{array} \right)\binom{x}{y} + \binom{b(t)}{c(t)}$
I've only dealt with constant coefficient linear systems before, so I'm having trouble with this question.
Let $A = \left( \begin{array}{cc} 0 & -a^2(t) \\ a^2(t) & 0 \end{array} \right)$.
I'm not sure if I can treat A like a constant matrix, i.e. take the determinant of A to be $a^4(t)$ and trace(A) = 0 by treating t as constant. Also, what role does the $\binom{b(t)}{c(t)}$ play here?
Thank you.
Regards,
Rayne
2. You are attempting to use Lyapunov's theorem. Let's find the stationary solution. Solve $0= -a^2(t)y + b(t), 0= a^2(t)x + c(t)$ to get $y_0(t)= b(t)/a^2(t), \ x_0(t)= -c(t)/a^2(t)$. Goes without saying that $a(t)\neq 0$, or we get a trivial system.
Now let's rename: $\frac{d}{dt}\binom{x}{y} = \binom{f_1(x,y)}{f_2(x,y)}$. By Lyapunov's theorem, if all eigenvalues of the Jacobian matrix $\binom{\frac{\partial f_1}{\partial x} \ \ \frac{\partial f_1}{\partial y}}{\frac{\partial f_2}{\partial x} \ \ \frac{\partial f_2}{\partial y}}$ evaluated at $(x_0(t),y_0(t))$ have non-positive real parts, the solution is stable. But this is just as you say $\left( \begin{array}{cc} 0 & -a^2(t) \\ a^2(t) & 0 \end{array} \right)$, so the characteristic polynomial is $\lambda^2+a^4(t)\Rightarrow Re(\lambda)=0$.
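As a quick check of the last step (an illustration added here, not part of the reply): freezing $t$ and treating $a(t)$ as a constant $a$, the coefficient matrix has characteristic polynomial $\lambda^2 + a^4$ and purely imaginary eigenvalues $\pm i a^2$, so the real parts are indeed zero.

```python
import sympy as sp

a = sp.symbols('a', positive=True)   # stands for a(t) at a frozen time t
lam = sp.symbols('lam')

A = sp.Matrix([[0, -a**2],
               [a**2, 0]])

print(A.charpoly(lam).as_expr())   # the characteristic polynomial lam**2 + a**4
print(A.eigenvals())               # eigenvalues +/- I*a**2, purely imaginary
```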
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9235123991966248, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/152528/does-weak-convergence-in-sobolev-spaces-imply-pointwise-convergence?answertab=oldest
|
# Does weak convergence in Sobolev spaces imply pointwise convergence?
I encounter a problem when reading Struwe's book Variational Methods (4th ed). On page 38, it is assumed that $\|u_m\|$ is a minimizing sequence for a functional $E$, i.e. $E(u_m)\rightharpoonup I$ in $L^p(\mathbb{R}^n)$,
and then it assume in addition that
$u_m\rightharpoonup u$ weakly in $H^{1,2}(\mathbb{R}^n)$ and pointwise almost everywhere.
My question is
why the pointwise convergence assumption is reasonable? Since $\mathbb R^n$ is not compact, the embedding theorem is not obviously valid.
Thanks in advance.
-
## 1 Answer
For sufficiently small $p$ (more precisely: $p<2n/(n-2)$ for $n\ge 3$ or $p$ arbitrary otherwise) the space $H^{1,2}(\Omega)$ is compactly embedded in $L^p$ for $\Omega \subset\subset \mathbb{R}^n$ with sufficiently regular boundary (take balls of increasing radius tending to infinity). This implies strong $L^p$ convergence of a subsequence, hence pointwise a.e, on each such $\Omega$, hence a.e.
(If $u_k$ converges pointwise a.e. on each open set with compact closure it obviously converges pointwise almost everywhere. You may need to pass to further subsequences countably many times to make this work, but who cares?).
-
Minor point: the boundary of $\Omega$ should not be too weird for embedding to be valid. Lipschitz boundary is enough, and of course we can exhaust $\mathbb R^n$ by balls. – user31373 Jun 1 '12 at 17:38
@LeonidKovalev that's correct, thanks for pointing that out. Of course one would take balls of increasing integer radius, say. Sloppy me. – user20266 Jun 1 '12 at 18:16
@Thomas, minor nitpick: If I'm not mistaken, you get $u_m\to u$ strongly in $L^p$, and then a subsequence converges a.e. (Actually you could just take $p=2$.) – Hendrik Vogt Jun 2 '12 at 15:52
@HendrikVogt you have to take subsequences all the time, that's correct, also in the step mentioned by you. And yes, quite obviously $2 < 2n/(n-2)$ for $n\ge3$. Again, sloppy me :-) – user20266 Jun 2 '12 at 16:36
@Thomas: Well, in the first step you don't need a subsequence, that was my point. – Hendrik Vogt Jun 2 '12 at 17:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354678392410278, "perplexity_flag": "middle"}
|
http://gilkalai.wordpress.com/2011/02/21/the-ac0-prime-number-conjecture/?like=1&source=post_flair&_wpnonce=1c947633de
|
Gil Kalai’s blog
## The AC0 Prime Number Conjecture
Posted on February 21, 2011
## Möbius randomness and computational complexity
Last spring Peter Sarnak gave a thought-provoking lecture in Jerusalem. (Here are the very interesting slides of a similar lecture at I.A.S.)
Here is a variation of the type of questions Peter has raised.
The $AC_0$ Prime Number Conjecture: The correlation between the Möbius function and any function $f$ from the natural numbers to $\{-1,1\}$, such that $f$ can be described by a bounded-depth polynomial-size circuit, tends to zero!
Updates: (March) The conjecture made it to major league blogs as a very interesting blog post describing the conjecture appeared on Dick Lipton’s blog, and led to some interesting discussion. I posted two related Mathoverflow problems. The first
is about the discrete Fourier coefficients of the Mobius function
and the second
is about Walsh-Fourier coefficients of the Mobius function.
Ben Green has posted an answer to my Mathoverflow question indicating an affirmative solution of the $AC^0$ Prime Number Conjecture. The proof is based on combining the Linial-Mansour-Nisan theorem with results and techniques by Glyn Harman and Imre Kátai (from the paper Primes with preassigned digits. II, Acta Arith. 133 (2008), 171–184).
Ben has written some rough notes on this and a short paper giving the solution here. (An earlier note by Ben assumed GRH; the full paper contains the extra work needed to prove it unconditionally.) This is a very nice result. Ben is also optimistic that showing that all Walsh-Fourier functions are orthogonal to Mobius (which is a more general result) is within reach by combining the above result with results by Mauduit-Rivat.
Of course, the next logical step is the ACC[2]-Prime Number Conjecture. This includes as a very special case the question of whether all Walsh-Fourier functions are asymptotically orthogonal to the Mobius function. The “Walsh-Fourier” functions are high-degree monomials over the reals, but they can be considered as linear functions over Z/2Z; for that, replace the values {-1,+1} by {0,1} both in the domain and range of our Boolean functions. What about low-degree polynomials instead of linear polynomials? If we can extend the results to polynomials over Z/2Z of degree at most polylog(n), this will imply, by a result of Razborov, Mobius randomness for AC0[2] circuits. (This is interesting also under GRH.)
(April) Jean Bourgain proved (private communication) that for every Walsh function $W_S$ we have $\sum_{m=1}^{X}\mu(m) W_S(m) \le X \cdot e^{-(\log X)^{1/10}}.$ In other words, $\hat \mu(S) \le e^{-(\log X)^{1/10}}.$
Jean also showed that under GRH $\sum_{m=1}^X\mu(m)W_S(m) \le X^{1-(c/(\log\log X)^2)}.$ In other words, $\hat \mu(S) \le X^{-(c/\log \log X)^2}.$ This result suffices to show under GRH the “monotone prime number conjecture.”
While the questions about polylog degree polynomials over Z/2Z and about AC0(2) circuits are still open, Jean Bourgain was able to prove Mobius randomness for AC0(2) circuits of certain sublinear size.
Jean also noted that showing that the Möbius function itself is non-approximable by an AC0[2] circuit (namely, that you cannot reach correlation of, say, 0.99) can easily be derived from the Razborov-Smolensky theorem, since an easy computation shows that $\mu(3x)^2$ has correlation >0.8 with the 0 (mod 3) function. So we have a simple argument showing that (for $n$ large) an ACC(2) function (or even a polylog degree polynomial) cannot have correlation larger than 0.99 with the Möbius function, but showing that it cannot have correlation larger than 0.01 with the Möbius function seems at present very difficult. (More update: as it turned out, this last observation can be found already in Allender-Saks-Shparlinski’s paper.)
Let me state the conjecture more precisely. Let $f(n)$ be a function from the natural numbers to {-1,+1}. Let $\mu$ be the classical Möbius function. Namely, $\mu(n)$ is 0 unless $n$ is a square-free integer and $\mu (n)=(-1)^r$ if $n$ is the product of $r$ distinct primes.
Consider the sum $\mu(f) = \sum_{i=1}^N f(i)\mu(i)$.
Conjecture: For fixed $d$ and $c$ there is a function $u(n;d,c)$ such that $u(n;d,c)/2^n$ tends to zero as $n$ tends to infinity, and such that the following assertion holds:
Consider a function $f$ from $\{1,2,\dots,2^n-1\}$ to {-1,1}, which can be described by a depth-$d$ size $n^c$ Boolean circuit C. (I.e., $f(i)$ is defined by applying the circuit to the binary digits of $i$.)
Then $\mu(f) \le u(n;d,c)$.
When $f$ is the constant one function this is just the prime number theorem. Trying to extend the prime number theorem via functions defined by low complexity classes seems interesting and within reach.
This conjecture looks to me as a good topic for open discussion which may benefit from casual chat between people familiar with Boolean functions in $AC_0$ and people familiar with standard methods related to the prime number theorem.
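As a purely numerical illustration of the sum $\mu(f)=\sum_{i\le N} f(i)\mu(i)$ above (it proves nothing, and the choice of $f$ and all function names below are my own), here is a minimal Python sketch with $f$ taken to be the AND of the two lowest binary digits of $i$, a depth-1 circuit:

```python
# Watch the normalized correlation (1/N) * sum_{i<=N} mu(i) * f(i) for a toy
# bounded-depth f; it comes out small and tends to shrink as N grows.

def mobius_sieve(n):
    """Return a list mu[0..n] of Moebius function values via a simple sieve."""
    mu = [1] * (n + 1)
    prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if prime[p]:
            for k in range(p, n + 1, p):
                if k > p:
                    prime[k] = False
                mu[k] *= -1
            for k in range(p * p, n + 1, p * p):
                mu[k] = 0
    mu[0] = 0
    return mu

def f(i):
    """Toy AC0-style function: AND of the two lowest bits of i, valued in {-1,+1}."""
    return 1 if (i & 1) and (i & 2) else -1

for N in (10**3, 10**4, 10**5):
    mu = mobius_sieve(N)
    print(N, sum(mu[i] * f(i) for i in range(1, N + 1)) / N)
```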
## Mobius randomness and the class P.
Peter’s lecture started (see the picture above) with the analogous much more difficult question involving the complexity class P. (In fact, he challenged the audience to find a counterexample.) Finding a function in P with bounded-away-from-zero correlation with the Möbius function is in the direction of a polynomial algorithm for factoring – a problem which is entirely beyond our horizon. Peter then moved to describe other ways to study “Möbius randomness” which are based not on computational complexity but rather on certain notions of randomness (or, more precisely, of determinism) that arise in ergodic theory.
## A reminder about bounded depth Boolean circuits
### 1. Boolean functions
A Boolean function of n variables is simply a function where the variables take the values +1 and -1 and the value of the function itself is also either +1 or -1.
### 2. Boolean circuits
A Boolean circuit is a gadget that computes Boolean functions. It is built from inputs, gates and an output. We can think about these circuits as follows: On level 0 there are the variables. On level 1 there are gates acting on the variables. On level 2 there are gates acting on the outputs of the gates on level 1. And on level $d$ there is a single gate leading to the output of the circuit.
The depth of the circuit is this number $d$. The size of the circuit is the total number of gates. The gates perform Boolean operations: They can take an input bit and negate it. They can take several input bits and compute their OR – this means that the output will be '1' iff one of them is '1'. They can take several input bits and compute their AND – this means that the output will be '-1' iff one of them is '-1'.
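To make the ±1 convention just described concrete, here is a minimal sketch of evaluating a small bounded-depth circuit; the gate helpers and the particular depth-2 circuit are my own illustrative choices, not taken from the post.

```python
# Minimal sketch of the +/-1 gate convention above: AND outputs -1 iff some
# input is -1, OR outputs +1 iff some input is +1.  The depth-2 circuit below
# (an OR of two ANDs) is an arbitrary illustrative example.
from itertools import product

def AND(*bits):   # -1 iff at least one input is -1
    return -1 if -1 in bits else 1

def OR(*bits):    # +1 iff at least one input is +1
    return 1 if 1 in bits else -1

def NOT(b):
    return -b

def circuit(x1, x2, x3):
    """Depth-2 circuit: an OR of two ANDs."""
    return OR(AND(x1, x2), AND(NOT(x2), x3))

for x in product((-1, 1), repeat=3):
    print(x, circuit(*x))
```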
### 3. $AC_0$
$AC_0$ is the class of “bounded depth polynomial size circuits”. Namely, these are classes of circuits for which the depth $d$ is bounded by a constant and the size is bounded by a fixed polynomial $n^c$ of the number of variables. Much is known about Boolean functions described by such circuits.
This entry was posted in Algebra and Number Theory, Computer Science and Optimization and tagged Analytic number theory, Bounded depth circuits, Circuit complexity, Peter Sarnak, Prime number theorem. Bookmark the permalink.
### 13 Responses to The AC0 Prime Number Conjecture
1. reference says:
Allender Saks Shparlinski: detecting squarefree numbers is hard for AC0
• Gil Kalai says:
Here is the link http://web.science.mq.edu.au/~igor/SF_ACo-STACS.ps Indeed it looks highly relevant.
2. Kea says:
Well, remember that the Mobius function is a fermionic operator for states obeying the Pauli exclusion principle. Without a Mobius numerator in a zeta function series, we have the ordinary (bosonic gas) Riemann zeta function. So the Mobius operator is a basic supersymmetry transformation for zeta functions. In contrast, you mention characteristic functions, for a topos like Set, which obeys classical logic. Is the Mobius inversion of f bounded as required? From a physics perspective this is an axiomatic question for quantum gravity, and unlikely to be solved within set theory. It would be very, very interesting if it was!
• Prof. George Purdy says:
Dear Kea,
(Is this a New Zealand bird?)
Would you please elaborate? Does the Mobius function occur in quantum theory? I suppose one could look this up. I like your question about whether Summation mu(k)f(k) is bounded.
George
• Prof. George Purdy says:
I’m sorry. I meant your question as to whether the Mobius inversion of f is bounded. Of course Sigma mu(k)f(k) must be Omega(sqrt(x)).
George
• Kea says:
The early papers on this topic are from around 1990. Today, Connes et al use noncommutative geometry (which is supposed to reformulate QFT) to study zeta functions. The Riemann gas is itself simple enough, and defines $\zeta$ as a physical partition function. This is where the Mobius function must be inserted to make the physics fermionic, rather than bosonic.
Now Connes et al take partition functions straight from standard 20th century physics, with no modification. I believe that physical ideas from quantum gravity are needed to understand how these zeta functions can arise from motivic cohomology (and generalizations). One aspect of this is something that I think of as Constructive Number Theory. Since the right language for gravity is higher dimensional topos theory (not necessarily a la Lurie), one cannot just take the reals or complex numbers as given. One must carefully choose their axioms in terms of physical operators using non standard analysis, say through introducing the surreals.
In the end, the use of the Mobius function in physics remains as stated, because there is supersymmetry between fermions and bosons (not the stringy kind though). The problem is the physical construction of the complex numbers, which takes us outside set theory. In other words, I don’t believe that the questions here are that well posed.
3. Pingback: The Depth Of The Möbius Function « Gödel’s Lost Letter and P=NP
4. John Sidles says:
Gil please let me say that I read, admire, and enjoy Combinatorics and More very much. Quite often (as with Möbius functions) the subject matter is too deep for me to see any immediate engineering applications … still I store these ideas away, because they are beautiful, and in the confident expectation that someday their utility will be manifest. So thank you for the many wonderfully creative posts on this wonderful weblog.
• Gil Kalai says:
Many thanks, John
5. Pingback: The Collatz Conjecture : Unsolved but Useless | Gaurav Happy Tiwari
6. Pingback: The Bourne Factor « Gödel’s Lost Letter and P=NP
7. Pingback: Do Gaps Between Primes Affect RSA Keys? « Gödel’s Lost Letter and P=NP
8. Pingback: Efficiently computable function as a counter-example to Sarnak's Mobius conjecture | Q&A System
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8888372778892517, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/4618/how-do-i-estimate-the-parameters-of-an-maq-process/4620
|
# How do I estimate the parameters of an MA(q) process?
It is relatively easy to estimate the parameters of an autoregressive $AR(p)$ process. How do I do it for a moving average $MA(q)$ process?
-
1
Why the vote to close as off-topic? Time series analysis is very important in stat-arb/HFT trading. – quant_dev Nov 25 '12 at 14:02
In my opinion, it is too elementary to be on topic. Every textbook on time series analysis covers this. – Ryogi Nov 26 '12 at 20:42
## 1 Answer
Estimating $MA(q)$ models is significantly harder than $AR(p)$ models. Eviews, MATLAB and R can use multiple algorithms which are all based on some form of maximum likelihood estimation. You can look at the source of MATLAB and R or the excellent Eviews documentation.
However, I strongly advise against rolling your own since efficient and well tested algorithms are widely available.
For the interested, this paper describes the method (with code) used by the R arima package. You can see from the abstract that the method is quite complicated.
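As a hedged illustration of the "use a mature library" advice (this uses Python's statsmodels rather than the R arima package mentioned above, and the MA(2) coefficients are arbitrary made-up values): a pure MA(q) can be fit as an ARIMA(0,0,q) by maximum likelihood.

```python
# Simulate an MA(2) process and recover its parameters by maximum likelihood.
# Requires numpy and statsmodels (>= 0.12 for statsmodels.tsa.arima.model).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n, theta = 5000, np.array([0.6, -0.3])                      # true MA(2) coefficients
eps = rng.standard_normal(n + 2)
x = eps[2:] + theta[0] * eps[1:-1] + theta[1] * eps[:-2]    # x_t = e_t + th1*e_{t-1} + th2*e_{t-2}

fit = ARIMA(x, order=(0, 0, 2)).fit()   # ARIMA(0,0,q) is a pure MA(q); MLE fit
print(fit.params)                       # const, ma.L1, ma.L2, sigma2
```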
-
Thanks. I know mature libraries are the way to go in production applications, but I am simply interested in this problem. – quant_dev Nov 25 '12 at 14:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191123843193054, "perplexity_flag": "middle"}
|
http://avatarsearch.com/work
|
# Work Research Materials
"Mechanical work" redirects here. For other uses of "Work" in physics, see Work (electrical) and Work (thermodynamics).
Work
A baseball pitcher does positive work on the ball by applying a force to it over the distance it moves while in his grip.
Common symbol(s): W
SI unit: joule (J)
Derivations from other quantities: W = F · d; W = τ θ
In physics, a force is said to do work when it acts on a body so that there is a displacement of the point of application, however small, in the direction of the force. Thus a force does work when it results in movement.[1]
The term work was introduced in 1826 by the French mathematician Gaspard-Gustave Coriolis[2][3] as "weight lifted through a height", which is based on the use of early steam engines to lift buckets of water out of flooded ore mines. The SI unit of work is the newton-metre or joule (J).
The work done by a constant force of magnitude F on a point that moves a distance d in the direction of the force is the product,
$W = Fd.$
For example, if a force of 10 newtons (F = 10 N) acts on a point that travels 2 metres (d = 2 m), then it does the work W = (10 N)(2 m) = 20 N m = 20 J. This is approximately the work done lifting a 1 kg weight from ground level to over a person's head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance.
## Units
The SI unit of work is the joule (J), which is defined as the work expended by a force of one newton through a distance of one metre.
The dimensionally equivalent newton-metre (N·m) is sometimes used as the measuring unit for work, but this can be confused with the unit newton-metre, which is the measurement unit of torque. Usage of N·m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton metres is a torque measurement, or a measurement of energy.[4]
Non-SI units of work include the erg, the foot-pound, the foot-poundal, the kilowatt hour, the litre-atmosphere, and the horsepower-hour. Due to work having the same physical dimension as heat, occasionally measurement units typically reserved for heat or energy content, such as therm, BTU and Calorie, are utilized as a measuring unit.
## Work and energy
Work is closely related to energy. The conservation of energy states that the change in total internal energy of a system equals the added heat, minus the work performed by the system (see the first law of thermodynamics),
$\delta E = \delta Q - \delta W.$
Also, from Newton's second law for rigid bodies it can be shown that work on an object is equal to the change in kinetic energy of that object,
$W = \Delta KE$.
The work of forces generated by a potential function is known as potential energy and the forces are said to be conservative. Therefore work on an object moving in a conservative force field is equal to minus the change of potential energy of the object,
$W = -\Delta PE.$
These formulas demonstrate that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy.
The work/energy principles discussed here are identical to Electric work/energy principles.
## Constraint forces
Constraint forces determine the movement of components in a system, constraining the object within a boundary (in the case of a slope plus gravity, the object is stuck to the slope; when attached to a taut string it cannot move in an outwards direction to make the string any 'tauter'). They eliminate all movement in the direction of the constraint; thus constraint forces do not perform work on the system, as the velocity of the object in the direction of the constraint force is zero.
For example, the centripetal force exerted inwards by a string on a ball in uniform circular motion sideways constrains the ball to circular motion restricting its movement away from the center of the circle. This force does zero work because it is perpendicular to the velocity of the ball.
Another example is a book on a table. If external forces are applied to the book so that it slides on the table, then the force exerted by the table constrains the book from moving downwards. The force exerted by the table supports the book and is perpendicular to its movement which means that this constraint force does not perform work.
The magnetic force on a charged particle is F = qv × B, where q is the charge, v is the velocity of the particle, and B is the magnetic field. The result of a cross product is always perpendicular to both of the original vectors, so F ⊥ v. The dot product of two perpendicular vectors is always zero, so the work W = F · v = 0, and the magnetic force does not do work. It can change the direction of motion but never change the speed.
## Mathematical calculation
For moving objects, the quantity of work/time is calculated as distance/time, or velocity. Thus, at any instant, the rate of the work done by a force (measured in joules/second, or watts) is the scalar product of the force (a vector), and the velocity vector of the point of application. This scalar product of force and velocity is classified as instantaneous power. Just as velocities may be integrated over time to obtain a total distance, by the fundamental theorem of calculus, the total work along a path is similarly the time-integral of instantaneous power applied along the trajectory of the point of application.[5]
Work is the result of a force on a point that moves through a distance. As the point moves, it follows a curve X, with a velocity v, at each instant. The small amount of work δW that occurs over an instant of time δt is calculated as
$\delta W = \mathbf{F}\cdot\mathbf{v}\delta t,$
where F · v is the power over the instant δt. The sum of these small amounts of work over the trajectory of the point yields the work,
$W = \int_{t_1}^{t_2}\mathbf{F} \cdot \mathbf{v}dt = \int_{t_1}^{t_2}\mathbf{F} \cdot {\tfrac{d\mathbf{x}}{dt}}dt =\int_C \mathbf{F} \cdot d\mathbf{x},$
where C is the trajectory from x(t1) to x(t2). This integral is computed along the trajectory of the particle, and is therefore said to be path dependent.
If the force is always directed along this line, and the magnitude of the force is F, then this integral simplifies to
$W = \int_C Fds$
where s is distance along the line. If F is constant, in addition to being directed along the line, then the integral simplifies further to
$W = \int_C Fds = F\int_C ds = Fd$
where d is the distance travelled by the point along the line.
This calculation can be generalized for a constant force that is not directed along the line, followed by the particle. In this case the dot product F·dx = Fcosθdx, where θ is the angle between the force vector and the direction of movement,[5] that is
$W = \int_C \mathbf{F} \cdot d\mathbf{x} = Fd\cos\theta.$
In the notable case of a force applied to a body always at an angle of 90 degrees from the velocity vector (as when a body moves in a circle under a central force), no work is done at all, since the cosine of 90 degrees is zero. Thus, no work can be performed by gravity on a planet with a circular orbit (this is ideal, as all orbits are slightly elliptical). Also, no work is done on a body moving circularly at a constant speed while constrained by mechanical force, such as moving at constant speed in a friction-less ideal centrifuge.
Calculating the work as "force times straight path segment" would only apply in the most simple of circumstances, as noted above. If force is changing, or if the body is moving along a curved path, possibly rotating and not necessarily rigid, then only the path of the application point of the force is relevant for the work done, and only the component of the force parallel to the application point velocity is doing work (positive work when in the same direction, and negative when in the opposite direction of the velocity). This component of force can be described by the scalar quantity called scalar tangential component ($\scriptstyle F\cos\theta$, where $\scriptstyle \theta$ is the angle between the force and the velocity). And then the most general definition of work can be formulated as follows:
Work of a force is the line integral of its scalar tangential component along the path of its application point.
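A small numerical sketch of the path dependence mentioned above (my own example, not from the article): the non-conservative force F = (−y, x, 0) does different work along two different paths joining (1, 0, 0) to (−1, 0, 0), with the work computed as the time integral of F · v.

```python
# Work as the time integral of F . v along a parametrized path, evaluated by a
# simple trapezoidal rule.  The force F = (-y, x, 0) is not conservative, so the
# two semicircular paths below give different answers (+pi and -pi).
import numpy as np

def work(F, path, t0, t1, n=20001):
    t = np.linspace(t0, t1, n)
    x = np.array([path(s) for s in t])              # points on the trajectory
    v = np.gradient(x, t, axis=0)                   # velocity by finite differences
    integrand = np.sum(np.array([F(p) for p in x]) * v, axis=1)   # F . v at each time
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

F = lambda p: np.array([-p[1], p[0], 0.0])
upper = lambda s: np.array([np.cos(s),  np.sin(s), 0.0])   # semicircle through y > 0
lower = lambda s: np.array([np.cos(s), -np.sin(s), 0.0])   # semicircle through y < 0

print(work(F, upper, 0.0, np.pi))   # ~ +3.1416
print(work(F, lower, 0.0, np.pi))   # ~ -3.1416
```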
### Torque and rotation
A torque results from equal, and opposite forces, acting on two different points of a rigid body. The sum of these forces cancel, but their effect on the body is the torque T. The work of the torque is calculated as
$\delta W = \mathbf{T}\cdot\vec{\omega}\delta t,$
where T · ω is the power over the instant δt. The sum of these small amounts of work over the trajectory of the rigid body yields the work,
$W = \int_{t_1}^{t_2}\mathbf{T}\cdot\vec{\omega}dt.$
This integral is computed along the trajectory of the rigid body with an angular velocity ω that varies with time, and is therefore said to be path dependent.
If the angular velocity vector maintains a constant direction, then it takes the form,
$\vec{\omega}= \dot{\phi}\mathbf{S},$
where φ is the angle of rotation about the constant unit vector S. In this case, the work of the torque becomes,
$W = \int_{t_1}^{t_2}\mathbf{T}\cdot\vec{\omega}dt = \int_{t_1}^{t_2}\mathbf{T}\cdot \mathbf{S}\frac{d\phi}{dt}dt = \int_C\mathbf{T}\cdot \mathbf{S} d\phi,$
where C is the trajectory from φ(t1) to φ(t2). This integral depends on the rotational trajectory φ(t), and is therefore path-dependent.
If the torque T is aligned with the angular velocity vector so that,
$\mathbf{T}=\tau\mathbf{S},$
and both the torque and angular velocity are constant, then the work takes the form,[6]
$W = \int_{t_1}^{t_2}\tau \dot{\phi}dt = \tau(\phi_2-\phi_1).$
A force of constant magnitude and perpendicular to the lever arm
This result can be understood more simply by considering the torque as arising from a force of constant magnitude F, being applied perpendicularly to a lever arm at a distance r, as shown in the figure. This force will act through the distance along the circular arc s=rφ, so the work done is
$W=Fs = Fr\phi .$
Introduce the torque τ=Fr, to obtain
$W=Fr\phi=\tau\phi,$
as presented above.
Notice that only the component of torque in the direction of the angular velocity vector contributes to the work.
## Work and potential energy
The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C=x(t), defines the work input to the system by the force.
### Path dependence
Therefore, the work done by a force F on an object that travels along a curve C is given by the line integral:
$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{x} = \int_{t_1}^{t_2}\mathbf{F}\cdot \mathbf{v}dt,$
where x(t) defines the trajectory C and v is the velocity along this trajectory. In general this integral requires the path along which the velocity is defined, so the evaluation of work is said to be path dependent.
The time derivative of the integral for work yields the instantaneous power,
$\frac{dW}{dt} = P(t) = \mathbf{F}\cdot \mathbf{v} .$
### Path independence
If the work for an applied force is independent of the path, then the work done by the force is evaluated at the start and end of the trajectory of the point of application. This means that there is a function U (x), called a "potential," that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is
$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{x} = \int_{\mathbf{x}(t_1)}^{\mathbf{x}(t_2)} \mathbf{F} \cdot \mathrm{d}\mathbf{x} = U(\mathbf{x}(t_1))-U(\mathbf{x}(t_2)).$
The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces.
In this case, the partial derivative of work yields
$\frac{\partial W}{\partial \mathbf{x}} = -\frac{\partial U}{\partial \mathbf{x}} = -\big(\frac{\partial U}{\partial x}, \frac{\partial U}{\partial y}, \frac{\partial U}{\partial z}\big) = \mathbf{F},$
and the force F is said to be "derivable from a potential."[7]
Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is
$P(t) = -\frac{\partial U}{\partial \mathbf{x}} \cdot \mathbf{v} = \mathbf{F}\cdot\mathbf{v}.$
Examples of work that can be computed from potential functions are gravity and spring forces.
### Work by gravity
Gravity F=mg does work W=mgh along any descending path
Gravity exerts a constant downward force F = (0, 0, W) on the center of mass of a body moving near the surface of the earth. The work of gravity on a body moving along a trajectory X(t) = (x(t), y(t), z(t)), such as the track of a roller coaster, is calculated using its velocity, $\mathbf{v} = (v_x, v_y, v_z)$, to obtain
$W=\int_{t_1}^{t_2}\mathbf{F}\cdot\mathbf{v}dt = \int_{t_1}^{t_2}W v_z dt = W\Delta z.$
where the integral of the vertical component of velocity is the vertical distance. Notice that the work of gravity depends only on the vertical movement of the curve X(t).
Forces in springs assembled in parallel.
### Work by a spring
A horizontal spring exerts a force F = (kx, 0, 0) that is proportional to its deflection in the x direction. The work of this spring on a body moving along the space curve X(t) = (x(t), y(t), z(t)) is calculated using its velocity, $\mathbf{v} = (v_x, v_y, v_z)$, to obtain
$W=\int_0^t\mathbf{F}\cdot\mathbf{v}dt =\int_0^tkx v_x dt = \frac{1}{2}kx^2.$
For convenience, consider that contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, $x v_x$, is $(1/2)x^2$.
## Work-energy principle
The principle of work and kinetic energy (also known as the work-energy principle) states that the work done by all forces acting on a particle (the work of the resultant force) equals the change in the kinetic energy of the particle.[8] This definition can be extended to rigid bodies by defining the work of the resultant torque and rotational kinetic energy.
The work W done by the resultant force on a particle equals the change in the particle's kinetic energy $E_k$,[6]
$W=\Delta E_k=\tfrac12mv_2^2-\tfrac12mv_1^2$,
where $v_1$ and $v_2$ are the speeds of the particle before and after the change and m is its mass.
### Overview
The derivation of the work-energy principle begins with Newton's second law and the resultant force on a particle which includes forces applied to the particle and constraint forces imposed on its movement. Computation of the scalar product of the forces with the velocity of the particle evaluates the instantaneous power added to the system.[9]
Constraints define the direction of movement of the particle by ensuring there is no component of velocity in the direction of the constraint force. This also means the constraint forces do not add to the instantaneous power. The time integral of this scalar equation yields work from the instantaneous power, and kinetic energy from the scalar product of velocity and acceleration. The fact the work-energy principle eliminates the constraint forces underlies Lagrangian mechanics.[10]
This section focusses on the work-energy principle as it applies to particle dynamics. In more general systems work can change the potential energy of a mechanical device, the heat energy in a thermal system, or the electrical energy in an electrical device. Work transfers energy from one place to another or one form to another.
### Derivation for a particle moving along a straight line
In the case the resultant force F is constant in both magnitude and direction, and parallel to the velocity of the particle, the particle is moving with constant acceleration a along a straight line.[11] The relation between the net force and the acceleration is given by the equation F = ma (Newton's second law), and the particle displacement d can be expressed by the equation
$d = \frac{v_2^2 - v_1^2}{2a}$
which follows from $v_2^2 = v_1^2 + 2ad$ (see Equations of motion).
The work of the net force is calculated as the product of its magnitude and the particle displacement. Substituting the above equations, one obtains:
$W = Fd = mad = ma \left(\frac{v_2^2 - v_1^2}{2a}\right) = \frac{mv_2^2}{2} - \frac{mv_1^2}{2} = \Delta {E_k}$
In the general case of rectilinear motion, when the net force F is not constant in magnitude, but is constant in direction, and parallel to the velocity of the particle, the work must be integrated along the path of the particle:
$W = \int_{t_1}^{t_2} \mathbf{F}\cdot \mathbf{v}dt = \int_{t_1}^{t_2} F \,v dt = \int_{t_1}^{t_2} ma \,v dt = m \int_{t_1}^{t_2} v \,{dv \over dt}\,dt = m \int_{v_1}^{v_2} v\,dv = \tfrac12 m (v_2^2 - v_1^2) .$
### General derivation of the work-energy theorem for a particle
For any net force acting on a particle moving along any curvilinear path, it can be demonstrated that its work equals the change in the kinetic energy of the particle by a simple derivation analogous to the equation above. Some authors call this result work-energy principle, but it is more widely known as the work-energy theorem:
$W = \int_{t_1}^{t_2} \mathbf{F}\cdot \mathbf{v}dt = m \int_{t_1}^{t_2} \mathbf{a} \cdot \mathbf{v}dt = \frac{m}{2} \int_{t_1}^{t_2} \frac{d v^2}{dt}\,dt = \frac{m}{2} \int_{v^2_1}^{v^2_2} d v^2 = \frac{mv_2^2}{2} - \frac{mv_1^2}{2} = \Delta {E_k}$
The identity $\textstyle \mathbf{a} \cdot \mathbf{v} = \frac{1}{2} \frac{d v^2}{dt}$ requires some algebra. From the identity $\textstyle v^2 = \mathbf{v} \cdot \mathbf{v}$ and definition $\textstyle \mathbf{a} = \frac{d \mathbf{v}}{dt}$ it follows
$\frac{d v^2}{dt} = \frac{d (\mathbf{v} \cdot \mathbf{v})}{dt} = \frac{d \mathbf{v}}{dt} \cdot \mathbf{v} + \mathbf{v} \cdot \frac{d \mathbf{v}}{dt} = 2 \frac{d \mathbf{v}}{dt} \cdot \mathbf{v} = 2 \mathbf{a} \cdot \mathbf{v}$.
The remaining part of the above derivation is just simple calculus, same as in the preceding rectilinear case.
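A quick numerical sanity check of the work-energy theorem (my own toy numbers, assuming straight-line motion under a time-varying force): integrate the power F·v and compare with the change in kinetic energy.

```python
# Numerical check of W = Delta KE for 1-D motion under a time-varying force:
# m = 2 kg, F(t) = 3t N, starting from rest (both quantities come out ~144 J).
import numpy as np

m = 2.0
t = np.linspace(0.0, 4.0, 100001)
F = 3.0 * t                                            # applied force along x
a = F / m
v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))  # v(t)

power = F * v                                          # instantaneous power F * v
W = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))
dKE = 0.5 * m * (v[-1]**2 - v[0]**2)
print(W, dKE)        # agree up to discretization error
```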
### Derivation for a particle in constrained movement
In particle dynamics, a formula equating work applied to a system to its change in kinetic energy is obtained as a first integral of Newton's second law of motion. It is useful to notice that the resultant force used in Newton's laws can be separated into forces that are applied to the particle and forces imposed by constraints on the movement of the particle. Remarkably, the work of a constraint force is zero, therefore only the work of the applied forces need be considered in the work-energy principle.
To see this, consider a particle P that follows the trajectory X(t) with a force F acting on it. Isolate the particle from its environment to expose constraint forces R, then Newton's Law takes the form
$\mathbf{F} + \mathbf{R} =m\ddot{\mathbf{X}},$
where m is the mass of the particle.
#### Vector formulation
Note that n dots above a vector indicates its nth time derivative. The scalar product of each side of Newton's law with the velocity vector yields
$\mathbf{F}\cdot\dot{\mathbf{X}} = m\ddot{\mathbf{X}}\cdot\dot{\mathbf{X}},$
because the constraint forces are perpendicular to the particle velocity. Integrate this equation along its trajectory from the point X(t1) to the point X(t2) to obtain
$\int_{t_1}^{t_2} \mathbf{F}\cdot\dot{\mathbf{X}} dt = m\int_{t_1}^{t_2}\ddot{\mathbf{X}}\cdot\dot{\mathbf{X}}dt.$
The left side of this equation is the work of the applied force as it acts on the particle along the trajectory from time t1 to time t2. This can also be written as
$W = \int_{t_1}^{t_2} \mathbf{F}\cdot\dot{\mathbf{X}} dt = \int_{\mathbf{X}(t_1)}^{\mathbf{X}(t_2)} \mathbf{F}\cdot d\mathbf{X}.$
This integral is computed along the trajectory X(t) of the particle and is therefore path dependent.
The right side of the first integral of Newton's equations can be simplified using the following identity
$\frac{1}{2}\frac{d}{dt}(\dot{\mathbf{X}}\cdot \dot{\mathbf{X}}) = \ddot{\mathbf{X}}\cdot\dot{\mathbf{X}},$
(see product rule for derivation). Now it is integrated explicitly to obtain the change in kinetic energy,
$\Delta K = m\int_{t_1}^{t_2}\ddot{\mathbf{X}}\cdot\dot{\mathbf{X}}dt = \frac{m}{2}\int_{t_1}^{t_2}\frac{d}{dt}(\dot{\mathbf{X}}\cdot \dot{\mathbf{X}}) dt = \frac{m}{2}\dot{\mathbf{X}}\cdot \dot{\mathbf{X}}(t_2) - \frac{m}{2}\dot{\mathbf{X}}\cdot \dot{\mathbf{X}} (t_1) = \frac{1}{2}m \Delta \mathbf{v^{2}} ,$
where the kinetic energy of the particle is defined by the scalar quantity,
$K = \frac{m}{2}\dot{\mathbf{X}}\cdot \dot{\mathbf{X}} =\frac{1}{2}m{\mathbf{v^{2}}}$
#### Tangential and normal components
It is useful to resolve the velocity and acceleration vectors into tangential and normal components along the trajectory X(t), such that
$\dot{\mathbf{X}}=v \mathbf{T},\quad\mbox{and}\quad \ddot{\mathbf{X}}=\dot{v}\mathbf{T} + v^2\kappa \mathbf{N}.$
where
$v=|\dot{\mathbf{X}}|=\sqrt{\dot{\mathbf{X}}\cdot\dot{\mathbf{X}}}.$
Then, the scalar product of velocity with acceleration in Newton's second law takes the form
$\Delta K = m\int_{t_1}^{t_2}\dot{v}v\,dt = \frac{m}{2}\int_{t_1}^{t_2}\frac{d}{dt}v^2 dt = \frac{m}{2}v^2(t_2) - \frac{m}{2}v^2(t_1),$
where the kinetic energy of the particle is defined by the scalar quantity,
$K = \frac{m}{2} v^2 = \frac{m}{2} \dot{\mathbf{X}}\cdot\dot{\mathbf{X}}.$
The result is the work-energy principle for particle dynamics,
$W = \Delta K. \!$
This derivation can be generalized to arbitrary rigid body systems.
### Moving in a straight line (skid to a stop)
Consider the case of a vehicle moving along a straight horizontal trajectory under the action of a driving force and gravity that sum to F. The constraint forces between the vehicle and the road define R, and we have
$\mathbf{F} + \mathbf{R} =m\ddot{\mathbf{X}}.$
For convenience let the trajectory be along the X-axis, so X = (d, 0) and the velocity is V = (v, 0); then R · V = 0, and F · V = $F_x v$, where $F_x$ is the component of F along the X-axis, so
$F_x v = m\dot{v}v.$
Integration of both sides yields
$\int_{t_1}^{t_2}F_x v dt = \frac{m}{2}v^2(t_2) - \frac{m}{2}v^2(t_1).$
If $F_x$ is constant along the trajectory, then the integral of velocity is distance, so
$F_x (d(t_2)-d(t_1)) = \frac{m}{2}v^2(t_2) - \frac{m}{2}v^2(t_1).$
As an example consider a car skidding to a stop, where k is the coefficient of friction and W is the weight of the car. Then the force along the trajectory is $F_x = -kW$. The velocity v of the car can be determined from the length s of the skid using the work-energy principle,
$kWs = \frac{W}{2g} v^2,\quad\mbox{or}\quad v = \sqrt{2ksg}.$
Notice that this formula uses the fact that the mass of the vehicle is m=W/g.
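For a concrete feel for the skid formula, here are some plugged-in numbers; the friction coefficient and skid length below are made-up illustration values, not taken from the article.

```python
# Plugging numbers into v = sqrt(2*k*s*g).
import math

g = 32.2      # ft/s^2, matching the article's use of feet
k = 0.7       # assumed coefficient of friction
s = 120.0     # assumed skid length, feet

v = math.sqrt(2 * k * s * g)        # speed at the start of the skid
print(v, v * 3600 / 5280)           # ~73.6 ft/s, ~50 mph
```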
Lotus type 119B gravity racer at Lotus 60th celebration.
Gravity racing championship in Campos Novos, Santa Catarina, Brazil, 8 September 2010.
### Coasting down a mountain road (gravity racing)
Consider the case of a vehicle that starts at rest and coasts down a mountain road, the work-energy principle helps compute the minimum distance that the vehicle travels to reach a velocity V, of say 60 mph (88 fps). Rolling resistance and air drag will slow the vehicle down so the actual distance will be farther than if these forces are neglected.
Let the trajectory of the vehicle following the road be X(t) which is a curve in three-dimensional space. The force acting on the vehicle that pushes it down the road is the constant force of gravity F=(0,0,W), while the force of the road on the vehicle is the constraint force R. Newton's second law yields,
$\mathbf{F} + \mathbf{R} =m\ddot{\mathbf{X}}.$
The scalar product of this equation with the velocity, $\mathbf{V} = (v_x, v_y, v_z)$, yields
$W v_z = m\dot{V}V,$
where V is the magnitude of V. The constraint forces between the vehicle and the road cancel from this equation because R.V=0, which means they do no work. Integrate both sides to obtain
$\int_{t_1}^{t_2}W v_z dt = \frac{m}{2}V^2(t_2) - \frac{m}{2}V^2(t_1).$
The weight force W is constant along the trajectory and the integral of the vertical velocity is the vertical distance, therefore,
$W \Delta z = \frac{m}{2}V^2.$
Recall that V(t1)=0. Notice that this result does not depend on the shape of the road followed by the vehicle.
In order to determine the distance along the road assume the downgrade is 6%, which is a steep road. This means the altitude decreases 6 feet for every 100 feet traveled—for angles this small the sin and tan functions are approximately equal. Therefore, the distance s in feet down a 6% grade to reach the velocity V is at least
$s=\frac{\Delta z}{0.06}= 8.3\frac{V^2}{g},\quad\mbox{or}\quad s=8.3\frac{88^2}{32.2}\approx 2000\mbox{ft}.$
This formula uses the fact that the weight of the vehicle is W=mg.
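Re-deriving the quoted numbers with the same inputs as the article (6% grade, target speed 88 ft/s, g = 32.2 ft/s²):

```python
# Vertical drop and road distance needed to reach 88 ft/s on a 6% grade,
# from W*dz = (m/2)*V^2 with W = m*g (all lengths in feet).
g, V, grade = 32.2, 88.0, 0.06

dz = V**2 / (2 * g)      # required vertical drop, ~120 ft
s = dz / grade           # distance along the road, ~2000 ft
print(dz, s)
```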
## Work of forces acting on a rigid body
The work of forces acting at various points on a single rigid body can be calculated from the work of a resultant force and torque. To see this, let the forces $\mathbf{F}_1, \mathbf{F}_2, \ldots, \mathbf{F}_n$ act on the points $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_n$ in a rigid body.
The trajectories of $\mathbf{X}_i$, $i=1,\ldots,n$ are defined by the movement of the rigid body. This movement is given by the set of rotations $[A(t)]$ and the trajectory $\mathbf{d}(t)$ of a reference point in the body. Let the coordinates $\mathbf{x}_i$, $i=1,\ldots,n$ define these points in the moving rigid body's reference frame M, so that the trajectories traced in the fixed frame F are given by
$\mathbf{X}_i(t)= [A(t)]\mathbf{x}_i + \mathbf{d}(t)\quad i=1,\ldots, n.$
The velocities of the points $\mathbf{X}_i$ along their trajectories are
$\mathbf{V}_i = \vec{\omega}\times(\mathbf{X}_i-\mathbf{d}) + \dot{\mathbf{d}},$
where ω is the angular velocity vector obtained from the skew symmetric matrix
$[\Omega] = \dot{A}A^T,$
known as the angular velocity matrix.
The small amount of work by the forces over the small displacements $\delta\mathbf{r}_i$ can be determined by approximating the displacement by $\delta\mathbf{r} = \mathbf{v}\delta t$, so
$\delta W = \mathbf{F}_1\cdot\mathbf{V}_1\delta t+\mathbf{F}_2\cdot\mathbf{V}_2\delta t + \ldots + \mathbf{F}_n\cdot\mathbf{V}_n\delta t$
or
$\delta W = \sum_{i=1}^n \mathbf{F}_i\cdot (\vec{\omega}\times(\mathbf{X}_i-\mathbf{d}) + \dot{\mathbf{d}})\delta t.$
This formula can be rewritten to obtain
$\delta W = (\sum_{i=1}^n \mathbf{F}_i)\cdot\dot{\mathbf{d}}\delta t + (\sum_{i=1}^n (\mathbf{X}_i-\mathbf{d})\times\mathbf{F}_i) \cdot \vec{\omega}\delta t = (\mathbf{F}\cdot\dot{\mathbf{d}}+ \mathbf{T}\cdot \vec{\omega})\delta t,$
where F and T are the resultant force and torque applied at the reference point d of the moving frame M in the rigid body.
## References
1. Jammer, Max (1957). Concepts of Force. Dover Publications, Inc. p. 167; footnote 14. ISBN 0-486-40689-X.
2. ^ a b Resnick, Robert and Halliday, David (1966), Physics, Section 1-3 (Vol I and II, Combined edition), Wiley International Edition, Library of Congress Catalog Card No. 66-11527
3. ^ a b Hugh D. Young and Roger A. Freedman (2008). University Physics (12th ed.). Addison-Wesley. p. 329. ISBN 978-0-321-50130-1.
4. Andrew Pytel, Jaan Kiusalaas (2010). Engineering Mechanics: Dynamics - SI Version, Volume 2 (3rd ed.). Cengage Learning. p. 654. ISBN 9780495295631.
## Bibliography
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd ed., extended version). W. H. Freeman. ISBN 0-87901-432-6.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 67, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9098293781280518, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/264440/what-are-the-a-b-c-parameters-of-this-ellipse-formula?answertab=oldest
|
# What are the A,B,C parameters of this ellipse formula?
I am looking at
$$A(x − h)^2 + B(x − h)(y − k) + C(y − k)^2 = 1$$
This is a rotated-ellipse formula, where $h,k$ give the centroid of the ellipse. I have tried looking around for the $A,B,C$ parameters, and I see that they come from the quadratic form. But to be frank, I want to see how $A,B,C$ impact the orientation of the ellipse, and I am looking for a more visual explanation for this question.
-
2
In the future, you may want to hold off on accepting an answer for a while, just so you don't need to keep changing your accepted answer as new answers are posted. :) – Rahul Narain Dec 24 '12 at 7:32
## 3 Answers
I interpreted the question as asking for intuition about the parameters directly. So here is an alternative explanation. Of course, for doing anything useful with the ellipse, the eigenvalue/eigenvector perspective in Robert's answer is usually the most valuable, as it is coordinate-independent and more mathematically natural.
The ellipse is centered at the point $(h,k)$. It passes through the four points $(h\pm a,k)$ and $(h,k\pm c)$, where $a = 1/\sqrt A$ and $c = 1/\sqrt C$. That is, $A$ and $C$ tell you where the ellipse meets the horizontal and vertical lines through its center.
It's hard to say anything quantitative about the effect of $B$ without involving eigenvalues and eigenvectors, as in Robert's answer. Qualitatively, it controls how the ellipse deviates from being axis-aligned.
Above are three ellipses, each with $A=1/4$ and $C=1$, and with varying values of $B$. As you can see, all of them have the same span along the horizontal axis, $a = 1/\sqrt{1/4} = 2$, and similarly in the vertical, $c = 1/\sqrt 1 = 1$. When $B = 0$ (blue), the ellipse is axis-aligned. When it is positive ($B = 1/2$, maroon), the ellipse is oriented like a "\"; when it is negative ($B = -1/2$, yellow), it is oriented like a "/".
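A quick numerical check of the intersection claim above, using the same values as the figure ($A=1/4$, $C=1$, three values of $B$); the snippet is my own, and it simply confirms that the four axis points lie on every one of the three ellipses, since the cross term vanishes when $x=0$ or $y=0$.

```python
# With A = 1/4 and C = 1, the points (+/-2, 0) and (0, +/-1) satisfy
# A x^2 + B x y + C y^2 = 1 for every B.
A, C = 0.25, 1.0
for B in (0.0, 0.5, -0.5):
    for (x, y) in ((2, 0), (-2, 0), (0, 1), (0, -1)):
        assert abs(A * x**2 + B * x * y + C * y**2 - 1.0) < 1e-12
print("all four axis points lie on each of the three ellipses")
```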
-
For the blue ellipse ($B = 0$), why is its horizontal axis shorter than the other two ellipses? Is this correct? – Karl Dec 24 '12 at 7:39
Yes, the value of $a = 1/\sqrt A$ is not the half-width of the bounding box enclosing the ellipse, it is the point where the ellipse meets the horizontal axis. Observe that all three ellipses meet the horizontal axis $y=0$ at the same point. – Rahul Narain Dec 24 '12 at 7:42
I've edited the image; maybe this makes it clearer. – Rahul Narain Dec 24 '12 at 7:45
+1 (to Robert also). With a little bit of imagination one can `see' here that when $B\to-1+$ or $B\to1-$ (keeping $A,C$ fixed) the ellipse becomes more and more elongated and in a sense approach the union of two parallel lines. – Jyrki Lahtonen Dec 24 '12 at 10:37
The axes of the ellipse are in the directions of the eigenvectors of the $2 \times 2$ symmetric matrix $$M = \pmatrix{A & B/2\cr B/2 & C\cr}$$ which corresponds to the quadratic form: $$A x^2 + B x y + C y^2 = (x\ y) M \pmatrix{x\cr y\cr}$$ The semi-major and semi-minor axes are then the $-1/2$ power of the eigenvalues.
Thus suppose you want an ellipse with centroid at the origin (any other centroid, of course, can be obtained by translation), semi-axes $a$ and $b$ with the major axis at angle $\theta$ counterclockwise from the positive $x$ axis. The eigenvectors can be taken to be $\pmatrix{\cos \theta\cr \sin \theta\cr}$ and $\pmatrix{-\sin \theta\cr \cos \theta\cr}$ for eigenvalues $1/a^2$ and $1/b^2$ respectively, and then $$M = \pmatrix{\cos \theta & -\sin \theta\cr \sin \theta & \cos \theta\cr} \pmatrix{1/a^2 & 0\cr 0 & 1/b^2\cr} \pmatrix{\cos \theta & \sin \theta\cr -\sin \theta & \cos \theta\cr}$$ We get $A = \dfrac{\cos^2 \theta}{a^2} + \dfrac{\sin^2 \theta}{b^2}$, $B = \left(\dfrac{1}{a^2} - \dfrac{1}{b^2}\right) \sin(2\theta)$, $C = \dfrac{\sin^2 \theta}{a^2} + \dfrac{\cos^2 \theta}{b^2}$.
-
I would like to explain the transforms on a picture.
$$\frac{x'^2}{a^2}+\frac{y'^2}{b^2}=1$$
$$x=x'.\cos \alpha - y'. \sin \alpha$$
$$y=x'.\sin \alpha + y'. \cos \alpha$$
We can get easily the result above from the picture
$$\cos \alpha. x=\cos \alpha (x'.\cos \alpha - y'. \sin \alpha)$$
$$\sin \alpha. y=\sin \alpha (x'.\sin \alpha + y'. \cos \alpha)$$
$$x'=x.\cos \alpha + y. \sin \alpha$$
$$-\sin \alpha. x=\cos \alpha (x'.\cos \alpha - y'. \sin \alpha)$$
$$\cos \alpha. y=\sin \alpha (x'.\sin \alpha + y'. \cos \alpha)$$
$$y'=-x.\sin \alpha + y. \cos \alpha$$
$$\frac{(x.\cos \alpha + y. \sin \alpha)^2}{a^2}+\frac{(-x.\sin \alpha + y. \cos \alpha)^2}{b^2}=1$$
$$(\frac{\cos^2 \alpha }{a^2}+\frac{\sin^2 \alpha }{b^2})x^2+2(\frac{\cos \alpha \sin \alpha }{a^2}-\frac{\cos \alpha \sin \alpha }{b^2})xy+(\frac{\sin^2 \alpha }{a^2}+\frac{\cos^2 \alpha }{b^2})y^2=1$$
$$(\frac{\cos^2 \alpha }{a^2}+\frac{\sin^2 \alpha }{b^2})x^2+ \sin 2\alpha (\frac{1 }{a^2}-\frac{1}{b^2})xy+(\frac{\sin^2 \alpha }{a^2}+\frac{\cos^2 \alpha }{b^2})y^2=1$$
$$Ax^2+ Bxy+Cy^2=1$$
$x=X-h$
$y=Y-k$
$$A(X-h)^2+ B(X-h)(Y-k)+C(Y-k)^2=1$$
$$A=\frac{\cos^2 \alpha }{a^2}+\frac{\sin^2 \alpha }{b^2}$$
$$B=\sin 2\alpha (\frac{1 }{a^2}-\frac{1}{b^2})$$
$$C=\frac{\sin^2 \alpha }{a^2}+\frac{\cos^2 \alpha }{b^2}$$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 21, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9097386002540588, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/81415?sort=oldest
|
## What is growth of ass. algebra with 3 generators and relation a1a2a3 + a2a3a1 +a3a1a2 - a1a3a2 - a2a1a3 -a3a2a1 ?
Consider the associative algebra with 3 generators a1, a2, a3 and the relation: a1a2a3 + a2a3a1 + a3a1a2 - a1a3a2 - a2a1a3 - a3a2a1 = 0.
i.e. $$\sum_{ s \in S_3} (-1)^{sgn (s)} a_{s(1)} a_{s(2)} a_{s(3 )} = 0.$$
Informal questions: how far is this algebra from the commutative polynomial algebra k[a1, a2, a3]? What is known about this algebra?
Formal question: what is the Hilbert series of this algebra?
====
Some reformulations of the defining condition:
Consider 3 Grassmann variables $\psi_i$, i = 1,2,3. Define $$\psi = a_1 \psi_1 + a_2 \psi_2 + a_3 \psi_3$$
The condition above is equivalent to $$\psi ^3 =0$$.
For commutative algebras we clearly have $\psi ^2=0$. So this algebra extends the commutative ones in a certain sense, and I wonder how far away it is from the commutative one.
Yet another way to reformulate the defining relation is the following: denote by $M$ the 3×3 matrix
$$M = \begin{pmatrix} a_1 & a_1 & a_1 \\ a_2 & a_2 & a_2 \\ a_3 & a_3 & a_3 \end{pmatrix}$$
The condition above is the same as $det^{column} M =0$ where column-determinant is used i.e. first take elements from the first column, second from the second and so forth.
======
Some example.
Consider $E_{ij}$ – the "elementary matrices", i.e. the $n\times n$ matrices with zeros everywhere except position $(i,j)$, where we put 1.
Take for example $E_{11} , E_{21} , E_{31}$,
Observation: they satisfy the above relation.
More generally one can take $E_{i p} , E_{j p} , E_{k p}$ (important the second index is the same).
This means that the algebra above admits a homomorphism to the universal enveloping algebra of $E_{11} , E_{21} , E_{31}$. Universal enveloping algebras are very close to commutative ones (at least their size is the same). So it suggests that in general such an algebra is close to commutative, but probably this is wrong...
======
It seems Roland Berger discusses similar alegbras at section 3 of
http://arxiv.org/abs/0801.3383
as far as I can understand he proves that such algebras are N-Koszul (i.e. a generalization of Koszul duality to non-quadratic algebras), but I cannot get far in his theory.
-
## 3 Answers
Put a term order on your (noncommutative) monomials such that $a_i a_j > a_j a_i$ for $i \lt j$. So the leading term of your equation is $a_1 a_2 a_3$. A basis for your ring is (noncommutative) monomials not divisible by $a_1 a_2 a_3$. In other words, a basis for the degree $n$ part of your algebra is length $n$ sequences of $1$'s, $2$'s and $3$'s which don't contain the sequence $123$. The rest of this post is the combinatorial task of counting the number of such sequences.
Let $A_n$ be the number of such sequences ending in $1$. Let $B_n$ be the number of such sequences ending in $12$. Let $C_n$ be the number of such sequences not in the other two classes (including the empty sequence). Then,
$$A_n = A_{n-1}+ B_{n-1} + C_{n-1}$$ $$B_n = A_{n-1}$$ $$C_n = A_{n-1} + B_{n-1} + 2 C_{n-1} + [n=0]$$
Where $[n=0]$ is $1$ if $n=0$ and is $0$ otherwise. So
$$\begin{pmatrix} A_n \\ B_n \\ C_n \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 2 \end{pmatrix}^n \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$
The total number of terms of degree $n$ is $A_n+B_n+C_n$, so
$$\begin{pmatrix} 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 2 \end{pmatrix}^n \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$
The spectral radius of this matrix is $\approx 2.87939$, so your Hilbert series grow exponentially. That is very different from a commutative ring, whose Hilbert series will grow polynomially.
With a little hacking around with Mathematica, I get that the Hilbert series is $\frac{1}{1-3x+x^3} = 1+3x+9x^2+26x^3+75 x^4 + 216 x^5 + 622 x^6 + \cdots$. Does that match your data?
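A quick check of these numbers (illustration only, my own code): count the words over {1, 2, 3} with no "123" factor by brute force, and compare with the coefficients of $1/(1-3x+x^3)$ generated from the recursion $c_n = 3c_{n-1} - c_{n-3}$.

```python
# Brute-force count of length-n words over {1,2,3} avoiding the factor "123",
# versus the coefficients of 1/(1 - 3x + x^3).
from itertools import product

def brute(n):
    return sum('123' not in ''.join(w) for w in product('123', repeat=n))

c = [1, 3, 9]                        # c_0, c_1, c_2
for n in range(3, 8):
    c.append(3 * c[-1] - c[-3])      # c_n = 3*c_{n-1} - c_{n-3}

print([brute(n) for n in range(8)])  # [1, 3, 9, 26, 75, 216, 622, 1791]
print(c)                             # the same list
```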
-
1
How do you show that the images of these monomials are linearly independent in this algebra? – mt Nov 20 2011 at 16:02
1
Let $\Delta$ be the defining cubic. Suppose for contradiction that the standard monomials are not linearly independent in degree $n$. So we have $\sum b_i w_i = \sum c_i u_i \Delta v_i$, where $w_i$ are standard monomials, $b_i$ and $c_i$ are nonzero constants and the $u_i$ and $v_i$ are monomials. Continued... – David Speyer Nov 20 2011 at 16:25
2
This is the "standard" noncommutative Groebner basis argument. As always, I am terrible for references. If you are good at references, please post one! – David Speyer Nov 20 2011 at 16:27
2
Say $u_1=123$ (I'll drop the $a$s). Then if $v_1=123123$ we have in $u_1\Delta v_1$ the monomial $u_1 123 v_1 = 123\cdot 123 \cdot 123123$, and you say this can't appear elsewhere. But if $u_2=123123$, bigger than $u_1$ in the lex order, and $v_2=123$, then a term in $u_2\Delta v_2$ is $123123 \cdot 123 \cdot 123$. Or am I misunderstanding? – mt Nov 20 2011 at 16:35
2
Certainly it can. Take each of those $123$'s and replace them by $213+132-231-312+321$, then expand out to get $25$ terms. – David Speyer Nov 20 2011 at 16:54
Your algebra maps onto the free associative algebra of rank 2 (just kill $a_3$), so its growth is exponential.
-
(edited a bit to cover a few questions about the notation)
A slight simplification of David Speyer's argument: his argument using Groebner bases explains that if we degenerate the relation into $a_1a_2a_3=0$, the resulting algebra has the same Hilbert series. Now, the latter algebra $B$ has a very economic resolution of the trivial module by free right modules: $$0\to span_k(a_1a_2a_3)\otimes_k B\to span_k(a_1,a_2,a_3)\otimes_kB\to B\to k\to 0$$ (the leftmost differential maps $a_1a_2a_3\otimes 1$ to $a_1\otimes a_2a_3$, the next one maps $a_i\otimes 1$ to $a_i$). Computing the Euler characteristics, we get $H_B(t)(1-3t+t^3)=1$.
-
@Volodya Thank You ! But I not quite get: span - means what ? Algebra or module ? spanned from the left or from the both sides over B or over all algebra ? Tensor product - as algebras over C or as C vector spaces ? Why this is right modules than ? This probably works for any number of a_i - does it have some name ? where is it used ? – Alexander Chervov Nov 21 2011 at 11:55
1
Spans are over the ground field $k$, as well as tensor products (because of this we can compute Euler characteristics). The right $B$-module on $V\otimes B$ is via the action on $B$ alone: $(v\otimes b)b':=v\otimes(bb')$. Of course it works for any number of $a_i$'s, and the construction I use is a very special case of a construction due to David Anick (ams.org/mathscinet-getitem?mr=846601). – Vladimir Dotsenko Nov 21 2011 at 12:08
1
Oh, and for your last question: it is used in an awful lot of places, from deformation theory to theoretical computer science. Since the publication of Anick's paper in 1986, quite a few people rediscovered his results in various contexts! – Vladimir Dotsenko Nov 21 2011 at 12:39
3
Thanks for the reference! Anick's paper looks very definitive. – David Speyer Nov 21 2011 at 14:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073070883750916, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/62495/which-finite-nonabelian-groups-have-long-chains-of-subgroups-as-intervals-in-thei/62546
|
## Which finite nonabelian groups have long chains of subgroups as intervals in their subgroup lattice?
Given N, what is a finite non-abelian (and preferably non-solvable) group G in whose subgroup lattice Sub[G] there is an interval that is a chain of length at least N?
Since N can be arbitrarily large (but fixed), perhaps there is no easy answer. In that case, can someone suggest which sorts of groups to look at to find intervals that are chains (say, on the order of 10 subgroups in the chain)?
Thanks in advance!
Edit: Thanks to Carnahan's answer, I see that I should have ruled out direct products of cyclic groups with nonsolvable groups. What I'm interested in are intervals in the subgroup lattice of the form:
$\{ K : H \leq K \leq G \}$
where $H$ is a corefree subgroup of $G$.
-
Have you already considered Heisenberg groups over a finite field? Or similarly the upper triangular matrices in the $N \times N$ matrices over some finite unital ring. Depending on $N$ you get many subgroups. Of course they are solvable, but if you consider them sitting inside $GL_N$? – Marc Palm Apr 21 2011 at 5:48
I have not looked at those yet, but I will do so now. In particular, I'll check whether the chains you get in those groups are actually intervals. Thanks. – William DeMeo Apr 21 2011 at 6:16
## 6 Answers
You can construct groups with chains of arbitrary length with the help of wreath products. Since you want to assume that the subgroup $H$ is corefree, you get a faithful action of $G$ on the cosets of $H$, so one may identify $G$ with a permutation group. Now assume that $G$ is a transitive permutation group on the set $\Gamma$ and $X$ is a transitive permutation group on the set $\Omega$. The wreath product of $G$ with $X$, that is the semi-direct product $W=X \ltimes G^{\Omega}$, acts on $\Omega\times \Gamma$ by $$(\omega, \gamma) (x, f) = (\omega x, \gamma ((\omega x) f ) ) \quad \text{(where $f\colon \Omega\to G$ and thus $(\omega x) f\in G$.)}$$ The stabilizer of $(\omega, \gamma)$ is then $$W_{(\omega, \gamma)} = X_{\omega} \ltimes (G_{\gamma}\times G \times \dotsm \times G)$$ (where the component $G_{\gamma}$ occurs, strictly speaking, at position $\omega$). Now it is not difficult to see, that if a subgroup $K$ with $W_{(\omega, \gamma)}\leq K$ contains an element $(x, f)$ with $x\notin X_{\omega}$, then $K$ contains $G\times G \times \dotsm \times G$. So either $K$ has the form $Y \ltimes (G^{\Omega})$ with $X_{\omega} < Y \leq X$, or it has the form $X_{\omega} \ltimes (I \times G \times \dotsm \times G)$ with $G_{\gamma}\leq I \leq G$.
So the interval $[W_{(\omega, \gamma)}, W]$ is lattice isomorphic to the lattice obtained by putting $[X_{\omega}, X]$ on top of $[G_{\gamma}, G]$. If the latter are chains, then you get a chain, where the lengths add. Starting with a primitive permutation group (non-solvable, if you wish) and repeating this construction, you get arbitrarily large chains. Even if you are interested in non-solvable groups, I mention that the Sylow $p$-subgroup of $S_{p^n}$ is a special case of this construction.
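A small worked instance of this stacking (my example, added for illustration): take $X=G=S_2$ acting on two points, so $W=S_2\wr S_2\cong D_8$ acting on $\{1,2,3,4\}$ with blocks $\{1,2\}$ and $\{3,4\}$. The stabilizer of the point $1$ is $\langle(3\,4)\rangle$, and the subgroups of $W$ containing it are exactly
$$\langle(3\,4)\rangle \;<\; \langle(1\,2),(3\,4)\rangle \;<\; W,$$
a chain of length $2=1+1$, i.e. the chain $[X_\omega,X]$ of length $1$ stacked on top of $[G_\gamma,G]$ of length $1$, as described above.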
-
This is interesting. Thank you for suggesting this nice, general construction. – William DeMeo Apr 21 2011 at 23:33
I've decided to accept this answer because, while Dr. Holt's answer is also great, your answer came first, and the construction you describe seems like it might answer another question I've been thinking about, which I'll describe in a later comment, once I've verified that it works. Thanks! – William DeMeo May 5 2011 at 9:53
I think you can get arbitrarily long chains of this type in the simple groups ${\rm PSL}(2,p)$ for $p$ prime.
We make use of the maximal dihedral subgroups of order $p-1$. For given $N$, choose $p$ such that $(p-1)/2 = q^Nr$ with $q$ an odd prime and $r>2$. Then there is a chain of subgroups
$H_0 < H_1 < \cdots < H_N < G$
where $H_i$ is dihedral of order $2q^ir$.
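A hedged numerical instance (mine, not from the answer): for $N=2$ take $p=127$, so that $(p-1)/2 = 63 = 3^2\cdot 7$, i.e. $q=3$ and $r=7>2$. Then in $G={\rm PSL}(2,127)$ one gets
$$H_0 < H_1 < H_2 < G,$$
with $H_i$ dihedral of order $2\cdot 3^i\cdot 7$, that is of orders $14$, $42$ and $126=p-1$, the last being one of the maximal dihedral subgroups used above.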
-
Thanks, that's really neat. The "r" makes sure anything containing H0 lands in HN. Why does q have to be odd? If we require H0 to be order not dividing 120 (so make r larger and avoid the A4,A5,S4 subgroups), can we take q=2 anyways? – Jack Schmidt Apr 21 2011 at 19:55
Yes you are right, $q$ need not be odd. – Derek Holt Apr 21 2011 at 21:47
Wow, that's pretty cool. I wouldn't have noticed these long chains without your nice use of maximal subgroups. Probably these aren't examples that are easily verified in GAP, but I'll take your word for it for now, and then convince myself that it works. Thank you!! ..and thank you to everyone else who gave helpful suggestions. – William DeMeo Apr 21 2011 at 23:39
I think in the subgroup lattice of the non-solvable group $A_5 \times \mathbb{Z}/2^N\mathbb{Z}$, the interval between `$\{1\} \times \{1\}$` and `$\{ 1 \} \times \mathbb{Z}/2^N\mathbb{Z}$` is a chain of length $N$.
Edit: For any $N$, you can also find primes $p$ and $q$ such that $q \equiv \pm 1$ mod $p^N$, so there are cyclic groups of order $p^N$ inside any group of Lie type over `$\mathbb{F}_q$` (where you use $+1$ in the split case and $-1$ in the non-split case). These will yield intervals that are chains of length $N$. Similarly, the alternating group `$A_{p^N}$` contains a cyclic group of order $p^N$.
-
Ha! Of course, take any cyclic group and take the direct product with any nonsolvable group! Ok, you got me there. When I ruled out cyclic groups, I should have ruled out this construction too. I will revise my question. To be more specific, for my application I need intervals above subgroups that are core-free. (So you can't mod out the non-solvable part and end up with a cyclic group.) ...but thanks for your answer, which will help me improve my question. – William DeMeo Apr 21 2011 at 8:58
The interval [A,B] in a subgroup lattice is the lattice formed from the subgroups C with A ≤ C ≤ B. An interval is a chain if it is totally ordered by inclusion. When discussing intervals in a subgroup lattice, it is often a good idea to assume we have chosen the group with this lattice minimally. In particular, we can assume G=B so that the interval goes all the way to the top, and we can assume A is core-free so that we cannot quotient out by any normal subgroup contained in every subgroup in the interval.
Intervals that are chains are ubiquitous if we do not require the chain to be very long: if M is any maximal subgroup of G, then the interval [M,G] is a chain of length 1.
Intervals that are very long chains are also easy to find: if G is a cyclic group of prime power order $p^n$, then the interval [1,G] is a chain of length n. However, these G are the only examples where the interval [1,G] is a chain.
Thus the question arises if there are long chains [A,G] where 1≠A is core-free.
An example for arbitrary n is the dihedral group of order $2^{n+1}$ with A any non-central subgroup of order 2. The interval [A,G] consists of the dihedral groups of order $2^k$ for $1 \leq k \leq n+1$, and so has length n.
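For instance (a small worked case, added for illustration), take n = 3: if G is dihedral of order 16 with rotation $r$ of order 8 and reflection $s$, and $A=\langle s\rangle$ (which is core-free), then the only subgroups containing $s$ are
$$\langle s\rangle \;<\; \langle r^4,s\rangle \;<\; \langle r^2,s\rangle \;<\; G,$$
dihedral of orders $2,4,8,16$, so the interval [A,G] is a chain of length 3.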
-
@Jack: He wants a non-solvable group. – Mark Sapir Apr 21 2011 at 14:10
Thank you! These are perfectly good solvable examples. I verified in GAP for a few k, if G:=DihedralGroup(2^k), and if H is a representative of the 3rd (GAP's labelling) conjugacy class of subgroups, then [H,G] is a chain of length k. Nice! – William DeMeo Apr 21 2011 at 23:16
I am not completely sure what an interval means in this case. But I think that an interesting example to look at would be $p$-groups of maximal class.
-
Thanks, I will look at those. (The interval between two subgroups $A$ and $B$ in the subgroup lattice of a group simply means all subgroups $C$ such that $A \leq C \leq B$.) – William DeMeo Apr 21 2011 at 9:10
It might be worthwhile to check out the paper below and its predecessors (and one successor by Alladi and Turull): Solomon, Ron; Turull, Alexandre Chains of subgroups in groups of Lie type. III. J. London Math. Soc. (2) 44 (1991), no. 3, 437–444.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356335401535034, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/9134/arbitrary-products-of-schemes-dont-exist-do-they
|
## Arbitrary products of schemes don’t exist, do they?
Thinking of arbitrary tensor products of rings, $A=\otimes_i A_i$ ($i\in I$, an arbitrary index set), I have recently realized that $Spec(A)$ should be the product of the schemes $Spec(A_i)$, a priori in the category of affine schemes, but actually in the category of schemes, thanks to the string of equalities (where $X$ is a not necessarily affine scheme)
$$Hom_{Schemes} (X, Spec(A))= Hom_{Rings}(A,\Gamma(X,\mathcal O))=\prod_ {i\in I}Hom_{Rings}(A_i,\Gamma(X,\mathcal O))$$
$$=\prod_ {i\in I}Hom_{Schemes}(X,Spec(A_i))$$
Since this looks a little too easy, I was not quite convinced it was correct but a very reliable colleague of mine reassured me by explaining that the correct categorical interpretation of the more down to earth formula above is that the category of affine schemes is a reflective subcategory of the category of schemes. (Naturally the incredibly category-savvy readers here know that perfectly well, but I didn't at all.)
And now I am stumped: I had always assumed that infinite products of schemes don't exist and I realize I have no idea why I thought so!
Since I am neither a psychologist nor a sociologist, arguments like "it would be mentioned in EGA if they always existed " don't particularly appeal to me and I would be very grateful if some reader could explain to me what is known about these infinite products.
-
Arbitrary coproducts in CRing do exist. See Categories for the Working Mathematician, section IX.1. – Tom Leinster Dec 17 2009 at 19:07
## 7 Answers
Let me rephrase the question (and Ilya's answer). Given an arbitrary collection $X_i$ of schemes, is the functor (on affine schemes, say)
$Y \mapsto \prod_i Hom(Y, X_i)$
representable by a scheme? If the $X_i$ are all affine, the answer is yes, as explained in the statement of the question. More generally, any filtered inverse system of schemes with essentially affine transition maps has an inverse limit in the category of schemes (this is in EGA IV.8). The topology in that case is the inverse limit topology, by the way.
It is easy to come up with examples of infinite products of non-separated schemes that are not representable by schemes. This is because any scheme has a locally closed diagonal. In other words, if $Y \rightrightarrows Z$ is a pair of maps of schemes then the locus in $Y$ where the two maps coincide is locally closed in $Y$.
Suppose $Z$ is the affine line with a doubled origin. Every distinguished open subset of an affine scheme $Y$ occurs as the locus where two maps $Y \rightrightarrows Z$ agree. Let $X = \prod_{i = 1}^\infty Z$. Every countable intersection of distinguished open subsets of $Y$ occurs as the locus where two maps $Y \rightarrow X$ agree. Not every countable intersection of open subsets is locally closed, however, so $X$ cannot be a scheme.
Since the diagonal of an infinite product of separated schemes is closed, a more interesting question is whether an infinite product of separated schemes can be representable by a scheme. Ilya's example demonstrates that the answer is no.
Let $Z = \mathbf{A}^2 - 0$. This represents the functor that sends $Spec A$ to the set of pairs $(x,y) \in A^2$ generating the unit ideal. The infinite product $X = \prod_{i = 1}^\infty Z$ represents the functor sending $A$ to the set of infinite collections of pairs $(x_i, y_i)$ generating the unit ideal. Let $B$ be the ring $\mathbf{Z}[x_i, y_i, a_i, b_i]_{i = 1}^\infty / (a_i x_i + b_i y_i = 1)$. There is an obvious map $Spec B \rightarrow X$. Any (nonempty) open subfunctor $U$ of $X$ determines an open subfunctor of $Spec B$, and this must contain a distinguished open subset defined by the invertibility of some $f \in B$. Since $f$ can involve at most finitely many of the variables, the open subset determined by $f$ must contain the pre-image of some open subset $U'$ in $\prod_{i \in I} Z$ for some finite set $I$. Let $I'$ be the complement of $I$. If we choose a closed point $t$ of $U'$ then $U$ contains the pre-image of $t$ as a closed subfunctor. Since the pre-image of $t$ is $\prod_{i \in I'} Z \cong X$ this shows that any open subfunctor of $X$ contains $X$ as a closed subfunctor.
In particular, if $X$ is a scheme, any non-empty open affine contains a scheme isomorphic to $X$ as a closed subscheme. A closed subscheme of an affine scheme is affine, so if $X$ is a scheme it is affine.
Now we just have to show $X$ is not an affine scheme. It is a subfunctor of $W = \prod_{i = 1}^\infty \mathbf{A}^2$, so if $X$ is an affine scheme, it is locally closed in $W$. Since $X$ is not contained in any closed subset of $W$ except $W$ itself, this means that $X$ is open in $W$. But then $X$ can be defined in $W$ using only finitely many of the variables, which is impossible.
Edit: Laurent Moret-Bailly pointed out in the comments below that my argument above for this last point doesn't make sense. Here is a revision: Suppose to the contrary that $X$ is an affine scheme. Then the morphism $p : X \rightarrow X$ that projects off a single factor is an affine morphism. If we restrict this map to a closed fiber then we recover the projection from $Z$ to a point, which is certainly not affine. Therefore $X$ could not have been affine in the first place.
-
Nice answer! I don't follow the last paragraph. You show any open subset $U$ of $X$ must contain a product of the form $\prod_{i \in I} Z$. At this point, you are basically done. But I don't understand the particular way you finish the proof. Why do you say "if $X$ is a scheme then it is an affine scheme."? – David Speyer Dec 17 2009 at 12:10
I added some words to clarify this. The point is that $X$ has a nonempty open affine containing a closed subscheme isomorphic to $X$. – Jonathan Wise Dec 17 2009 at 16:22
Thanks, that makes sense. – David Speyer Dec 17 2009 at 16:37
Thank you for this masterful answer: you seem to be completely at home with EGA ! I never hoped to get such a prompt, clear and complete answer when I posted my question. I wish you all the success you obviously deserve. – Georges Elencwajg Dec 17 2009 at 22:50
Dear Jonathan, may I exploit your impressive expertise again? Are there classes of non affine schemes whose infinite products exist as schemes? Or does a modification of your proof show that infinite products of non-affine schemes never are schemes? Is, for example, a denumerable product of projective lines a scheme ? Very friendly, Georges. – Georges Elencwajg Dec 19 2009 at 10:30
Here's a guess at what goes wrong for general schemes. For simplicity, let X be a non-affine scheme; say it's the union of two affines: $A_1$ and $A_2$ (although I'm actually thinking of $\mathbb A^2 \backslash \{0\}$), and let's try to define the product $Y=\prod_{i=1}^\infty X$.
Well, we should be able to describe Y as a union of affines (that are glued along some maps). What should these be? There are two "obvious answers". If we carry over our intuition from topology, the natural building blocks should have the form $U_1 \times U_2 \times \ldots$ where each $U_i$ is one of $A_1, A_2$, or $X$ and all but finitely many $U_i$-s are equal to $X$. However, these products are not affine (they aren't really defined as schemes, but since $X$ is not affine, they are even "intuitively" not affine).
The second "obvious answer" would be to take products $U_1 \times U_2 \times \ldots$ where each $U_i$ is either $A_1$ or $A_2$. These would be affine, but this feels like a wrong answer: it would be like using the box topology on an infinite product. They shouldn't even be open in Y (I know, this is rather far-fetched since Y is not yet defined). Also, if you tried to glue Y out of these, I doubt you'd be able to define gluing maps (they are maps from an infinite product to an infinite product - I feel this is bad, but can categorically minded people confirm?).
So far, I don't have an actual proof that the second answer is bad, or that you couldn't define Y with some other affines, but I think there should be a good reason (the same reason as to why we use the product topology for topological spaces, though it eludes me at the moment).
-
Why would we want it to be a topological product at all? The topology on $\mathbb{A}^2$ is not the product topology $\mathbb{A}^1\times\mathbb{A}^1$. The thing you claim should be a basis for the topology of $Y$ just isn't, even in the finite product case. – Charles Siegel Dec 17 2009 at 2:05
@Charles: I feel that my answer was badly written, so I rewrote it. I think my point is much clearer now (although it's still far from a proof). Can you look at the new version and say whether you still disagree? – Ilya Grigoriev Dec 17 2009 at 4:40
I don't think this deserves a negative vote. It's a nice example, just incomplete. – Jonathan Wise Dec 17 2009 at 8:38
Here is a proof that the second answer is bad. Let's take $X=\mathbb{P}^1$, with $U_1$ and $U_2$ the standard affines. Consider infinitely many maps from $\mathbb{P}^1$ to $X$, with $f_i(t) = t-i$. Then the universal property of products gives us a map $F:\mathbb{P}^1 \to X \times X \times \cdots$. But $F^{-1}(U_2 \times U_2 \times \cdots)$ is the complement of infinitely many closed points, which is not an open set, contradicting that $F$ is continuous. – David Speyer Dec 17 2009 at 11:59
Also, I agree with Jonathan. This answer helped me understand why my attempted constructions weren't working. – David Speyer Dec 17 2009 at 12:01
Here is another example with a rigorous proof (which is a collaboration with "owk").
Example. Let $R$ be a discrete valuation ring, $I$ an infinite set. Glue two copies of $\text{Spec}(R)$ along the generic point to get an $R$-scheme $X$. Then in the category of $R$-schemes the power $X^I$ does not exist.
Proof: Write $\text{Spec}(R) = \{\eta,\mathfrak{m}\}$, where $\eta$ is the generic point and $\mathfrak{m}$ is the special point. Let $K$ be the quotient field and $k$ the residue field of $R$. Assume $P = X^I$ exists in the category of $R$-schemes.
For an $R$-scheme $T$, a $T$-valued point of $X$ corresponds to an open covering $T = T_1 \cup T_2$ such that $T_1 \cap T_2 = T_{\eta}$. If we apply this to $K$-schemes or $k$-schemes, we see $X \times_R K = \text{Spec}(K)$ and $X \times_R k = \text{Spec}(k) \coprod \text{Spec}(k) = \text{Spec} k[x]/(x^2-x)$. Now the reduction $X(R) \to X(k)$ is bijective: It maps $(\text{Spec}(R),\{\eta\}), (\{\eta\},\text{Spec}(R))$ to $(\text{Spec}(k),\emptyset), (\emptyset,\text{Spec}(k))$. From $P(T)=X(T)^I$ we deduce that also $P(R) \to P(k)$ is bijective.
Since fibers may be described by fiber products and fiber products commute with fiber products by general nonsense, we get as $K$-schemes
$P_{\eta} = (X \times_R K)^I = \text{Spec}(K)^I = \text{Spec}(K)$.
Let us denote the unique point in $P_{\eta}$ also by $\eta$. As $k$-schemes, we get
$P_{\mathfrak{m}} = (X \times_R k)^I = \text{Spec}(k[(x_i)_{i \in I}]/(x_i^2-x_i)_{i \in I})$.
We see that $P_{\mathfrak{m}}$ is homeomorphic to $\{0,1\}^I$, in particular it is not discrete. Remark that $P_{\mathfrak{m}}$ is not open in $P$ since otherwise we would get the contradiction $P(R)=\emptyset$. Also remark that $P_{\mathfrak{m}}$ may be identified with $P(k)$, on which $\text{Aut}(P)$ acts transitively.
Next we want to show that $\eta$ is a generic point of $P$. If not, let $U$ be a nonempty open subset of $P$ with $\eta \notin U$. Then $U \subseteq P_{\mathfrak{m}}$ and it follows that $P_{\mathfrak{m}}$ is the union of the $\sigma(U)$, $\sigma \in \text{Aut}(P)$, and therefore open, contradiction.
Since $P_{\mathfrak{m}}$ is not discrete, there is some nonempty open subset $\text{Spec}(A) \subseteq P$ which contains two points $p_1,p_2 \in P_{\mathfrak{m}}$. They induce $p_1,p_2 \in P(k) \cong P(R)$. Since $R$ is local, $p_1,p_2$ are induced by $p_1,p_2 \in \text{Spec}(A)(R)$. But now $\text{Spec}(A)(R) \subseteq \text{Spec}(A)(K) = P(K)= \{\eta\}$, thus $p_1=p_2$, contradiction. -qed
-
Although I haven't yet studied this interesting example in detail, I already want to thank you, Martin, and "owk" for your interest in this old question. – Georges Elencwajg May 20 2011 at 21:17
Actually we came up with this example in summer 2009. ;) – Martin Brandenburg May 20 2011 at 21:28
If you want a tensor product satisfying the isomorphism described, you can just define it as the inductive limit of all finite tensor products. For example, if you tensor `$k[x_i]$` like this you really obtain $k[x_1,x_2,x_3,\ldots]$. It seems that this is a quite reasonable construction.
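Spelling the construction out a little (my paraphrase): one sets
$$\bigotimes_{i\in I}A_i \;=\; \varinjlim_{F\subseteq I\ \text{finite}}\ \bigotimes_{i\in F}A_i,$$
with transition maps $x\mapsto x\otimes 1$ for $F\subseteq F'$, so every element is a finite sum of elementary tensors in which all but finitely many factors equal $1$. This is exactly the coproduct in the category of commutative rings.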
-
But inductive limits of schemes aren't always schemes. Consider any formal scheme which isn't algebrizable. – David Speyer Dec 17 2009 at 2:29
David: this is an inductive limit of rings, not of schemes. – Jonathan Wise Dec 17 2009 at 8:39
Thanks, you're right. – David Speyer Dec 17 2009 at 12:11
looks ok to me. and in a sense, ega does have this result: in any category, arbitrary limits can be made from fiber products and filtered limits (and the terminal object i guess, but let's forget about that), and in the category of schemes fiber products always exist and filtered limits exist when the transition maps are affine.
-
oh yeah, but this doesn't address the question of arbitrary products of arbitrary (non-affine) schemes. i don't see why those would exist. – whatev Dec 17 2009 at 1:18
Do there exist arbitrary coproducts in the cat. of comm. rings? I am not sure how one constructs these "infinite" tensor products. Could anyone explain?
-
Yes, they do exist. In general, any "algebraic" category such as Group, Ring, Vect has all colimits, and in particular all coproducts. (The precise statement is: every category monadic over Set has all small colimits, assuming that Set satisfies the axiom of choice.) The coproduct of a family (A_i) of rings can be described just like the ordinary (finite) tensor product, but with the restriction that each element must be representable as a finite sum of finite products of elements of A_i's. – Tom Leinster Dec 17 2009 at 19:13
Thanks, now it is more clear – anonymous Dec 18 2009 at 9:47
To add a tiny bit to Tom's description: in general, an arbitrary coproduct is the filtered colimit of the directed system of finite coproducts. So you take the filtered colimit in commutative rings of finite tensor products. This filtered colimit is a reflection or lifting of the ordinary filtered colimit in the category of sets; this is generally true for categories of algebras for a finitary monad on $Set$, or for an algebraic theory given by finitary operations. – Todd Trimble May 20 2011 at 13:18
By "ring" I mean commutative with unity. Consider schemes as ringed spaces (this category is bifibred over the category of topological spaces, and its fibre over $X$ is $Ring\text{-}Shv(X)^{op}$; the category of schemes is a full subcategory). Then arbitrary products exist as ringed spaces, and the underlying topological space of the product is the product of the underlying spaces (this follows from the construction of limits in a fibred category). This product (as a ringed space) is a scheme iff it is locally an affine scheme. Considering the product topology on the base and the fact that a product of affine schemes is an (affine) scheme, it follows that:
if almost all (all but finitely many) of the schemes are affine, then the product of the schemes exists as a scheme.
To try to generalize, we would need to study whether (or when) an infinite product of open sets is open in the category of sober topological spaces (locales).
More generally? I have the following idea, though I am not too sure about it:
In Hakim's "Topos annelés et schémas relatifs" she constructs a scheme $SPEC(R)$ (i.e. a ringed space locally like the "Spec" of a ring) associated to a ringed space $R$, generalizing the construction of $Spec(R)$ from a ring $R$; of course this construction is universal in some sense. Question: does this construction define a categorical reflection (or coreflection)? If yes, we can construct the product of schemes from the product as ringed spaces and then take the "SPEC" of this product.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 163, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453885555267334, "perplexity_flag": "head"}
|
http://dsp.stackexchange.com/questions/tagged/psd
|
# Tagged Questions
The psd tag has no wiki summary.
2answers
173 views
### PSD (Power spectral density) explanation
I'm trying to understand how the PSD is calculated. I've looked in a few of my Communication Engineering textbooks but to no avail. I've also looked online. Wikipedia seems to have the best ...
1answer
79 views
### Guitar String Replucks
I'm analyzing guitar string plucks and sustains. I'm having good success with auto correlation using FFT's. Now I'd like to detect plucks while the string is still vibrating. Since I already am ...
0answers
63 views
### Getting sound pressure level (SPL) for a frequency band from Welch's derived PSD
I'm using Welch's method (pwelch, in octave) to get the PSD of a recorded sound signal (i.e. pressure level vs. time). I then took the PSD data and took the sum of the amplitude in each octave band ...
1answer
146 views
### Working backwards from PSD to possible signal
I have been trying to reconstruct a random signal from its PSD and am running into trouble. I know that many different signals in the time or spatial domains can result in the same PSD-- I am ...
1answer
188 views
### Producing Colored Noise from a given PSD Data
I need to estimate (extract/produce) the colored noise of a given PSD Data. I have the following procedure to get the desired results. First of all I calculate the filter coefficiencts using firls ...
2answers
140 views
### PSD of Windowed signal
Suppose the PSD of a signal $x(n)$ is $PSD_{xx}(w)=1$. Now if a window function $w(n)$ of length $0\le n \le N-1$ is applied to $x(n)$, what will be the new PSD?
1answer
362 views
### What is the difference between PSD and squared magnitude of frequency spectrum?
The power spectrum of a signal can be calculated by taking the magnitude squared of its Fourier transform. Being an audio person, the signal of interest for me would be a time series. How does this ...
1answer
2k views
### Difference between Power spectral density, spectral power and power ratios?
What 'exactly' is power spectral density for discrete signal? I was always under the assumption that taking the Fourier transform of the signal, and then the ratio of desired freq range magnitude over ...
2answers
2k views
### Why so many methods of computing PSD?
Welch's method has been my go-to algorithm for computing power spectral density (PSD) of evenly-sampled timeseries. I noticed that there are many other methods for computing PSD. For example, in ...
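For anyone skimming this tag page, here is a rough added sketch (not taken from any of the questions above) of two of the standard estimators; the functions scipy.signal.periodogram and scipy.signal.welch are real SciPy routines, while the toy signal and parameter choices are arbitrary:

```python
import numpy as np
from scipy import signal

# Toy data: a 50 Hz tone buried in white noise, sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# Raw periodogram: squared magnitude of the DFT, scaled to a density.
# Roughly unbiased, but with high variance in each frequency bin.
f_raw, pxx_raw = signal.periodogram(x, fs=fs)

# Welch's method: average the periodograms of overlapping, windowed
# segments, trading frequency resolution for a lower-variance estimate.
f_welch, pxx_welch = signal.welch(x, fs=fs, nperseg=1024)

# The spectral peak should sit near 50 Hz in both estimates.
print(f_raw[np.argmax(pxx_raw)], f_welch[np.argmax(pxx_welch)])
```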
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460417032241821, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-applied-math/70305-help-mechanical-vibrations.html
|
Thread:
1. Help! Mechanical vibrations.
I was wondering if anyone would be able to start me off on this question;
A weight stretches a spring 3 inches. It is set in motion at a point 4 inches below its equilibrium position with zero velocity.
-Find the max amplitude
-Find the max velocity
-When does it reach its highest point
2. Simple Harmonic Motion
Hello s7b
Originally Posted by s7b
I was wondering if anyone would be able to start me off on this question;
A weight stretches a spring 3 inches. It is set in motion at a point 4 inches below its equilibrium position with zero velocity.
-Find the max amplitude
-Find the max velocity
-When does it reach its highest point
Assuming that Hooke's Law is still obeyed when the spring becomes compressed, the motion is Simple Harmonic, with centre the original equilibrium position, and amplitude 4 inches. That's the answer to the first question.
The tension in the spring is proportional to the extension. When the extension is 3 inches, tension = weight of body = mg. So when extension = 7 inches, tension = $\frac{7mg}{3}$. So, if we resolve vertically at the point of maximum extension:
$mg - \frac{7mg}{3} = m \times acceleration$
$\Rightarrow acceleration = -\frac{4g}{3}= -\frac{128}{3}$, taking g = 32
With the usual notation, the equation of motion is
$\ddot{x} = -\omega^2 x$
And we know that when $x = \frac{1}{3}$ feet, $\ddot{x} = -\frac{128}{3}$. So we can now work out the value of $\omega$
Now the velocity, displacement and amplitude are related by
$v^2 = \omega^2(a^2 - x^2)$
The maximum velocity is when $x=0$. So max velocity = $a\omega$. And the time it takes to reach its highest point is half the period of a complete oscillation. And that period is
$\frac{2\pi}{\omega}$
Can you take it from here?
Grandad
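Filling in the remaining arithmetic from the outline above (my numbers, assuming $g=32$ ft/s$^2$ and working in feet): from $\ddot{x}=-\omega^2 x$ with $x=\frac13$ and $\ddot{x}=-\frac{128}{3}$ we get $\omega^2=128$, i.e. $\omega=8\sqrt2$ rad/s, and the amplitude is $a=\frac13$ ft (4 inches). Hence
$$v_{\max}=a\omega=\frac{8\sqrt2}{3}\approx 3.8\ \text{ft/s},\qquad t_{\text{highest point}}=\frac12\cdot\frac{2\pi}{\omega}=\frac{\pi}{8\sqrt2}\approx 0.28\ \text{s}.$$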
3. where did you get the 7??
4. Also, is there a way of setting up a differential equation and solving it that way??
5. Simple Harmonic Motion
Hello s7b
Originally Posted by s7b
where did you get the 7??
In the equilibrium position, the extension of the spring is 3 inches. It's then extended a further 4 inches. 3 + 4 = 7.
Originally Posted by s7b
Also, is there a way of setting up a differential equation and solving it that way??
Yes, although I would expect that you would have covered all this if you have studied SHM.
But, from first principles, it goes something like this (and I'm not using particular measurements or particular units now):
Suppose that the original extension of the spring when the particle is in equilibrium is $b$. Then in this position, tension in spring = $mg$. So when the spring has a further extension $x$ below the equilibrium position, the tension, using Hooke's Law, is
$T = \frac{(b + x)mg}{b}$
So, in this position, use force = mass x acceleration:
$mg - T = m\ddot{x}$
$\Rightarrow mg - \frac{(b + x)mg}{b} = m\ddot{x}$
$\Rightarrow \frac{mgb - mgb - mgx}{b} = m\ddot{x}$
$\Rightarrow \ddot{x} = -\frac{g}{b}x$
This is the standard SHM equation, where $\omega^2 = \frac{g}{b}$. The solutions are well known - see for instance Simple Harmonic Motion -- from Wolfram MathWorld
But if you want to solve it for yourself, you could say that this is a second order differential equation, whose Auxiliary Equation is:
$m^2 + \frac{g}{b} = 0$
and solve it in the usual way to get $x$ in terms of $t$.
The solution takes the form $x = a\cos(\omega t + \alpha)$, where $a$ is the amplitude and $\alpha$, the phase angle, is determined by the initial conditions. If $x = a$ when $t = 0$ (as in our case), $\alpha = 0$, and we get
$x = a\cos(\omega t)$, where $\omega^2 = \frac{g}{b}$
The period of the oscillation is then $\frac{2\pi}{\omega} = 2\pi \sqrt{\frac{b}{g}}$
Or you could write the acceleration as $v\frac{dv}{dx}$ and then solve
$v\frac{dv}{dx} = -\frac{g}{b}x$
to get
$\frac{1}{2}v^2 = -\frac{g}{2b}x^2 + c$
Then use the fact that, at the maximum displacement, $x = a$ (the amplitude) and $v = 0$. This gives
$v^2 = \frac{g}{b}(a^2 - x^2)$
... and so on.
Grandad
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214154481887817, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/96518/how-to-justify-this-differential-manipulation-while-integrating
|
How to justify this differential manipulation while integrating?
Some time ago I had a physics test where I had the following integral: $\int y'' \ \mathrm{d}y$. The idea is that I had a differential equation, and I had acceleration (that is, $y''$) given as a function of position ($y$). The integral was actually equal to something else, but that's not the point. I needed to somehow solve that. I can't integrate acceleration with respect to position, so here's what I did:
$$\int y'' \ \mathrm{d}y = \int \frac{\mathrm{d}y'}{\mathrm{d}t} \ \mathrm{d}y = \int \mathrm{d}y' \frac{\mathrm{d}y}{\mathrm{d}t} = \int y' \ \mathrm{d}y' = \frac1{2}y'^2 + C$$
My professor said this was correct and it makes sense, but doing weird stuff with differentials and such never completely satisfies me. Is there a substitution that justifies this procedure?
-
$$\int y''dy=\int y''(t)[y'(t)dt]=\int\frac{1}{2}\left[\frac{d}{dt}(y')^2\right]dt=\frac{(y')^2}{2}+C\;\;?$$ – anon Jan 5 '12 at 1:12
$\int y{}^\prime{}^\prime dy$ does not make any sense to me. – AlexE Jan 5 '12 at 10:43
@AlexE: I don't really remember why I did that, but I know that I did. The exercise was to find an expression for velocity in terms of position for an object thrown up from the Earth, taking into account that fact that gravity changes with height: the equation was $y'' = G \frac{m_E}{(y+R_E)^2}$, where $m_E$ is Earth's mass and $R_E$ is its radius. – Javier Badia Jan 5 '12 at 13:50
$\int y'' dy$ stands for, as anon shows, $\int y''(t) y'(t) dt$. When the differential $df$ appears on its own, where $f$ is a function, it stands for $f'(t) dt$. – user18063 Jan 5 '12 at 16:19
Since this came up on a physics test, it may be helpful to lay out the physical interpretation. If $y(t)$ gives the motion of an object, then $y''$ is its acceleration, and the work done on the object is $W=\int F dy$, which by Newton's second law equals $m\int y'' dy$. Throwing in the factor of $m$, the result $(1/2)my'^2$ is the kinetic energy. The constant of integration $C$ is the object's initial energy. What is being proved is known as the work-kinetic energy theorem, that the work done by the net force acting on the object equals the change in the object's kinetic energy. – Ben Crowell Feb 5 '12 at 0:11
2 Answers
I'm doubtful about what $y''$ and $dy$ stand for in your problem. If you have $y = y(x)$ then clearly $$\int y'' dx = y'+C$$ But you're integrating with respect to $dy = d\{y(x)\} = y'(x) dx$, assuming $y(x)$ has a continuous derivative. So you finally have:
$$\int y''(x) y'(x) dx = \int y'(x) d(y'(x)) = \frac{y'^2}{2}+C$$
I'd recommend you read about the Riemann-Stieltjes integral, which would formally clarify these issues.
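For what it's worth, here is the same substitution applied to the exercise described in the comments under the question (my sketch; I take the sign to be $y''=-\frac{Gm_E}{(y+R_E)^2}$, i.e. gravity decelerating the upward motion):
$$y'\frac{\mathrm{d}y'}{\mathrm{d}y}=-\frac{Gm_E}{(y+R_E)^2}\quad\Longrightarrow\quad \tfrac12 y'^2=\frac{Gm_E}{y+R_E}+C,$$
and fixing $C$ from $y'=v_0$ at $y=0$ gives $v^2=v_0^2-2Gm_E\left(\frac{1}{R_E}-\frac{1}{y+R_E}\right)$, the desired expression for velocity in terms of position.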
-
When dealing with differentials, we generally have a "manipulate first, ask questions later" attitude. Differentials have a way of giving out correct results when manipulated formally but it's a mistake to think that something meaningful is going on. Okay, this is not totally true: there are non-traditional approaches to calculus where some of these manipulations can be rigorously justified, but I couldn't tell you much about them. The most popular of these approaches is due to H.J. Keisler. More info here.
-
Yes. The basic idea is that in Leibniz notation, $dy$ stands for an infinitesimal change in $y$, and $\int$ means a sum with infinitely many terms. This is what Leibniz meant by these notations, and this is what great mathematicians like Gauss and Euler understood them to mean. If you understand them that way, then it becomes obvious that the OP's manipulations are valid. Non-standard analysis (as presented nicely in the Keisler book) clears up concerns about logical problems with infinitesimals -- basically it shows that the concerns were unfounded. – Ben Crowell Feb 5 '12 at 0:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9697880744934082, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/13539/parallel-anti-parallel-vs-triplet-singlet-description-of-two-spins
|
# parallel/anti-parallel vs. triplet/singlet description of two spins
If we consider two spins, we can think of the spins as being either parallel (up|up or down|down) or anti-parallel (up|down or down|up).
Or we can think of them as being in the triplet or singlet configuration.
Is one description more correct than the other? Or is it just a matter of choice between two basis sets? It would seem to me that using T/S is correct because it accurately reflects the symmetry needed in the wavefunction.
-
## 2 Answers
It's just a choice of basis. Whether you use $$\bigl\{|\uparrow\downarrow\rangle,|\downarrow\uparrow\rangle\bigr\}$$ (individual spins) or $$\biggl\{\frac{1}{\sqrt{2}}\bigl(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\bigr),\frac{1}{\sqrt{2}}\bigl(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\bigr)\biggr\}$$ (triplet/singlet) they span the same space. But usually the T/S basis is more useful because those states are also eigenstates of the total spin operator $S^2$. As a side benefit, they reflect the (anti-)symmetrization requirements of identical particles; for example, if you have two identical fermions with no other quantum numbers (neglecting the fact that such particles don't exist :-P) in a bound state, you know that they have to take the singlet configuration in order for the wavefunction to be antisymmetric.
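To spell out the "eigenstates of $S^2$" point (standard bookkeeping, added here for reference): writing $\mathbf S^2=\mathbf S_1^2+\mathbf S_2^2+2\,\mathbf S_1\cdot\mathbf S_2$, the combinations above diagonalize $\mathbf S_1\cdot\mathbf S_2$ with
$$\mathbf S_1\cdot\mathbf S_2=\tfrac12\bigl[s(s+1)-\tfrac34-\tfrac34\bigr]\hbar^2=\begin{cases}+\tfrac14\hbar^2, & s=1\ \text{(triplet)}\\ -\tfrac34\hbar^2, & s=0\ \text{(singlet)},\end{cases}$$
whereas the "anti-parallel" product states $|\uparrow\downarrow\rangle$ and $|\downarrow\uparrow\rangle$ are not eigenstates of $S^2$ at all (only $|\uparrow\uparrow\rangle$ and $|\downarrow\downarrow\rangle$ are).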
-
Here is my problem then. Consider a single spin $\uparrow$ at a site. Now bring in another spin. In the first basis you write, there is a 50 % that the added spin can coexist at the site since there is a 50 % chance it will be anti-parallel (so by Pauli exclusion). In the second basis, it would be a 75 % chance they could exist on the same site since there is a 75 % chance they will form triplet state. Where am I wrong here? – BeauGeste Aug 14 '11 at 20:13
– David Zaslavsky♦ Aug 14 '11 at 20:43
The parallel/antiparallel picture is not quite correct, mostly because it hides from our intuition the fact that the electrons are indistinguishable. In the (very rare) cases when you may consider the electrons distinguishable, this picture is correct. In the more natural case when the electrons are completely identical, the triplet/singlet picture is much better.
Maybe a few more explanations are needed. If you say that the electron spins are antiparallel, what you actually say is "the spin of one electron is antiparallel to the spin of the other". But the electrons are completely identical! You cannot freely point to an electron and say that it is "the one". The singlet/triplet picture, however, stands on more solid ground: it asks for the total angular momentum of the system consisting of the two electrons. Here you do not single out either electron; you explicitly use the fact that the electrons are identical.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9243375658988953, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/98833/examples-of-acylindrical-3-manifolds/98837
|
## Examples of acylindrical 3-manifolds
Let $C$ be the compact cylinder $S^1\times [0,1]$. A 3-manifold $M$ with incompressible boundary is called acylindrical if every map $(C,\partial C)\to (M,\partial M)$ that sends the components of $\partial C$ to essential curves in $\partial M$ is homotopic rel $\partial C$ into $\partial M$.
I'm looking, for each $g\geq 2$, for examples of compact, orientable, acylindrical, hyperbolic 3-manifolds $M_g$ with non-empty, incompressible boundary such that each component of $\partial M_g$ is homeomorphic to the surface of genus $g$.
I'm sure such things should be well known to the experts.
Here's a little motivation. Such examples would be useful because, given an arbitrary hyperbolic 3-manifold $N$ with incompressible boundary, you can glue copies of the $M_g$ to the non-toroidal boundary components of $N$ and the result, by Geometrization (for Haken 3-manifolds, so you only need Thurston, not Perelman), is a hyperbolic 3-manifold of finite volume.
-
Sorry, the last paragraph is a little garbled. You may need to use several copies of $N$, too. – HW Jun 5 at 2:07
Thanks for all the great answers! I accepted Richard's only because it included very convenient references that are easy to cite. – HW Jun 9 at 18:44
## 4 Answers
The exterior of Suzuki's Brunnian graph on $n$-edges, here pictured with $n=7$, is irreducible, atoroidal, boundary incompressible, and acylindrical. See
Luisa Paoluzzi and Bruno Zimmermann. On a class of hyperbolic 3-manifolds and groups with one defining relation. Geom. Dedicata, 60(2):113–123, 1996
or
Akira Ushijima. The canonical decompositions of some family of compact orientable hyperbolic 3-manifolds with totally geodesic boundary. Geom. Dedicata, 78(1):21–47, 1999.
(I think these manifolds may be contained in Bruno's list also.)
-
You are looking for
compact 3-manifolds that admit a hyperbolic metric with geodesic boundary
equivalently
compact 3-manifolds that do not contain any essential surface with $\chi \geq 0$
with the additional requirement that every boundary component has the same genus $g$.
To construct such manifolds you may draw pictures of sufficiently knotted graphs in $S^3$ consisting of some copies of genus-$g$ graphs, and take their complements. Then you can use orb to check whether the complement has a hyperbolic structure with geodesic boundary.
An alternative construction uses ideal triangulations, extending Thurston's original "knotted y" example from his notes. Pick a bunch of tetrahedra and pair their faces so that every edge in the resulting triangulation has valence $> 6$. Then remove an open star at each vertex. Geometrization guarantees that the resulting manifold admits a hyperbolic metric with geodesic boundary (because you can put an angle structure à la Casson which excludes any normal surface with $\chi \geq 0$).
For example, you can take $g\geqslant 2$ tetrahedra and pair the faces in such a way that the resulting triangulation consists of one vertex and one edge only (which has thus valence $6g$). The resulting manifold is a hyperbolic 3-manifold with connected genus-$g$ geodesic boundary. Its hyperbolic structure is simply obtained by giving each tetrahedron the structure of a truncated regular hyperbolic tetrahedron with all dihedral angles of angle $\pi/(3g)$. Thurston's knotted y is obtained in this way for $g=2$.
The manifolds constructed in this way are "the simplest ones" among those having a connected genus-$g$ boundary, from different viewpoints: they have smallest volume (as a consequence of a result of Miyamoto) and smallest Matveev complexity: we have investigated these manifolds here. There are many such manifolds because there are many triangulations with one vertex and one edge: their number grows more than exponentially in $g$.
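A quick consistency check on the one-edge triangulations above (my remark): the single edge has valence $6g$ and each truncated regular tetrahedron contributes dihedral angle $\pi/(3g)$ along it, so the total angle around the edge is
$$6g\cdot\frac{\pi}{3g}=2\pi,$$
which is exactly the condition needed for the $g$ truncated tetrahedra to glue up to a hyperbolic structure with geodesic boundary.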
-
Miyamoto points out that one can construct such manifolds as cyclic branched covers over the single edge in the Gieseking manifold. The even order covers will be orientable. The Tripus manifold is the double branched cover. – Agol Jun 5 at 17:54
Thanks for this answer, Bruno. Could you explain quickly why an acylindrical manifold has totally geodesic boundary, or give a reference? – HW Jun 9 at 18:42
@Henry: If $M$ is acylindrical, the double $DM$ is hyperbolic with an orientation reversing involution fixing $\partial M$. By Mostow rigidity, this involution may be taken to be an isometry with fixed point set a geodesic surface homotopic to $\partial M$. If you like, there's a more detailed sketch in Leininger's "Small curvature surfaces in 3-manifolds," J. Knot Theory Ramifications 15 (2006), 379--411. There's a much different proof in McMullen's "Iteration on Teichmueller Space," Inventiones mathematicae, 99(2), 425--445. – Richard Kent Jun 9 at 20:29
I believe Bob Brooks constructs really cool examples in this paper:
MR0860677 (88b:32050) Brooks, Robert(1-UCLA) Circle packings and co-compact extensions of Kleinian groups. Invent. Math. 86 (1986), no. 3, 461–469.
The idea is that given a circle-packed hyperbolic surface (such are dense in teichmuller space, by an earlier theorem of Brooks) one can manufacture a hyperbolic manifold whose boundary consists of four copies of the surface.
-
See the proof of Theorem 19.8 in my book. I explain two constructions, one via the orbifold trick as Igor explained and the other using Myers' theorem. Myers' idea is: take a genus $g$ handlebody $H$ and take a knot $K\subset H$ which busts all essential annuli and disks in $H$. Then do a Dehn surgery on $H$ along $K$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9170305728912354, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/101120/some-help-in-digesting-a-paragraph-in-the-introduction-of-deligne-rapoports-les/101124
|
## Some help in digesting a paragraph in the introduction of Deligne/Rapoport’s “Les Schemas de Modules de Courbes Elliptique”
On page 149 (DeRa-7), in the middle of the page, I can translate the middle paragraph that starts "3. La surface de Riemann ..." as follows:
3.The Riemann surface $X/\Gamma$ is not compact. Geometrically, this fact is reflected as follows: If $E_\eta$ is an elliptic curve equipped with a level $n$ structure over $\mathbb{C}((T))$, it follows that the minimal model of $E_\eta$ over $\mathbb{C}[[T]]$ has bad reduction. In this case, the special fiber $E_0'$ of the Neron model $E'$ of $E_\eta$ over $\mathbb{C}[[T]]$ is isomorphic to $\mathbb{C}^*\times\mathbb{Z}/kn\mathbb{Z}$ for some suitable $k$. Let $E_0$ be the subgroup of $E_0'$ consisting of the components of $E_0'$ which have order dividing $n$ in $\pi_0(E_0')$. This subgroup is isomorphic to $\mathbb{C}^*\times\mathbb{Z}/n\mathbb{Z}$...
Assuming I've translated it correctly, I've got three questions:
1. The first sentence seems to be saying that any elliptic curve over $\mathbb{C}((T))$ has bad reduction, which (unless the minimal model isn't what I think it is) is obviously not true, since any elliptic curve over $\mathbb{C}$ is also an elliptic curve over $\mathbb{C}((T))$. Where am I going wrong here?
2. Earlier the author defined a level $n$ structure as just an isomorphism from the $n$-torsion of the elliptic curve to $(\mathbb{Z}/n\mathbb{Z})^2$. Thus, any such elliptic curve $E_\eta$ has level $n$ structure for every $n$, which would seem to imply that the special fiber of the Neron model is isomorphic to $\mathbb{C}^*\times\mathbb{Z}/kn\mathbb{Z}$, for every $n$, which obviously makes no sense. (I thought the Neron model depends only on the base ring and the elliptic curve?)
3. I'd appreciate a quick description of what $\pi_0$ of a scheme is (or a good reference for learning about it), and specifically how it's a group (I understand it should intuitively represent the connected components). I've just this week learned about the etale fundamental group $\pi_1$, though I'm not entirely sure how the definition generalizes to higher/lower fundamental groups.
(Also, is there a good translation of this somewhere?)
thanks
• will
-
1. "il arrive que" means "sometimes" not "it follows that". 2. The isomorphism needs to be compatible with the action of Galois (with Galois acting trivially on $(\mathbf{Z}/n)^2$.) In other words, for a level $n$ structure to exist, all the $n$ torsion needs to be defined over $\mathbf{C}((T))$, which needn't be the case. – gb Jul 2 at 5:47
Buy Online Access to this Chapter Individual Book Chapter (Electronic Only) EUR 24.95 – Chandan Singh Dalawat Jul 3 at 5:42
Professor Dalawat - Could you please post a link to where I can find a translation of this paper? thanks – Will Chen Jul 14 at 9:50
## 1 Answer
1. "Il arrive que..." means "sometimes". So the paragraph says that sometimes the minimal model of $E_\eta$ over $\mathbf{C}[[t]]$ has bad reduction, which is true.
2. You're starting with an elliptic curve over $\mathbf{C}((t))$, not over $\mathbf{C}$. There's no reason that $E_\eta$ admits a level $n$ structure over $\mathbf{C}((t))$ (as opposed to over some finite extension). Geometrically what's going on (morally, in complex analytic geometry) is that you have a family of elliptic curves over the punctured complex unit disk, and you want to choose a level $n$ structure on the elliptic curve at each point so that the level structures vary nicely. You can do this locally, but not globally (in general). The algebro-geometric analogue of this fact is that for any fixed $n$, $E_\eta$ admits a level $n$ structure after some finite extension of the base (= passing to some cover of the punctured unit disk) but not necessarily over $\mathbf{C}((t))$ itself. You can see this concretely and algebraically by thinking about Weierstrass equations for an elliptic curve over $\mathbf{C}((t))$, thinking about the equations defining the coordinates of the $n$-torsion points, and seeing that there's no reason for those equations to be solvable if the ground field is not algebraically closed. (A concrete example is sketched after this list.)
3. $\pi_0(E_0')$ refers to the group of connected components of $E_0'$. Normally $\pi_0$ is just a set (literally the set of connected components, not just intuitively), but since $E_0'$ has a group structure, you can add two connected components by picking a point on each one, adding the points, and taking the component this sum lands on (you should check that this is well-defined).
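A concrete example for point 2 (mine, with the obvious caveats): take $E: y^2=x^3-t$ over $\mathbf{C}((t))$. Its nontrivial $2$-torsion points are $(\zeta_3^i t^{1/3},0)$ for $i=0,1,2$, and $t$ has no cube root in $\mathbf{C}((t))$, so $E$ admits no level $2$ structure over $\mathbf{C}((t))$ itself; but over the degree $3$ extension $\mathbf{C}((s))$ with $s^3=t$ one has
$$E[2](\mathbf{C}((s)))=\{O,\ (s,0),\ (\zeta_3 s,0),\ (\zeta_3^2 s,0)\}\cong(\mathbf{Z}/2)^2,$$
so a level $2$ structure exists after this finite base extension.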
Finally, you should note that this paragraph doesn't really make sense, except as motivation: $X/\Gamma$ is a complex analytic object, whereas the rest of the paragraph takes place in the category of schemes.
-
So, to clarify, $E_\eta$ admits a level $n$ structure over a field $K$ if and only if the full $n$-torsion subgroup is $K$-rational? Re: gb's comment - I don't understand what the isomorphism being compatible with Galois means when the image is just a group, not a scheme. ...or are we thinking of $(\mathbb{Z}/n\mathbb{Z})^2$ as a group scheme? If so, how exactly are we doing this? – Will Chen Jul 2 at 9:38
Yes, we're thinking of $(\mathbf{Z}/n\mathbf{Z})^2$ as a group scheme. Over any base scheme $S$, the scheme structure on $(\mathbf{Z}/n\mathbf{Z})_S^2$ is just $n^2$ disjoint copies of $S$. – Rebecca Bellovin Jul 2 at 19:40
Ahh, interesting. Thanks! – Will Chen Jul 3 at 2:00
@Rebecca Sorry for reviving this old post, but I was just thinking about this, and Deligne remarks in the next paragraph that "this discussion suggests that the points at infinity of $\mathcal{H}^*/\Gamma$ correspond to these subgroups $E_0$ with level structure." Why would the discussion suggest this? What do elliptic curves over $\mathbb{C}((T))$ have to do with the point at infinity? And what does the Neron model have to do with anything? Is he saying that the elliptic curve over the un-compactified modular curve always has bad reduction at the cusps? – Will Chen Aug 15 at 1:20
http://polymathprojects.org/2012/06/15/polymath7-research-threads-2-the-hot-spots-conjecture/
# The polymath blog
## June 15, 2012
### Polymath7 research threads 2: the Hot Spots Conjecture
Filed under: hot spots,research — Terence Tao @ 9:48 pm
The previous research thread for the Polymath7 “Hot Spots Conjecture” project has once again become quite full, so it is again time to roll it over and summarise the progress so far.
Firstly, we can update the map of parameter space from the previous thread to incorporate all the recent progress (including some that has not quite yet been completed):
This map reflects the following new progress:
1. We now have (or will soon have) a rigorous proof of the simplicity of the second Neumann eigenvalue for all non-equilateral acute triangles (this is the dotted region), thus finishing off the first part of the hot spots conjecture in all cases. The main idea here is to combine upper bounds on the second eigenvalue $\lambda_2$ (obtained by carefully choosing trial functions for the Rayleigh quotient), with lower bounds on the sum $\lambda_2+\lambda_3$ of the second and third eigenvalues, obtained by using a variety of lower bounds coming from reference triangles such as the equilateral or isosceles right triangle. This writeup contains a treatment of those triangles close to the equilateral triangle, and it is expected that the other cases can be handled similarly.
2. For super-equilateral triangles (the yellow edges) it is now known that the extreme points of the second eigenfunction occur at the vertices of the base, by cutting the triangle in half to obtain a mixed Dirichlet-Neumann eigenvalue problem, and then using the synchronous Brownian motion coupling method of Banuelos and Burdzy to show that certain monotonicity properties of solutions to the heat equation are preserved. This fact can also be established via a vector-valued maximum principle. Details are on the wiki.
3. Using stability of eigenfunctions and eigenvalues with respect to small perturbations (at least when there is a spectral gap), one can extend the known results for right-angled and non-equilateral triangles to small perturbations of these triangles (the orange region). For instance, the stability results of Banuelos and Pang already give control of perturbed eigenfunctions in the uniform norm; since for right-angled triangles and non-equilateral triangles, the extrema only occur at vertices, and from Bessel expansion and uniform C^2 bounds we know that for any perturbed eigenfunction, the vertices will still be local extrema at least (with a uniform lower bound on the region in which they are extremisers), we conclude that the global extrema will still only occur at vertices for perturbations.
4. Some variant of this argument should also work for perturbations of the equilateral triangle (the dark blue region). The idea here is that the second eigenfunction of a perturbed equilateral triangle should still be close (in, say, the uniform norm) to some second eigenfunction of the equilateral triangle. Understanding the behaviour of eigenfunctions of nearly equilateral triangles more precisely seems to be a useful short-term goal to pursue next in this project.
But there is also some progress that is not easily representable on the above map. It appears that the nodal line $\{u=0\}$ of the second eigenfunction $u$ may play a key role. By using reflection arguments and known comparison inequalities between Dirichlet and Neumann eigenvalues, it was shown that the nodal line cannot hit the same edge twice, and thus must straddle two distinct edges (or a vertex and an opposing edge). (The argument is sketched on the wiki.) If we can show some convexity of the nodal line, this should imply that the vertex straddled by the nodal line is a global extremum by the coupled Brownian motion arguments, and the only extremum on this side of the nodal line, leaving only the other side of the nodal line (with two vertices rather than one) to consider.
We’re now also getting some numerical data on eigenvalues, eigenfunctions, and the spectral gap. The spectral gap looks reasonably large once one is away from the degenerate triangles and the equilateral triangle, which bodes well for an attempt to resolve the conjecture for acute angled triangles by rigorous numerics and perturbation theory. The eigenfunctions also look reassuringly monotone in various directions, which suggests perhaps some conjectures to make in this regard (e.g. are eigenfunctions always monotone along the direction parallel to the longest side?).
This isn’t a complete summary of the discussion thus far – other participants are encouraged to summarise anything else that happened that bears repeating here.
## 96 Comments »
1. [...] an active discussion in the last week or so, with almost 200 comments across two threads (and a third thread freshly opened up just now). While the problem is still not completely solved, I feel optimistic that it should [...]
Pingback by — June 15, 2012 @ 10:22 pm
2. Thanks, this summary is very helpful!
I’ve been pondering two suggestions on the previous thread, which concerned some sort of perturbation argument and the heat kernels. This got me to thinking about the Hankel function of the first kind, which is the fundamental solution
of the Helmholtz equation with the outgoing wave condition at infinity.
At least from the numerical data, it seems the second Neumann eigenvalue $\lambda_{\epsilon}$ varies smoothly with perturbations $\epsilon$ in angle.
At an eigenvalue of the interior Neumann Laplacian, we can think of the eigenfunction as solving the Helmholtz equation $(\Delta + \lambda_\epsilon) u=0$ with zero Neumann data. One could presumably write this $u$ in terms of a layer potential using the fundamental solution of the Helmholtz equation with wave number $\sqrt{\lambda_{\epsilon}}$, or some clever combination of the Calderon operators. I think the regularity of such operators on Lipschitz domains is documented, eg. in the books by Nedelec or Hsiao/Wendland or MacLean. I don’t know if the behaviour of these integral operators as $\epsilon$ varies is also characterized, but it most likely is.
Using these integral operators, one can reformulate the boundary value problem in terms of the solution of an integral equation, $\mathcal{T}_\epsilon \phi=F$, where $\mathcal{T}_\epsilon$ is the associated boundary integral operator. Suppose $\mathcal{V}_\epsilon \phi$ is then the desired Neumann Laplace eigenfunction.
The boundary trace of $u$ will be given in terms of this density, and we know this trace is at least continuous. Can one say something about the composition of the trace operator with $\mathcal{V}_\epsilon \circ \mathcal{T}_\epsilon^{-1}$, as epsilon varies?
Is there a clever reformulation of the Hot Spot conjecture in terms of these layer densities?
Comment by — June 15, 2012 @ 10:52 pm
3. Some numerically computed 2nd Neumann eigenfunctions are in this public directory. Titles on the graphs give two of the angles (scaled by pi).
http://www.math.sfu.ca/~nigam/polymath-figures/EigenFunctions/
Comment by — June 15, 2012 @ 11:22 pm
• These are nice. Is it correct that alpha corresponds to the lower left corner and beta corresponds to the lower right corner? And I notice the triangles vary in shape from picture to picture… are the triangles in the figure actually drawn such that they have the correct angles? If so that definitely gives the full picture!
Also, maybe these pictures can shed light on the conjecture "are eigenfunctions always monotone along the direction parallel to the longest side?" proposed in the summary. It is a bit hard for me to make out exactly, but I think in triangles of the type "one hot two cold corners" (well often there is only one globally coldest corner and the other is only locally coldest) the level curves of $u$ roughly form concentric circles at each of the vertices.
If this is the case I believe such triangles disprove the conjecture: If the conjecture were true then, letting $\xi$ be a vector pointing in the direction of the longest side, you could always head in/against the direction $\xi$ to achieve ever hotter/colder values. I think this precludes three local extrema at the corners.
(I am not explaining this very well but basically I have in mind the sort of argument used in Linear Programming to prove that if you have a linear function and you want to maximize it over a convex polygon the maximum occurs at a vertex of the polygon. The argument goes that you head in the direction of the gradient of the function and eventually you ram against the wall, keep going, and end up in a corner.)
Comment by — June 16, 2012 @ 8:25 am
• Hi Chris,
– I’ve put low-resolution thumbnails of these runs on one page for easier viewing, you should be able to click on any to get the detailed plot (again somewhat low-resolution, .png seems to compress awkwardly)
http://people.math.sfu.ca/~nigam/polymath-figures/eigenpics.html
- Each triangle here corresponds to a different choice of parameter angles.
- On each plot, I’ve marked the location of the max(|u|) by a green star. You’ll notice there is only one such point in each triangle plotted, and it occurs on the vertex.
- Yes. If the nodal line doesn't go through the vertex, the level curves of $u$, if you zoom in enough to the corners, are nearly concentric circles. This goes all the way back to some comment in an older thread: locally near a corner, the eigenfunction looks like that of a wedge, i.e., like a $P_n(r,\theta)$. Near a corner I think it's a combination of Bessel functions in the distance from the corner, and the angular variable.
- I just noticed the titles on my figures (generated by outputting some run-time data) are not scaled correctly. I can re-run the code to change the titles.
Comment by — June 16, 2012 @ 2:37 pm
• even more at http://people.math.sfu.ca/~nigam/polymath-figures/eigen2.html
Comment by — June 16, 2012 @ 4:31 pm
• Ah, you’re right, the conjecture is false. Around any acute vertex, if the eigenfunction takes the value $c$ at that vertex, then the eigenfunction has the asymptotic expansion $c J_0(\sqrt{\lambda} r) + o(r^2) = c - cr^2/4 + o(r^2)$ where r is the distance to that vertex, so the level sets locally are indeed close to circles. (This was concealed in the super-equilateral isosceles case because the non-extreme vertex had $c=0$.) That’s a pity – now I don’t know if there are any plausible monotonicity conjectures one can make that would be valid for all acute triangles.
Comment by — June 16, 2012 @ 5:15 pm
4. Hi Terry,
I discussed with Chris this afternoon and I’m still worried about the PDE proof of Corollary 4. Sorry that I have come back to this point many times.
The point again is about the reflection part. It is too good to be true. The solution of the heat equation is OK when you do even reflection. However, the gradient of the new solution in the reflected region may not stay in the convex sector ${\rm S}$ anymore.
Let me give a clear example. Assume that we have $u=u(x,y):\mathbb R \times [0,\infty) \to \mathbb R$ with $\dfrac{\partial u}{\partial y}(x,0)=0$. Let
$w$ be the even reflection of $u$, i.e. $w(x,y)=u(x,y),\ y\ge 0$ and $w(x,y)=u(x,-y),\ y <0$. Then $\nabla w(x,y)=(u_x,-u_y)(x,-y),\ y<0$. So $\nabla w(x,y)$ may not be in ${\rm S}$ for $y<0$ anymore.
Comment by — June 16, 2012 @ 12:41 am
Yes, the gradient $\nabla u$ may leave S in the reflected region, but this is not relevant to the argument, as one only needs the gradient to lie in S (or in $S_{\varepsilon(t+1)}$) in the original domain. (In particular, when one discusses things like "the first time the gradient touches the boundary of $S_{\varepsilon(t+1)}$", it is understood that one is working in the original domain.)
Comment by — June 16, 2012 @ 2:59 am
Then how could you conclude that $\Delta \nabla w(x,0,t)$ points inward or tangentially where $(x,0,t)$ is the first point at which $\nabla u$ touches the boundary of $S_{(1+\epsilon)t}$? Since now, in the neighborhood of $(x,0,t)$, $\nabla w$ takes values not only in $S$ or $S_{(1+\epsilon)t}$, but also in their reflections.
Comment by — June 16, 2012 @ 3:38 am
• Ah, right. If one is working on a boundary point $(x,0,t)$, then as you say, in a neighbourhood of that point, $\nabla w$ will live either in $S_{\varepsilon(t+1)}$ or its reflection, but both of these sets (assuming for sake of concreteness that S is oriented in the first quadrant, with one side on the x axis) lie to the right of where $\nabla w(x,0,t)$ will be, which in this case is $(-\varepsilon(t+1),0)$. So $\Delta \nabla w(x,0,t)$ will have a non-negative first component and so will point inwards of $S_{\varepsilon(t+1)}$ (or of its reflection).
Alternatively, one can argue entirely inside the original domain without reflection using directional derivatives (and the Neumann boundary condition) to get the inward/tangential nature of the Laplacian.
Comment by — June 16, 2012 @ 4:05 am
Thank you. I will double-check this carefully. Just a quick comment that we want to have $\Delta \nabla w(x,0,t)$ point inwards or tangentially to $S_{\epsilon(t+1)}$ (NOT its reflection) in order to obtain the contradiction. So in this example, we need both of the components to be non-negative.
Anyway, I’ll move on to consider other directions.
Comment by — June 16, 2012 @ 5:22 am
• At the point $\nabla w(x,0,t)$, the outward normal to $S_{\varepsilon(t+1)}$ is parallel to the boundary, so that $S_{\varepsilon(t+1)}$ and its reflection have the same normal (and thus the same notion of “inwards” and “tangentially”).
Comment by — June 16, 2012 @ 5:48 am
• This is an important point. In fact as far as I can see it is the only place in the proof where it is used that the axes of S are oriented exactly along the directions of DB and AB.
I tried reproducing the argument for a triangle with Neumann conditions on the three edges. Setting $S$ to be the sector corresponding to the exterior of one of the angles in a triangle, I tried to check whether, if the initial gradients are all in $S$, they would stay in $S$. Note that this would imply some monotonicity along the three edges.
The same argument goes through as long as these two edges form an obtuse or right angle. It is exactly the same proof, but it fails for acute angles.
The way it fails for acute angles is funny. It is exactly because the prolongations of the edges of $S$ do not intersect the boundary of $S_{\varepsilon(t+1)}$ perpendicularly. It is necessary that these two intersect perpendicularly to make Terry's reflection argument work.
I guess that in light of thread 3 we should not expect a monotonicity result like this one to hold for acute triangles.
Comment by — June 18, 2012 @ 9:12 pm
5. I’ve been reading the latest summary, and I’m not sure what point 3. above and the orange region say about the hot spots conjecture for almost 30-60-90 triangles, in a neighbourhood in parameter space of point $C$ above .
I followed the link in the summary to (one of) the Banuelos and Pang papers. The abstract of the 2008 paper I glanced at mentioned a “snowflake type boundary” ; so, I can’t say I understand how the Banuelos and Pang paper from 2008 relates to the stability of the second eigenvalue & eigenfunction for the Neumann Laplacian in the orange region of the parameter space. Thanks for another illuminating summary.
Comment by — June 16, 2012 @ 2:03 am
Theorem 1.3 of the Banuelos-Pang paper gives continuity of the second eigenvalue, and Theorem 1.5 gives stability of the second eigenfunction, under the assumption of Gaussian bounds on heat kernels (which are certainly true for acute angled triangles) together with simplicity of the second eigenvalue. (Banuelos-Pang developed these results with the intention of applying them to snowflake domains, but the results are more general than this.)
Comment by — June 16, 2012 @ 3:01 am
• Ok. Many thanks. That clears up my questions about the Banuelos-Pang paper.
David Bernier
Comment by — June 16, 2012 @ 3:17 am
6. Here is a basic way to get stability of the second eigenfunction. Consider an acute triangle $\Omega$ with second eigenvalue $\lambda_2$ and third eigenvalue $\lambda_3 > \lambda_2$, thus the Rayleigh quotient $\int_\Omega |\nabla u|^2 / \int_\Omega |u|^2$ for mean zero $u$ is lower bounded by $\lambda_2$, with equality when $u$ is a scalar multiple of the second eigenfunction $u_2$, and lower bounded by $\lambda_3$ when $u$ is orthogonal to $u_2$.
Now consider a perturbed triangle $\Omega' = B \Omega$ for some linear transformation $B = 1+O(\varepsilon)$ close to the identity. The second eigenvalue of the perturbed triangle is the minimiser of the perturbed Rayleigh quotient $\int_\Omega |\nabla u|^2 + \varepsilon \nabla u \cdot A \nabla u / \int_\Omega |u|^2$, where $A = O(1)$ is the self-adjoint matrix such that $B^{-1} (B^{-1})^T = 1 + \varepsilon A$.
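As a tiny numerical illustration of this change of variables (a sketch assuming numpy; the matrix $B$ below is just a sample perturbation, not one tied to any particular triangle), one can check that $A$ is self-adjoint and of size $O(1)$:

```python
# Tiny illustration (assuming numpy): for a sample B = 1 + O(eps), compute the
# self-adjoint matrix A with B^{-1} (B^{-1})^T = 1 + eps*A.
import numpy as np

eps = 0.05
B = np.array([[1.0 + eps, 0.3 * eps],
              [0.0,       1.0 - 0.5 * eps]])   # sample perturbation of the identity
Binv = np.linalg.inv(B)
A = (Binv @ Binv.T - np.eye(2)) / eps

print(A)                      # entries of size O(1)
print(np.allclose(A, A.T))    # True: A is self-adjoint
```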
Consider the second eigenfunction $u'_2$ of the perturbed problem (viewed on the reference triangle $\Omega$ rather than on $\Omega'$), normalised to have L^2 norm 1. We can split it as $u'_2 = \alpha u_2 + \beta v$, where $v$ has unit norm and is orthogonal to $u_2$ (and also mean zero), and $\alpha,\beta$ are scalars with $\alpha^2+\beta^2=1$. We will show that $\beta=O(\frac{\varepsilon \lambda^{3/2}_2}{\lambda_3^{1/2} (\lambda_3-\lambda_2)})$, which shows that $u'_2$ is within $O(\varepsilon)$ in $L^2$ norm of (a scalar multiple of) the original eigenfunction $u_2$ when there is a large spectral gap.
Since $u'_2$ minimises the perturbed Rayleigh quotient, one can compare it against the original eigenfunction $u_2$ to obtain the inequality
$\int_\Omega |\nabla u'_2|^2 + \varepsilon \nabla u'_2 \cdot A \nabla u'_2 \leq \int_\Omega |\nabla u_2|^2 + \varepsilon \nabla u_2 \cdot A \nabla u_2$.
Expanding out $u'_2 = \alpha u_2 + \beta v$, noting from integration by parts that $\nabla u_2$ is orthogonal to $\nabla v$, and using $\alpha^2 = 1 - \beta^2$, then after some algebra one ends up at
$\beta^2 \int_\Omega |\nabla v|^2 + 2 \varepsilon \alpha \beta \int_\Omega \nabla u_2 \cdot A\nabla v + \varepsilon \beta^2 \int_\Omega \nabla v \cdot A \nabla v \leq \beta^2 \lambda_2 + \beta^2 \varepsilon \int_\Omega \nabla u_2 \cdot A \nabla u_2.$
After some Cauchy-Schwarz (bounding $\alpha$ by 1 and recalling that $\|\nabla u_2\|^2 = \lambda_2$) this becomes
$\beta^2 \| \nabla v \|_2^2 + O( \varepsilon \beta \lambda_2^{1/2} \|\nabla v \|_2 ) + O( \varepsilon \beta^2 \|\nabla v \|_2^2 ) \leq \beta^2 \lambda_2 + O( \beta^2 \varepsilon \lambda_2 )$
Now, one can bound the $\lambda_2$ factors on the RHS by $\frac{\lambda_2}{\lambda_3} \|\nabla v \|_2^2$. If one does this and rearranges for a while, one eventually ends up with the bound
$\beta \|\nabla v\|_2 = O( \frac{\varepsilon \lambda_2^{3/2}}{\lambda_3-\lambda_2} )$
(assuming $\varepsilon$ small enough); since $\|\nabla v \|_2 \geq \lambda_3^{1/2}$, this gives the bound on $\beta$.
This type of perturbation analysis measures the perturbation in L^2 and in H^1 with quite explicit bounds. What one really wants though is uniform (or even better, C^1 or C^2) bounds, as this gives enough control to start tracking where the extrema go, but I think one may be able to get that from the L^2 and H^1 bounds by using the Bessel function expansion formalism (but the bounds get worse when doing so, unfortunately). Still, one may be able to work everything out explicitly for, say, perturbations of the 30-60-90 triangle, so that we can get an explicit neighbourhood of that triangle for which one can establish the conjecture.
Comment by — June 16, 2012 @ 3:53 am
One thing I realized about the above analysis is that it also works when there is multiplicity of the second eigenvalue, and in particular when $\Omega$ is the equilateral triangle. In this case, $u_2$ becomes the projection of $u'_2$ to the second eigenspace (normalized to have norm one). So this should imply that for a perturbed equilateral triangle, the second eigenfunction is close to some second eigenfunction of the equilateral triangle in H^1 norm at least. Integration by parts also gives uniform H^3 bounds on this eigenfunction, so some interpolation (Gagliardo-Nirenberg type inequalities) should show that the second eigenfunction of the perturbed triangle is close in C^0 (and probably also C^1) to a second eigenfunction of the equilateral triangle. I think I can show that for all second eigenfunctions of the equilateral triangle, the extrema only occur at the vertices (and the eigenfunction is uniformly bounded away from its extreme values once one is a bounded distance away from the vertices), so this should indeed establish the claim stated previously that the extrema can only occur at the vertices for perturbed equilateral triangles.
Comment by — June 16, 2012 @ 5:01 pm
• This would be great, and I think one can get some numerical evidence for the desired uniform bounds. I can compute the L^2-projection of an eigenfunction onto the eigenfunction of a nearby triangle. In other words, I can compute $\beta v$ in your notation, and try to get a feel for the sup-norm bounds on derivatives, for perturbed eigenfunctions in the neighbourhood of a few different ‘reference’ triangles. I’ll fix one side of the triangles to be 1.
I had a quick question. In your argument, the asymptotic constant in the bound for $\beta$ presumably depends on the measure of the original acute triangle, $\Omega.$ Let’s call it $C_\Omega$
In your series of calculations above, do you recall whether the constant scaled as the area or the square root of the area? Once again, I ask because this will help me decide how small to choose the neighbourhoods of the reference triangles- we don’t want $C_\Omega \epsilon$ to get big.
[Fixed latex. Incidentally, there was an error in the latex instructions on this blog (now fixed) - one needs to enclose latex commands in "\$latex" and "\$" rather than "<em>" and "</em>". -T.]
Comment by — June 16, 2012 @ 7:23 pm
• The implied constants in the above analysis are actually dimensionless – they don’t depend on the area of $\Omega$. Instead, they depend on the operator norm of the perturbing matrix $A$.
Comment by — June 16, 2012 @ 7:29 pm
• So this constant is uniform for all acute triangles? In other words, if I perturb *any* triangle $\Omega$ by an affine map which is close to the identity, the same bound $\beta(\int_\Omega \|\nabla u\|^2 )^\frac{1}{2} = O(\frac{\epsilon\lambda_{2,\Omega}^{3/2}}{\lambda_{3,\Omega}-\lambda_{2,\Omega}}) \approx C(\frac{\epsilon\lambda_{2,\Omega}^{3/2}}{\lambda_{3,\Omega}-\lambda_{2,\Omega}}) + \,\,l.o.t.$ should hold? This would be good news for purposes of coding.
Comment by — June 16, 2012 @ 9:22 pm
• I wrote a more careful computation at http://michaelnielsen.org/polymath1/index.php?title=Stability_of_eigenfunctions . The upshot is that if one takes a perturbation $B\Omega$ of the triangle $\Omega$, and considers a second eigenfunction of $B\Omega$ which (after rescaling back to $\Omega$) takes the form $u_2+v$ for some second eigenfunction $u_2$ of $\Omega$ of norm 1, and some v orthogonal to u_2, then
$\displaystyle \| \nabla v \|_2^2 \leq \frac{(\kappa^2-1) \lambda_{2,\Omega}^{1/2}}{1 - \kappa^2 \frac{\lambda_{2,\Omega}}{\lambda_{3,\Omega}}}$
provided that the denominator is positive, where $\kappa = \|B\|_{op} \|B^{-1}\|_{op}$ is the condition number of $B$, and $\lambda_{3,\Omega}$ is the first Neumann eigenvalue that is strictly greater than $\lambda_{2,\Omega}$. (This is slightly different from what I said before; the factor of $\lambda_2^{3/2}$ in my previous post should have instead been a $\lambda_2^{1/2} \lambda_3$.)
Comment by — June 17, 2012 @ 12:02 am
• Thank you, and also for the detailed notes on the wiki. This is a relief. The previous bound did not appear to hold in some numerics I ran.
Comment by — June 17, 2012 @ 3:02 am
• For what it’s worth, here is Terry’s argument, with calculations done in terms of functions pulled back to an isoceles right-angled triangle. http://people.math.sfu.ca/~nigam/polymath-figures/Perturbation.pdf
Comment by — June 19, 2012 @ 11:20 pm
Here is a Mathematica notebook with calculations for the proof of simplicity. This is not the full proof. It reduces the problem to solving a set of linear inequalities. There are 2 cases. I am also attaching a plot with acute triangles as the yellow area and red/blue as the two cases. Clearly everything is covered. Now I just need to find the cleanest way to handle those inequalities.
http://pages.uoregon.edu/siudeja/TrigInt.m (this one is needed to run the notebook)
http://pages.uoregon.edu/siudeja/simple.nb (first line needs to be edited)
http://pages.uoregon.edu/siudeja/simple_plot.pdf
The first case uses 3 reference triangles and rotated symmetric eigenfunction of equilateral for upper bound. The second case uses 2 reference triangles and Cheng’s bound.
Comment by Bartlomiej Siudeja — June 16, 2012 @ 6:50 pm
8. I’ve been thinking about how one would try to use rigorous numerics to verify the hot spots conjecture for acute-angled triangles. In previous discussion, we proposed covering the parameter space with a mesh of reference triangles, getting rigorous bounds on the eigenfunctions and eigenvalues of these reference triangles, and then using stability analysis to then cover the remainder of parameter space. This would be tricky though, in part because we would need rigorous bounds for the finite element schemes for each of the reference triangles.
But another possibility is to only use for the reference triangles the triangles for which explicit formulae for the eigenfunctions and eigenvalues are available, namely the equilateral triangles, the isosceles right-angled triangles, the 30-60-90 triangles, and the thin sector (a proxy for the infinitely thin isosceles triangle). In other words, to use the red dots in the above diagram as references.
The trouble is that if one does a naive perturbation analysis, one could only hope to verify the hot spots conjecture in small neighborhoods of each red dot, which would not be enough to cover the entire parameter space. But I think, in principle at least, that one has a way to amplify the perturbation analysis to get much larger neighborhoods of each reference triangle, by using not just the lowest eigenvalue and eigenspace of the reference triangle, but the first k eigenvalues and first k eigenspaces for some medium-sized k (e.g. k = 100). This would necessitate the use of 100 x 100 linear algebra (e.g. minimizing a quadratic form on 100 variables) but this seems well within the capability of rigorous numerics. As long as the strength of one’s perturbation analysis grows at a reasonable rate with k, this may well be enough to rigorously cover the entire parameter space.
For instance, consider perturbing off of the isosceles right triangle $\Omega$ with vertices (0,0), (0,1), (1,1). The eigenfunctions here are quite explicit (because one can reflect the triangle indefinitely to fill out the plane periodically with periods (0,2), (2,0), and then use Fourier series): every pair of integers (k,l) gives rise to an eigenfunction
$\cos(\pi(kx+ly)) + \cos(\pi(kx-ly)) + \cos(\pi(lx+ky)) + \cos(\pi(lx-ky))$
with eigenvalue $\pi^2 (k^2+l^2)$. Thus, for instance, the second eigenfunction is (up to constants) $\cos(\pi x)+\cos(\pi y)$ with eigenvalue $\pi^2$, the next eigenfunction is $\cos(\pi x) \cos(\pi y)$ with eigenvalue $2 \pi^2$, the next eigenfunction is $\cos(2\pi x ) + \cos(2\pi y)$ with eigenvalue $4\pi^2$, and so forth (the next eigenvalue, for instance, is $5\pi^2$).
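As a quick sanity check of these formulas, here is a short symbolic verification (a sketch assuming sympy; the frequencies $k, l$ below are just sample values):

```python
# Sanity check (assuming sympy): the functions above satisfy -Delta u = pi^2 (k^2+l^2) u
# and the Neumann condition on the three edges x=0, y=1, y=x of the triangle
# with vertices (0,0), (0,1), (1,1).
import sympy as sp

x, y = sp.symbols('x y', real=True)
k, l = 1, 2                      # sample integer frequencies; any pair works

u = (sp.cos(sp.pi*(k*x + l*y)) + sp.cos(sp.pi*(k*x - l*y))
     + sp.cos(sp.pi*(l*x + k*y)) + sp.cos(sp.pi*(l*x - k*y)))
lam = sp.pi**2 * (k**2 + l**2)

def is_zero(e):
    return sp.simplify(sp.expand_trig(sp.expand(e))) == 0

print(is_zero(-sp.diff(u, x, 2) - sp.diff(u, y, 2) - lam*u))    # eigenfunction equation
print(is_zero(sp.diff(u, x).subs(x, 0)))                        # Neumann on x = 0
print(is_zero(sp.diff(u, y).subs(y, 1)))                        # Neumann on y = 1
print(is_zero((sp.diff(u, x) - sp.diff(u, y)).subs(y, x)))      # Neumann on y = x
```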
As discussed in the wiki, if one wants to find the second eigenfunction for a perturbed triangle $B\Omega$, this amounts to minimizing the quadratic form
$Q(u,u) := \int_\Omega M \nabla u \cdot \nabla u$
amongst mean zero functions of norm 1, where $M := (B^{-1}) (B^{-1})^T$. This is a quadratic minimization problem on an infinite-dimensional Hilbert space. However, one can restrict u to a finite-dimensional subspace, namely the space generated by the first k eigenfunctions of the reference triangle. For instance, one could consider u which are a linear combination of $\cos(\pi x) + \cos(\pi y)$, $\cos(\pi x) \cos(\pi y)$, and $\cos(2\pi x) + \cos(2\pi y)$. This minimization problem could be computed with rigorous numerics without difficulty.
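For instance, a rough (and certainly non-rigorous) numerical sketch of this finite-dimensional minimisation, assuming numpy/scipy and using naive grid quadrature over the reference triangle, might look like the following; the perturbation matrix $M$ and the four modes used are just sample choices:

```python
# Rough, non-rigorous sketch (assuming numpy/scipy): minimise the perturbed quadratic
# form over the span of a few reference eigenfunctions of the right isosceles triangle
# with vertices (0,0), (0,1), (1,1), using naive grid quadrature.
import numpy as np
from scipy.linalg import eigh

h = 1.0 / 400
xg, yg = np.meshgrid(np.arange(h / 2, 1, h), np.arange(h / 2, 1, h), indexing='ij')
mask = xg <= yg                      # the reference triangle is {0 <= x <= y <= 1}
X, Y = xg[mask], yg[mask]
dA = h * h

def mode(k, l):
    """Reference eigenfunction for the pair (k,l), together with its gradient."""
    u = (np.cos(np.pi*(k*X + l*Y)) + np.cos(np.pi*(k*X - l*Y))
         + np.cos(np.pi*(l*X + k*Y)) + np.cos(np.pi*(l*X - k*Y)))
    ux = -np.pi*(k*np.sin(np.pi*(k*X + l*Y)) + k*np.sin(np.pi*(k*X - l*Y))
                 + l*np.sin(np.pi*(l*X + k*Y)) + l*np.sin(np.pi*(l*X - k*Y)))
    uy = -np.pi*(l*np.sin(np.pi*(k*X + l*Y)) - l*np.sin(np.pi*(k*X - l*Y))
                 + k*np.sin(np.pi*(l*X + k*Y)) - k*np.sin(np.pi*(l*X - k*Y)))
    return u, ux, uy

funcs = [mode(k, l) for (k, l) in [(1, 0), (1, 1), (2, 0), (2, 1)]]   # first few modes

eps = 0.05
M = np.array([[1.0 + eps, 0.0], [0.0, 1.0]])    # sample M = B^{-1}(B^{-1})^T

m = len(funcs)
Q = np.zeros((m, m))                            # perturbed Dirichlet form
G = np.zeros((m, m))                            # L^2 Gram matrix
for i, (ui, uxi, uyi) in enumerate(funcs):
    for j, (uj, uxj, uyj) in enumerate(funcs):
        Mgj = M @ np.vstack([uxj, uyj])
        Q[i, j] = np.sum(uxi * Mgj[0] + uyi * Mgj[1]) * dA
        G[i, j] = np.sum(ui * uj) * dA

vals = eigh(Q, G, eigvals_only=True)
print("Rayleigh-Ritz upper bound for the perturbed second eigenvalue:", vals[0])
```

The smallest generalized eigenvalue printed is a Rayleigh-Ritz upper bound for the second Neumann eigenvalue of the perturbed triangle; the rigorous version of this computation would replace the grid quadrature by exact integrals of trigonometric polynomials and keep track of all the error terms.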
What I believe to be true is that there are some rigorous stability inequalities that relate the minimizers of the finite-dimensional variational problem with the minimizers of the infinite-dimensional problem, with the error bounds improving in k (roughly speaking, I expect the H^1 norm of the error to decay like $\lambda_k^{-1/2}$ or even $\lambda_k^{-1}$). By increasing k, one thus gets quite good rigorous control on where the minimizers are.
Unfortunately, the H^1 norm is not good enough to control where the extrema go: one would prefer to have control in a norm such as C^0 or (even better) C^1. But it should be possible to use Bessel function expansions and the eigenfunction equation to convert H^1 control of an eigenfunction to C^0 or C^1 control, though this could get a bit messy with regards to the explicit constants in the estimates. But in principle, this provides a completely rigorous way to control the eigenfunction in arbitrarily large neighborhoods of reference triangles in parameter space, and so for some sufficiently large but finite k (e.g. k=100) one may be able to cover everything rigorously.
Of course, it would be preferable to have a more conceptual way to prove hot spots than brute force numerical calculation (since, among other things, a conceptual argument would be easier to generalize to other domains than triangles), but I think the numerical route is certainly a viable option to pursue if we don’t come up with a promising conceptual attack on the problem.
Comment by — June 17, 2012 @ 10:26 pm
• I’ve been pondering something similar. Even for a conceptual approach, there’s a duality between the problem of locating the extrema of eigenfunctions of
$- \nabla\cdot(M\nabla w)= \lambda w, \,\,M\nabla w \cdot n=0$ on a right-angled isosceles triangle and that of the hot spots conjecture. I'm of the view that a successful conceptual attack will deal with all the cases (except, maybe, the equilateral one), in one go.
As far as numerics go, here was my thinking:
A) one can get very good approximations of both the initial part of the spectrum and eigenfunctions of a positive definite operator on a right-angled isosceles triangle, using non-polynomial functions in a spectral approach.
B) you had a perturbation analysis in thread 6 from one triangle to another, nearby one;
C) one can pull back both triangles in your analysis to a right-angled isosceles one, and do the perturbation analysis there. The constants include, explicitly, the information about the pullback. I am doing this to check whether how big an $\epsilon$ I was allowed to pick depended on the shape of the triangles. I'll post a link to this (naive) write-up to the wiki soon.
For any numerical attack, I think it’s extremely important to record how the eigenvalues are being located, or how the minimizer is being found.
For (A) I use approximations $u_N(x,y) = \sum_{i=1}^N c_ip_i(x,y)$ where the basis functions are not piecewise polynomials. I’m writing up both a Fourier spectral method and a method based on Bessel functions. This is in the debugging stage, and I will wait for better results before I upload details.
Comment by — June 17, 2012 @ 11:07 pm
• what about a partition-of-unity approach, where one takes 4 patches (3 which include the corners, and one for the center)? we’ve got great control of what happens near the corners.
Comment by — June 17, 2012 @ 11:10 pm
I apologize about not describing what I meant by a spectral approach. On a square, I know that $\phi_{mn}(x,y) = \exp(imx)\exp(iny)$ are orthogonal. So one writes the approximation as $u_N(x,y) := \sum_{m,n}^N c_{nm} \phi_{mn}(x,y)$, writes a discrete variant of the eigenproblem, and solves it for the unknown coefficients. One clearly has to take the basis functions with the correct symmetry, while working on the triangle. These basis functions don't satisfy the prescribed boundary conditions, so the discrete formulation has to be carefully done.
Comment by — June 17, 2012 @ 11:21 pm
One advantage of the quadratic form minimization approach is that one does not need to explicitly keep track of boundary conditions, as the minimizer to the quadratic form will automatically obey the required Neumann conditions. It's true though that it isn't absolutely necessary to use the reference eigenfunctions as the basis for the approximation, and that other bases could be better (for instance, in the spirit of what was done to show simplicity of eigenvalues, one could take an overdetermined basis consisting of eigenfunctions pulled back from multiple reference triangles). This would of course make the rigorous stability and perturbation analysis more complicated, though, so there is some tradeoff here. I guess we could do some numerical experiments to probe the efficiency of various types of bases for approximate eigenfunctions.
Comment by — June 18, 2012 @ 12:04 am
• The most efficient set of approximations functions I’ve found (in terms of fewest non-zero coefficients) are: some Bessel functions centered at the three corners, and then I threw in some trigonometric polynomials. These aren’t orthogonal, and I’d have to show the discrete problem was consistent with the original one.
Comment by — June 18, 2012 @ 12:46 am
All explicitly known eigenfunctions for triangles are trigonometric polynomials (equilateral, right isosceles and 30-60-90). So a basis made of linear perturbations of these is very reasonable. However, transplantation of known cases gives extremely ugly Rayleigh quotients on arbitrary triangles. Trig functions do not mix well. But, this can be done. That is how I am proving simplicity.
For nearly degenerate triangles Bessel functions must be best (Cheng's bound is obtained this way), but linear functions, varying along the long side, adjusted to have average 0, are really good too. After all, the sides are almost perpendicular and the variation must happen along the long side. For simplicity of nearly degenerate triangles I am using a linear function (it gives slightly better results for not-so-nearly-degenerate cases).
Comment by Bartlomiej Siudeja — June 18, 2012 @ 3:58 pm
This reminds me of a small remark I wanted to make on the nearly degenerate case. Instead of approximating by a thin sector, one can instead rescale to the right-angled triangle $\Omega$ with corners (0,0), (1,0), (1,1), in which case one is trying to minimise a Rayleigh quotient the numerator of which looks something like $\int_\Omega |\partial_x u|^2 + A |\partial_y u|^2\ dx dy$ for some large A (this is the formula for a thin right-angled triangle, in general there will also be a mixed term involving $\partial_x u \partial_y u$ which I will ignore for this discussion). One can then map this to the unit square S with corners (0,0), (1,0), (0,1), (1,1) via the transformation $(x,y) \mapsto (x,y/x)$, in which case the numerator now becomes $\int_0^1 \int_0^1 x |\partial_x u|^2 + A x^{-1} |\partial_y u|^2\ dx dy$ and the denominator $\int_0^1 \int_0^1 x |u|^2\ dx dy$. Morally, when A is large the second term in this numerator forces u to be nearly independent of y. On the other hand, to minimise $\int_0^1 x |\partial_x u|^2\ dx / \int_0^1 x |u|^2\ dx$ assuming mean zero one easily computes that the minimiser comes from our old friend, the Bessel function $J_0(j_1 x)$ with eigenvalue $j_1^2$ (where $j_1$ is the first positive zero of $J_1$). It is then tempting to analyse this quotient using a product basis consisting of tensor products of Bessel functions in the x variable and cosines in the y variable, i.e. $J_0(j_{1,k} x) \cos( \pi l y)$ for various integers k,l. Among other things this should give an alternate proof of the hot spots conjecture in the nearly degenerate case, though I didn't work through the details.
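A quick numerical check of this one-dimensional claim (a sketch assuming scipy; $j_1 \approx 3.8317$ denotes the first positive zero of $J_1$, equivalently of $J_0'$):

```python
# Check (assuming scipy): u(x) = J_0(j_1 x), with j_1 the first positive zero of J_1
# (equivalently of J_0'), has weighted mean zero on (0,1) and weighted Rayleigh
# quotient  int_0^1 x u'(x)^2 dx / int_0^1 x u(x)^2 dx  equal to j_1^2.
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

jone = jn_zeros(1, 1)[0]                                      # ~3.8317
num = quad(lambda t: t * (jone * j1(jone * t))**2, 0, 1)[0]   # u'(x) = -j_1 J_1(j_1 x)
den = quad(lambda t: t * j0(jone * t)**2, 0, 1)[0]
mean = quad(lambda t: t * j0(jone * t), 0, 1)[0]

print(num / den, "vs", jone**2)     # these should agree
print("weighted mean:", mean)       # ~0
```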
Comment by — June 18, 2012 @ 5:25 pm
• I like this line of thinking.
Using a basis of the form $J_0(j_{1,k} x)\cos(\pi l y)$ is pretty much what I'm coding up, and is the heart of the method of particular solutions (Fox-Henrici-Moler, Betcke-Trefethen and later Barnett-Betcke).
Comment by — June 18, 2012 @ 6:05 pm
• After doing some theoretical calculations, I’m now coming around to the opinion that bases of trigonometric polynomials in a reference triangle are not as accurate as one might hope for, unless the triangle being analysed is very close to the reference triangle. The problem lies in the boundary conditions: an eigenfunction in a triangle is going to be smooth except at the boundary, but when one transplants it to the reference triangle (say, the 45-45-90 triangle for sake of argument) and then reflects it into a periodic function (in preparation for taking Fourier series) it will develop a sawtooth-like singularity at the edges of the reference triangle due to the slightly skew nature of the boundary conditions (distorted Neumann conditions). As a consequence, the Fourier coefficients decay somewhat slowly: a back-of-the-envelope calculation suggests to me that even if one uses all the trig polynomials of frequency up to some threshold K (so, one is using about K^2 different trig polynomials in the basis), the residual error in approximating the sawtooth-type eigenfunction in this basis is about $O(\varepsilon K^{-3/2})$ in L^2 norm, and $O(\varepsilon K^{-1/2})$ in H^1 norm, if the triangle is within $O(\varepsilon)$ of the reference triangle. This is pretty lousy accuracy, in practice it suggests that unless $\varepsilon$ is very small, one would need to take K in the thousands (i.e. work with a basis of a million or so plane waves) to get usable accuracy.
So it may be smarter after all to try to use a basis that matches the boundary conditions better, even if this makes the orthogonality or the explicit form of the basis more complicated.
Comment by — June 20, 2012 @ 8:18 pm
Yes, the trig functions are pretty inefficient here. Here's what works (without proof right now, so I won't post results): take some Bessel functions around the corners, add in some trig functions. Take a linear combination. Evaluate the normal derivatives at points on the boundary, and enforce that the linear combination does not vanish at randomly selected points in the interior. Set up an overdetermined system (i.e., take more points than unknowns), find the QR decomposition of the matrix. This gives you orthogonal columns, which correspond to a discrete orthogonal basis. Use this to solve for the eigenvalues and eigenfunctions. It's a tweak of what Trefethen and Betcke did, and the results are not as spectacular. But I can get results with 30 unknowns that I can't using a basis of 200 purely trig functions.
Comment by — June 20, 2012 @ 8:52 pm
• Hmm, I can see that this scheme would give good numerical results, but obtaining rigorous bounds on the accuracy of the numerical eigenfunctions obtained in this manner could be tricky.
One possibility is to compute the residual $\| -\Delta u - \lambda u \|_{L^2(\Omega)}^2$ of the numerically computed eigenvalue $\lambda$ and eigenfunction $u$. Assuming $u$ is smooth enough (e.g. $C^3$ except at the vertices, where the third derivative blows up slower than $1/|x|$) and obeys the Neumann boundary conditions exactly, we have the expansion
$\| -\Delta u - \lambda u \|_{L^2(\Omega)}^2 = \sum_k |\lambda_k - \lambda|^2 |\langle u, u_k \rangle|^2$
where $u_k$ are the true orthonormal eigenbasis of the Neumann Laplacian. If $\lambda < \lambda_3$, this implies in particular that
$\| -\Delta u - \lambda u \|_{L^2(\Omega)}^2 \geq \frac{(\lambda_3 - \lambda)^2}{\lambda_3^2} \sum_{k \geq 3} \lambda_k^2 |\langle u, u_k \rangle|^2$
$= \frac{(\lambda_3 - \lambda)^2}{\lambda_3^2} \| \nabla^2( u - c u_2 ) \|_{L^2(\Omega)}^2$
where $c = \langle u, u_2 \rangle$. Thus, an $L^2$ bound on the residual, together with a sufficiently good rigorous lower bound on the third eigenvalue (in particular producing a gap between this eigenvalue and the numerical second eigenvalue $\lambda$), gives an $\dot H^2$ bound on the error:
$\| \nabla^2(u - c u_2) \|_{L^2(\Omega)} \leq \frac{\lambda_3}{\lambda_3 - \lambda} \| -\Delta u -\lambda u \|_{L^2(\Omega)}$.
One can also get control on lower order terms from the Poincare inequality, thus
$\| \nabla^i(u - c u_2) \|_{L^2(\Omega)} \leq \frac{\lambda_3^{1-i/2}}{\lambda_3 - \lambda} \| -\Delta u -\lambda u \|_{L^2(\Omega)}$
for i=0,1,2.
This way, we don't have to do any analysis at all of the numerical scheme that comes up with the numerical eigenvalue and eigenfunction $\lambda, u$, so long as we can rigorously compute the residual. Note from Sobolev embedding that H^2 controls C^0, so in principle this is enough control on the error to start locating extrema…
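As a toy illustration of how these bounds would be used in practice (the numbers below are hypothetical placeholders, not computed values):

```python
# Hypothetical inputs (placeholders, not real data): an L^2 residual, a numerical
# second eigenvalue lam, and a rigorous lower bound lam3 on the third eigenvalue.
residual = 1e-3
lam = 17.62
lam3 = 18.0

# || grad^i (u - c u_2) ||_{L^2}  <=  lam3^{1-i/2} / (lam3 - lam) * residual,  i = 0,1,2
for i in (0, 1, 2):
    print("i =", i, " error bound =", lam3**(1 - i/2) / (lam3 - lam) * residual)
```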
Comment by — June 20, 2012 @ 11:11 pm
• I agree, the numerical analysis is gnarly, Betcke and Barnett did it for the Dirichlet case on analytic domains for the method of particular solutions http://arxiv.org/pdf/0708.3533.pdf
For the finite element calculations and the more recent attacks, I've been keeping track of the numerical residual $\int_\Omega |\nabla u|^2 -\lambda|u|^2$. I've also been keeping track of the (numerical) spectral gap.
The devil is in the detail: for the finite element calculations, I can compute the integrals analytically (piecewise polynomials on triangles). For the new bessel+trig calculations, I can’t compute the integrals analytically. So for the latter, the residuals are computed through high-accuracy quadrature, but not analytically.
Comment by — June 20, 2012 @ 11:57 pm
The second eigenfunction of the Neumann Laplacian for the unit interval $[0, 1]$ is $u(x) = \cos \left(2\pi x\right)$. I've been thinking of trying to prove that the equivalent of the hot spots conjecture for this eigenfunction holds, i.e. that the max and min are attained on the boundary of $[0, 1]$.
The method of approach I had involved assuming the supremum of the absolute value of $u(x)$ is attained only at a point $p$ in the interior of $[0, 1]$, and using a non-constant stretching/compressing map $\phi$ from $[0, 1]$ to itself to construct $x \mapsto u\left(\phi \left( x \right) \right)$, followed by compressing/stretching the values of $u(x)$ so as to preserve the $L^2$-norm of the resulting function, while hopefully decreasing the Rayleigh quotient, for the unit interval's second eigenfunction; but triangles are another matter altogether.
I'd be grateful if someone could tell me how to access and use the LaTex sandbox area for Polymath blogs.
David
[The sandbox is at http://polymathprojects.org/how-to-use-latex-in-comments/ - T.]
Comment by — June 18, 2012 @ 5:19 am
• I am not sure that I follow…
For the interval $[0,1]$ with Neumann boundary, the second eigenfunction is (I am pretty sure), as you say, $u_2(x)=\cos(2\pi x)$. Knowing that, the fact that $u_2(x)$ attains its max and min at the boundary points of $[0,1]$ just follows from understanding the cosine function (e.g. sketching the graph of $u_2$). Maybe I am missing something (your edit below makes me think you instead wanted to suggest something about the third eigenfunction perhaps)?
But in any case, for the interval $[0,1]$ I believe that explicit expressions for the eigenfunctions are known (though of course coming up with an understanding of them that doesn’t rely on the explicit expressions might be useful in that it could be generalized to two dimensional triangles).
Comment by — June 18, 2012 @ 6:01 am
• I read some lecture notes online about Neumann boundary conditions, and as I recall, for the unit interval one is led to an ODE for the eigenvalues and eigenfunctions of the Neumann Laplacian. So, the solution via this route is easy. From Terry’s and others’ comments, explicit expressions for the first non-trivial eigenfunction of the Neumann Laplacian for triangular regions are only known for a tiny region of the parameter space, e.g. equilateral triangles, the 45-45-90 degrees triangle and quite a bit about isosceles triangles with a very pointy angle, using the known exact solutions for sectors. I haven’t used LaTex much recently, and that’s why I didn’t write all that much in my post. In any case, I can describe it more in words. For the unit interval, the $u(x)$ on the unit interval is sufficiently well-behaved candidate function for being the second non-trivial eigenfunction, and I also assume that $u$ has mean zero on the unit interval, as well as having derivative zero at $x=0$ and $x=1$. If needed, I also assume that \$u\$ has two distinct zeros in the unit interval, just as $x \mapsto \cos \left(2\pi x\right)$ does. Without loss of generality, we can assume that $u(x)$ is strictly positive near 0 and near 1, and that $u(x)$ is strictly negative in the open interval from $r_{1}$ to $r_2$, these last two real numbers denoting the first and second zeros of $u(x)$ on the unit interval. What I’m aiming at is a proof by contradiction of the analog of hot spots for the eigenfunction associated to the second non-trivial eigenvalue, where the domain of interest is simply the unit interval, rather than triangular regions. As you say, the solutions are explicitly known here and computing them is quite easy. So I’m working without assuming the knowledge of these explicit cosine solutions, relying more on physical intuition and the method/technique of Rayleigh quotients. In my previous post, I called $p$ the point in the unit interval where the maximum of the absolute value of the candidate function $u(x)$ is attained. So $r_{1} < p 1$ results in smaller eigenvalues for the domain $[0, K]$. That’s one of the key things I keep in mind, because it reflects how the Rayleigh quotient changes when the graph of any well-behaved function $u(x)$ is dilated or expanded by a factor of $K$ along the x-axis ; we can assume that nothing happens along the y-axis. Introducing $\phi(x)$ which maps the unit interval to itself is a means to an end. The immediate objective is to get something like a perturbation of $u(x)$, except that I usually think of pertubation as adding, say, $\epsilon g(x)$ to $u(x)$, for some well-chosen $g(x)$ and a small $\epsilon$. Suppose $q$ is used to denote the point in the unit interval such that $q \mapsto p$ or in other word, $p = \phi(q)$. Then surely we want to have $0 < \phi'(q) 1$ and $\phi'(1)>1$. Locally near 0 and near 1, the Rayleigh quotient will be increased. We can deduce the value of the squared second derivative of the composite function $x \mapsto u\left(\phi \left( x \right) \right)$ at 0 and 1 from the assumption that $u$ is an eigenfunction. The only counterfactual assumption used (apart from benign requirements such as $u$ having two zeros on the unit interval), is that there is a point $p$ in the interior of the unit interval such that $|u(p)| > |u(0)|$ and $|u(p)| > |u(1)|$ . I think I’d like to keep constant the $L^2$-norm of the composite function. 
At $(q, \phi(q))$, this requires decreasing further modification of the composite function by multiplying by $sqrt(\phi'(q))$, which while locally near $q$ will maintain the contribution to the $L^2$-norm of the modified composite of $\phi$ with $u$, will locally decrease the square of the second derivative of $u$. The hope is that, since by assumption $|u(p)|$ is greater than $|u(0)|$ and $|u(1)|$, in total, the integral of the square of the modified composite of $u$ with $\phi$ will decrease, contradicting the Rayleigh principle. As I recall, the Rayleigh quotient method here requires that a candidate function for the second non-trivial eigenfunction both have mean zero, and be orthogonal to the first non-trivial eigenfunction. Also, the various $\phi'(x)$ values as x ranges over $[0, 1]$ must have a mean of 1, by the fundamental theorem of calculus. The end result hoped for is that the modified function, while still satisfying the orthogonality requirements of the Rayleigh-Ritz method, have a strictly smaller Rayleigh quotient, where one proves this using a specially chosen $\phi$. Say u~ is the modified function obtained starting with u. Then u~ is in the admisible space, but has strictly lower Rayleigh quotient than u [Contradiction].
So, under the benign assumptions about two nodes for the second non-trivial eigenfunction, that eigenfunction attains its min and max at 0 or 1.
Comment by — June 18, 2012 @ 8:21 am
• A couple of thoughts:
-Apriori I would guess that the second eigenfunction for the domain $[0,1]$ with Neumann boundary conditions has *one* zero not two. That is, I would guess that an analogue of the Nodal Line Theorem would hold and as such it would have two nodal domains. Indeed, the function $\cos(2\pi x)$ has one zero (at $\frac{\pi}{2}$) in the interval $[0,1]$ (Edit: Clearly I am wrong here :-). See my reply below).
-If I understand correctly, your idea is to argue by contradiction: If the second eigenfunction $u_2(x)$ achieves an extremum in the interior of $[0,1]$, then by considering an appropriately chosen $\phi:[0,1]\to[0,1]$, we can show that $u_2(\phi(x))$ in fact has a smaller Rayleigh quotient than $u_2(x)$ (while still having mean zero), contradicting the fact that $u_2(x)$ minimizes this quotient. This is definitely an interesting idea… in principle it could work for a triangle, but we would need to find a function $\phi:D\to D$ with all these properties.
Comment by — June 18, 2012 @ 9:19 pm
• I’ve been reading the mathematical parts of a technical report by Moo K. Chung and others (on applied mathematics and medical imaging) called: “Hot Spots Conjecture and Its Application to Modeling Tubular Structures” . For eigenfunctions of the Neumann Laplacian, their numbering in English starts with “first eigenfunction”, followed by “second eigenfunction”, where these are denoted respectively by $\psi_{ 0}$ and $\psi_{1}$ .
It’s clear from the presentation in that report that $\psi_{ 0}$ is a non-zero constant function, whatever the domain. They state that the nodal set of the $ith$ eigenfunction $\psi_{ i-1}$ divides the region or domain into at most $i$ sign domains.
For the unit interval, I believe that $ith$ eigenfunction $\psi_{ i-1}$ is given by the formula $\psi_{i-1} \left( x\right) = \cos \left(\left(i-1\right)\pi x\right)$. I find that $\cos \left(2\pi x\right)$ has roots at ${ 1 \over 4}$ and ${ 3 \over 4}$ .
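Here is a tiny symbolic check of those two statements (a sketch assuming sympy):

```python
# Check (assuming sympy): psi_{i-1}(x) = cos((i-1) pi x) satisfies the Neumann
# conditions at 0 and 1 with eigenvalue ((i-1) pi)^2, and cos(2 pi x) vanishes
# at 1/4 and 3/4.
import sympy as sp

x = sp.symbols('x')
i = 3                                   # so psi_{i-1}(x) = cos(2*pi*x)
psi = sp.cos((i - 1) * sp.pi * x)

print(sp.diff(psi, x).subs(x, 0), sp.diff(psi, x).subs(x, 1))           # both 0 (Neumann)
print(sp.simplify(-sp.diff(psi, x, 2) / psi))                           # 4*pi**2
print(psi.subs(x, sp.Rational(1, 4)), psi.subs(x, sp.Rational(3, 4)))   # both 0
```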
With respect to your second thought, a proof by contradiction is indeed what I have in mind. The Laplacian of the composition of functions represented by $x \mapsto u_{2}\left(\phi \left( x \right) \right)$ seems to me to depend not only on $\phi'\left( x\right)$, but also on $\phi''\left( x\right)$. So an affine transformation map $\phi$, where $\phi'$ is everywhere constant, and allowing mappings from a domain $D$ to a possibly different domain $D'$ for illustration, transforms the Laplacian in a simple way (noting that $\phi''$ is then identically zero). In case $\phi$ is not affine, I think the relation between the Laplacian of $u_{2}$ composed with $\phi$ and the Laplacian of $u_{2}$ isn't easy to understand on an intuitive level …
Comment by — June 19, 2012 @ 6:00 am
• Ah, you are correct… $\cos(2\pi x)$ does indeed have two zeros at the places you state (I was thinking of $\cos(\pi x)$ which has its zero at $\frac{1}{2}$… and not $\frac{\pi}{2}$ as I wrote). And I see I was also confused about the numbering… sorry about that!
So you are considering the more general hot-spots problem then (showing that all of the eigenfunctions attain their extrema on the boundary)? Also, if you plan to argue by Rayleigh quotients, you should only need to worry about first derivatives, no?
I played around a bit trying to make such an argument (for the first/second eigenfunction… i.e. for the simpler hot spots conjecture) but didn’t have any luck. An issue, as I see it, is that any argument has to account for the fact that there are examples of (non simply connected) domains which have the extrema of the first/second eigenfunction in the interior.
So any construction of $\phi$ would have to fail for that counter example domain. Then again, as $\phi$ in some sense “moves points around”, it may somehow be tied to the topology of the space… so maybe the construction of $\phi$ would depend on the domain being simply connected? But this is just wild conjecture… I have nothing specific in mind.
One other idea/conjecture that came up when I was playing around with this: Perhaps it is possible to show that “there cannot be an interior maximum without an interior minimum”. This is a weaker statement than the full hot-spots conjecture, but if it were true and if we also knew that the nodal line, which straddles a corner, does so in a convex way then we would be done: In the convex sub-domain the extremum would lie at the corner, therefore in the other sub-domain the extremum must also lie on the boundary.
Comment by — June 19, 2012 @ 7:08 am
• When I thought of trying to argue for “hot spots” (for the unit interval) by contradiction, it just so happens that the first test case that came to mind was the eigenfunction for the Neumann laplacian $u_{2} \left( x \right) = \cos \left( 2 \pi x \right)$ , which has a minimum of $- 1$ at $x = 1/2$ and maxima of $1$ at $x = 0$ and $x = 1$ . We would have to speak about “maximum of the absolute value of a non-trivial eigenfunction” rather than “maximum of a non-trivial eigenfunction” and “minimum of a non-trivial eigenfunction” , because for $u_{2} \left( x \right) = \cos \left( 2 \pi x \right)$, the minimum of $u_{2}$ is attained at $x = 1/2$ only.
When arguing by Rayleigh quotients, including a domain transformation such as the $\phi$ in an earlier comment, if for simplicity’s sake we stick to $u_{2}$ as the name of a hypothetical counter-example of “hot spots” or a variant of “hot spots” which refers to “points which extremize the absolute value of a non-trivial eigenfunction”, the way I was thinking of it, we would need to compute the second derivative (“laplacian”) of $u_{2} \circ \phi$. From a mental computation using the “chain rule”, the first derivative of the composition of two functions is a product of two terms, one of which in our case would be $\phi '$ . Then differentiating again, the “product rule” will yield a term where $\phi ''$ appears. If that is the case, does it matter? I think it does, in the sense that it makes “tweaking” the numerator of the Rayleigh quotient harder to do (the numerator involves the laplacian of a test function) . I might be overlooking something, so that I might not need to worry about $\phi ''$ … You mention topology of the domain, for instance the property of “simple connectedness” of a domain , as possibly relevant/”decisive” for the “hot spots” conjecture to hold. I think it’s an interesting topic, how topology of the domain affects the truth or falsity of the “basic” conjecture, i.e. the part relating to extrema of $u_{2}$.
Comment by — June 19, 2012 @ 4:39 pm
• Yes, the interval shows that the “generalized hot-spots conjecture” is not true for the interval. That is, after multiplying by $(-1)$ if necessary, the second/third eigenfunction will have its hottest point in the interior. The same is true for a circular region I believe. But, as you say, the conjecture could be modified to consider the absolute value of $u$.
Honestly, I haven’t much thought about the generalized hot-spots conjecture… the simple one (about the first/second Neumann eigenfunction) is hard enough!
Concerning the Rayleigh quotient, I believe that for the interval it is given by $\mathcal{R}\left[u\right]=\frac{\int_0^1 (u_x)^2\ dx}{\int_0^1 u^2\ dx}$ and so only first derivatives are needed, no?
Comment by — June 19, 2012 @ 5:49 pm
• Ah, you’re right. I just read about the energy functional and variational (calculus of variations) characterization of the first/second Neumann eigenfunction, and the expression to be minimized is just what you wrote in the last paragraph. So, under a transformation of domains, it seems like we don’t need to worry about the second derivative of the $\phi$ map from the domain to itself …
Comment by — June 20, 2012 @ 6:46 am
• Some of the gory details of how to reformulate this problem under change of domain are on the wiki: http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture
Comment by — June 20, 2012 @ 2:15 pm
10. Correction to my latest comment: I wrote “second eigenfunction” , and following Terry Tao’s summary, “third eigenfunction” for the eigenfunction linked to $\lambda_{3}$ seems like a better term.
Comment by — June 18, 2012 @ 5:32 am
I am slowly finishing the proof of simplicity. It may require a computer for some of the longest calculations. In particular, certain large polynomial inequalities in 2 variables are not easy to handle. I have implemented an algorithm for proving these. I am attaching a Mathematica notebook and a pdf version of it. The algorithm can be found in Section 5 of http://www.ams.org/mathscinet-getitem?mr=MR2779073.
http://pages.uoregon.edu/siudeja/simple.nb
http://pages.uoregon.edu/siudeja/simple.nb.pdf
There are still 2 cases missing, but general idea should be clear. I have included comments in the notebook, so it should be relatively easy to read.
Comment by Bartlomiej Siudeja — June 18, 2012 @ 10:18 pm
• I have updated the files under above links. The proof is now complete. Eigenvalues are simple.
I have started working on the best possible lower bound for spectral gap for a given triangle. For now, I implemented an upper bound based on many eigenfunctions from known cases. There are lists of eigenfunctions of known cases sorted according to eigenvalues. There is also an upper bound based on a chosen number of eigenfunctions for each known case. There is certainly room for improvement, but the upper bound is already very accurate. There is however an issue of optimizing a linear combination of eigenfunctions. If I take 5 or 10 eigenfunctions per known case I get great results. For 15 per case, bound is actually worse, which means there are local minima.
Just for comparison. For a triangle with vertices (0,0), (1,0), (0.501,0.85) (slightly off from equilateral) I am getting 17.62110831277495 as the upper bound. Using second order FEM with 262144 triangles I am getting 17.62110794. Nilima, could you compare this with your results?
Comment by Bartlomiej Siudeja — June 19, 2012 @ 6:26 pm
• sure, I can run this case. Do you mean the 2nd computed eigenvalue is 17.62110794? or the spectral gap?
Comment by — June 19, 2012 @ 6:39 pm
• The second eigenvalue. The third is 18.1342622359. So the gap is about 0.5 (numerically).
Comment by Bartlomiej Siudeja — June 19, 2012 @ 6:41 pm
• I just ran it with P1 elements, 80358 nodes. Ev = 17.6212. If I use 320844 nodes, the ev = 17.62110632.
You’re using a higher-order element, so I think our results are comparable.
Comment by — June 19, 2012 @ 7:29 pm
• I have updated the files again. There is a lower bound for the gap at the end of the notebook. For any triangle I tried, it gives about 60% of the numerical gap. The problem is of course with the lower bound for the third eigenvalue. The upper bound for the second eigenvalue is very good, even with just 3 eigenfunctions per known case.
Comment by Bartlomiej Siudeja — June 20, 2012 @ 2:44 am
• 60% is the worst case. For known cases it gives exact values.
Comment by Bartlomiej Siudeja — June 20, 2012 @ 3:48 am
• Hi Bartolomiej,
this is good news! I had a question: how much sharper are your bounds on the second eigenvalue, compared with those you’d get from the Poincare inequality here (the functions have mean zero, so one can get a pretty explicit bound). Maybe this is already in your posted notes?
Comment by — June 20, 2012 @ 3:45 pm
• The Poincare inequality gives a lower bound for the second eigenvalue, not an upper bound. The optimal Poincare inequality for triangles gives a lower bound in terms of just the second equilateral eigenvalue. The same techniques as for the lower bound in my notes, just less involved, work for Poincare.
Comment by Bartlomiej Siudeja — June 20, 2012 @ 3:54 pm
• Yes, I was thinking of the lower bound. However, do the techniques in your notes give tighter lower bounds than those given by a Poincare inequality?
Comment by — June 20, 2012 @ 6:14 pm
• Together with Laugesen we used the same techniques to prove optimal Poincare inequality. However, here I had to push the method to its limits (more reference triangles). Bounds are also a bit different, since we are dealing with second+third eigenvalue.
Comment by Bartlomiej Siudeja — June 20, 2012 @ 7:23 pm
12. I had a thought about a somewhat different tactic, which combines some of the ones discussed earlier. Let us consider the case of triangles with rational angles. These are dense in the set of all triangles, so if we can prove the Hot Spots conjecture for this case, it seems to me that we can easily make continuity arguments to the general case.
Now, consider the universal cover of such a triangle under the reflection group. First off, is it the case that this is an N-sheeted manifold, for some N a factor of the gcd of the denominators?
Second, we can consider the Brownian motion formulation pulled back to this cover, where each preimage of the triangle has a point heat source. Can’t we argue that, since the corners are effectively exposed to “multiple” neighborhoods (N * angle / $\pi$), whereas every other point is (locally) exposed to a single neighborhood, the corners must be hot spots? (E.g. move the source towards a given corner and look at the long-term behavior.)
This argument will fail if N=1. But there are only 3 such cases (the equilateral, isosceles right, and 30-60-90 triangles), and all of them have the Hot Spots conjecture verified.
Comment by Craig H — June 19, 2012 @ 9:07 pm
• I think I may be off on my definition of N and my neighborhood counting — but if N is finite, it shouldn’t matter.
Comment by Craig H — June 19, 2012 @ 9:10 pm
• I don’t quite follow the argument that being exposed to multiple neighborhoods should make the corner points the hottest. For one thing, by continuity, the points slightly in from the corner will also be very hot despite only being locally exposed to its nearest neighbors.
The picture in my mind of this manifold is that it looks something like a spiral staircase near the corner. The center of this staircase might seem the hottest, but then when you “fold up the triangles” to convert the heat flow back on the manifold back to the heat flow on the triangle, every point in the triangle ultimately has many points that map to it.
That’s not to say there isn’t a good idea here… I just don’t have a good intuition for what the N-sheeted manifold looks like :-) And for example in the case of an equilateral triangle which tiles the plane (a 1-sheeted manifold?) I don’t see any intuitive reason why the hottest point should be at the boundary :-/
As a side point, the intuitive reason I see behind why the hot spots should lie in the corners of sharpest angles comes from the probabilistic interpretation: The heat flow is dual to reflected Brownian motion and reflected Brownian motion has a hard time getting inside sharp corners. This is because reflected Brownian motion reflects perpendicularly off of the boundary. Therefore, hitting the boundary gives the reflected Brownian motion a nudge away from the corner! (Yes, for sharper corners, the perpendicular does not point as much out from the corner… but this is offset by the fact that the reflected Brownian motion hits the boundary much more frequently in a narrow corner).
In fact, I suspect that if you take any Neumann boundary region $D$ and append a spike of angle $\epsilon$ and let $\epsilon\to 0$, then for sufficiently small $\epsilon$, one of the hot-spots will be at the tip of the spike… irrespective of the shape of the original domain $D$. (And I imagine if you have two such spikes, then their tips will be the two hot-spots).
Comment by — June 20, 2012 @ 2:23 am
• Alright, let me make my intuition here a little more precise: Around any of the preimages of the corners, the area of the ball of radius $R$ goes as $\pi p R^2$, where $p$ is the numerator of the angle ($\alpha = \pi p/q$). Whereas around any non-corner point $P$, there is some $\epsilon$ such that $\forall r < \epsilon$, Area($B(P,r)$) = $\pi r^2$. And you are going to get that the heat is a smooth function of $\epsilon$, with a maximum as $\epsilon \rightarrow 0$ (in fact, we know just what it looks like for small $\epsilon$ — it is constant plus angularly-modulated Bessel function).
Comment by Craig H — June 20, 2012 @ 1:32 pm
• I see that there is more area on the manifold near a corner point than an interior point… but I don’t follow the intuition of why that would make the point hotter… couldn’t it be argued just as well that there is more area for heat to escape off to?
Comment by letmeitellyou — June 20, 2012 @ 5:39 pm
• Also, I said that this argument pretty explicitly fails in the three $(\pi/p, \pi/q, \pi/r)$ cases where the reflected triangles tile the plane — but we have separate arguments for each of those cases as well.
Comment by Craig H — June 20, 2012 @ 1:34 pm
• Just FYI, I realized that in general we do not have a finite number of sheets. One can take, for example, the 45-67.5-67.5 isosceles triangle. Reflecting about the two congruent edges gives an octagon, but if one reflects long edge-short edge-long edge, the two images of the short edge are at right angles to each other. Repeating this process produces a square tiling of the plane. If you assume that the short edge has length 1, you can get an infinite number of square tilings of the plane translated by 1 in a diagonal direction — that is, the preimage of the corners of the original triangle contains $\mathbb{Z}[\sqrt{2}]^2$. Since this set is dense, we can’t have a finite number of sheets.
Comment by Craig H — June 25, 2012 @ 7:39 pm
13. Perhaps it might be a good idea to focus numerical work on a single “generic” triangle – e.g. a 40-60-80 triangle – as a test case, and see to what extent we can show rigorously that extrema for this triangle (and for nearby triangles) only occur at the vertices? This would give an indication of how fine a mesh of triangles we would need to cover the whole sample space.
The way I see it, one could rigorously show the hot spots conjecture for a given triangle $\Omega$ by some variant of the following scheme:
1. Find an approximation $u$ to the second eigenfunction which is provably accurate in $C^0$ norm with some error $\varepsilon$, thus $|u(x)-u'(x)| \leq \varepsilon$ for all x, where u’ is the true second eigenfunction.
2. As a consequence, any point x at which $u(x)$ differs by more than $2\varepsilon$ from the extrema of u, cannot be an extremum of u’. Assuming that u behaves as expected, with extrema only at vertices, this should place all extrema of u’ within $O(\sqrt{\varepsilon})$ of the vertices.
3. It should be possible to perform a Bessel function expansion around each vertex with upper and lower bounds on coefficients. With sufficiently good such bounds, this should show that in the $O(\sqrt{\varepsilon})$ neighbourhood of each extremal vertex, there are no critical points other than the vertex.
1, 2, and 3 give the hot spots conjecture for the specific triangle (and will also give the conjecture for some explicit neighbourhood of that triangle in parameter space).
The catch is that it may require some absurdly high level of accuracy, e.g. $\varepsilon = 10^{-6}$, in order to work, in which case we may be able to resolve a single triangle (such as the 40-60-80 triangle) but would not be able to cover all of parameter space without a ridiculous amount of computer time. The square rooting of the epsilon parameter is a little disconcerting. Note though that if one had C^1 control instead of C^0 control then one could exclude all critical points outside of an epsilon-neighbourhood of the vertices rather than a sqrt(epsilon) neighbourhood, and with C^2 control one could exclude critical points everywhere. But such control may be unreasonable…
Comment by — June 22, 2012 @ 1:37 am
• I think what you propose is do-able. Presumably you want to check a generic triangle to avoid any special symmetries which arise on the right-angled one? I’d much rather pull back the problem on the 40-60-80 triangle and compute on the right-angled one. Then there isn’t any fussing with mesh quality etc.
One would begin with the literature to check for both a priori and a posteriori sup-norm estimates for algorithms. Here’s a nice paper on FEM for eigenvalues, but the estimates are not in the sup norm. http://www.ing-mat.udec.cl/~rodolfo/Papers/DPR.pdf
In the finite-element world, one usually examines L^2 (or other Hilbert space) norms, but I think there’s enough on sup norm estimates as well. What we’ll hopefully get is a theorem which provides an estimate which relies only on the computed eigenfunction $u_h$, to get control on $\sup_x |u(x)-u_h(x)| \leq C h^r$. There’s already lots in the literature on such estimates in the energy norm, but we’ll have to look for equivalent estimates in $C^1$.
Here $C>0$ is a constant which may depend on the triangle, $h$ is some parameter in the method which goes to zero as the approximating sequence $u_h$ approaches $u$. One could compute several approximation terms, and use an extrapolation technique (Richardson’s or something more sophisticated) to get an extremely refined estimate.
Until we see what’s in the literature on the sup-norm estimates on the gradients, we can’t really assess how hard it will be to get the accuracy. For what it’s worth, the measure of error I use to monitor code, $\|\nabla u_h\|_0^2 - \lambda_h\|u\|_0^2$, is down to $10^{-13}$ on these problems without trying hard.
Comment by — June 22, 2012 @ 4:20 am
• Yes, I picked the 40-60-80 triangle because it looked quite typical, so that one isn’t deceived by symmetry. I agree though that in practice we would pull it back to a nicer triangle such as the right-angled triangle.
I’m curious as to how you are computing the numerical eigenvalue $\lambda_h$; it doesn’t appear to simply be the Rayleigh quotient, since otherwise your error $\| \nabla u_h\|_0^2 - \lambda_h \|u\|_0^2$ would simply be zero. In any case, your accuracy of $10^{-13}$ is reassuring, it probably gives us a fair amount of room to play with.
Here is one simple way to get $C^0$ or $C^1$ estimates away from vertices. Let x be an interior point of the triangle, and suppose first that the ball B(x,R) of radius R centred at x lies completely inside the triangle. Let $-\Delta u = \lambda u$ be a true eigenfunction. The radial averages
$f(r) := \frac{1}{2\pi} \int_0^{2\pi} u(x + r e^{i\theta})\ d\theta$
(using complex notation) obey the Bessel ODE $f''(r) + \frac{1}{r} f'(r) + \lambda f(r) = 0$, and thus must be a scalar multiple of the Bessel function $J_0(\sqrt{\lambda} r)$. In particular, on integrating from r=0 to r=R we conclude a mean value theorem
$\displaystyle u(x) = \frac{ \int_{B(x,R)} u(y)\ dy}{ \int_{B(x,R)} J( \sqrt{\lambda} |y-x| )\ dy}$ (1)
which, among other things, can be used to convert L^2 type bounds on u to C^0 bounds on u in the interior region of the triangle. Since the eigenfunction equation commutes with derivatives in the interior, we can also convert H^1 bounds to C^1 bounds in a similar fashion:
$\displaystyle \nabla u(x) = \frac{ \int_{B(x,R)} \nabla u(y)\ dy}{ \int_{B(x,R)} J( \sqrt{\lambda} |y-x| )\ dy}$.
If one is near an edge of the triangle, but not near a vertex, one can achieve a similar formula after reflecting across the edge, so long as B(x,R) does not touch a vertex. Once one touches a vertex, things start getting a bit crazy and I don’t see a clean formula, but as I said in the previous comment we may be able to handle the area near the vertices by a different method.
If the approximate eigenfunction u_h also approximately obeys estimates such as (1), then the error $u-u_h$ will also approximately obey an equation like (1), which should in principle thus give good C^0 or C^1 estimates on the error away from vertices…
Comment by — June 22, 2012 @ 5:48 am
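To make identity (1) concrete, here is a small numerical check (purely illustrative; the eigenfunction, center point and radius below are arbitrary choices, not values from the thread), using the planar eigenfunction $\cos(\pi x)\cos(\pi y)$ with $\lambda = 2\pi^2$:

```python
# Check of the mean-value identity (1) for the planar eigenfunction
# u(x,y) = cos(pi x) cos(pi y), which satisfies -Delta u = lam*u with lam = 2*pi^2.
# The center (x0, y0) and radius R are arbitrary illustrative choices.
import numpy as np
from scipy.special import j0

lam = 2 * np.pi**2
u = lambda X, Y: np.cos(np.pi * X) * np.cos(np.pi * Y)

x0, y0, R = 0.3, 0.2, 0.15
n = 801
s = np.linspace(-R, R, n)
X, Y = np.meshgrid(x0 + s, y0 + s)
r = np.hypot(X - x0, Y - y0)
inside = r <= R
dA = (s[1] - s[0])**2

num = np.sum(u(X, Y)[inside]) * dA                  # integral of u over B(x, R)
den = np.sum(j0(np.sqrt(lam) * r[inside])) * dA     # integral of J0(sqrt(lam)|y-x|)
print(num / den, u(x0, y0))                         # the two numbers should agree
```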
• To compute the numerical eigenvalue, I first formulate the eigenvalue problem in discrete form. This is done either (1) variationally (through finite elements) or (2) by the modified method of particular solutions (linear combinations of Bessel functions and trigonometric polynomials).
For (1), I end up with a large generalized eigenvalue problem $Ax = \lambda Bx$, where $A_{ij} = \int_\Omega \nabla \phi_i \cdot \nabla \phi_j$, and $B_{ij} = \int_\Omega \phi_i \phi_j$. This is solved using Arnoldi iterations to get the eigenvalues.
For (2), I’m still trying to tweak the method, but the eigenvalues are located by a numerical minimization, which in turn is done using the Nelder-Mead algorithm. I am nervous about using this for validated numerics unless I can prove the method is consistent in infinite precision arithmetic.
Comment by — June 22, 2012 @ 2:26 pm
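For readers who want to see the shape of option (1) in miniature, here is a self-contained one-dimensional sketch (an illustration only, not the FreeFem++/SLEPc code used in the thread): assemble P1 stiffness and mass matrices for the Neumann problem on $[0,1]$ and hand the generalized eigenproblem $Ax=\lambda Bx$ to SciPy's shift-and-invert Lanczos/Arnoldi solver. The exact eigenvalues are $0, \pi^2, 4\pi^2, \dots$, which gives an easy correctness check.

```python
# 1-D stand-in for the 2-D triangle problem: P1 finite elements on [0,1],
# free (Neumann) ends, generalized eigenproblem A x = lambda B x.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 400                                   # number of elements
h = 1.0 / n
main_K = np.full(n + 1, 2.0 / h); main_K[[0, -1]] = 1.0 / h
main_M = np.full(n + 1, 2.0 * h / 3.0); main_M[[0, -1]] = h / 3.0
off_K = np.full(n, -1.0 / h)
off_M = np.full(n, h / 6.0)

A = sp.diags([off_K, main_K, off_K], [-1, 0, 1], format="csc")   # stiffness
B = sp.diags([off_M, main_M, off_M], [-1, 0, 1], format="csc")   # mass

# shift-and-invert around sigma = 5 targets the eigenvalues nearest 5
vals, vecs = eigsh(A, k=4, M=B, sigma=5.0, which="LM")
print(np.sort(vals))                      # ~ [0, pi^2, 4 pi^2, 9 pi^2]
print(np.pi**2, 4 * np.pi**2, 9 * np.pi**2)
```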
• Here is a way to quantify the fact that there are no extrema near a vertex, other than at the vertex itself.
Using polar coordinates around a vertex of angle $\alpha$, with angular coordinate $\theta \in [0,\alpha]$, a Neumann eigenfunction $-\Delta u = \lambda u$ obeys the equation
$\partial_{rr} u + \frac{1}{r} \partial_r u + \frac{1}{r^2} \partial_{\theta\theta} u + \lambda u = 0$
and thus (by separation of variables, or equivalently by Fourier series in the angular variable) has the Bessel expansion
$u(r,\theta) = \sum_{k=0}^\infty c_k J_{k\pi/\alpha}(\sqrt{\lambda} r) \cos( k \pi \theta / \alpha )$ (1)
in any sector $S := \{ (r,\theta): 0 \leq r \leq R; 0 \leq \theta \leq \alpha \}$ inside the triangle. One can estimate the size of the coefficients $c_k$ by computing the $L^2$ norm on S using Plancherel’s theorem:
$\int_S |u|^2 = \frac{\alpha}{2} \sum_{k=0'}^\infty |c_k|^2 \int_0^R J_{k\pi/\alpha}(\sqrt{\lambda} r)^2\ r dr$ (2)
where the prime in the summation indicates that the $k=0$ term is counted twice. One could also compute the $\dot H^1$ norm using the identity $|\nabla u|^2 = |\partial_r u|^2 + \frac{1}{r^2} |\partial_\theta u|^2$ and the Plancherel identity:
$\int_S |\nabla u|^2 = \frac{\alpha}{2} \sum_{k=0'}^\infty |c_k|^2 \int_0^R |J'_{k\pi/\alpha}(\sqrt{\lambda} r)|^2 + \frac{k^2\pi^2}{\alpha^2 r^2} |J_{k\pi/\alpha}(\sqrt{\lambda} r)|^2\ r dr$. (3)
If we normalise u to have L^2 norm 1 on the triangle, then we have $\int_S |u|^2 \leq 1$ and $\int_S |\nabla u|^2 \leq \lambda$, which gives some l^2 type control on the coefficients $c_k$. Of course, to make this optimal, one should make the sector S as large as possible while still inside the triangle.
From (1) and the triangle inequality we have
$|u(0)| - |u(r,\theta)| \geq u(0) (1 - |J_0(\sqrt{\lambda} r)|) - \sum_{k=1}^\infty |c_k| |J_{k\pi/\alpha}(\sqrt{\lambda} r)|$
so if $|u(r,\theta)|$ is greater or equal to $|u(0)|$, we must have
$\sum_{k=1}^\infty |c_k| |J_{k\pi/\alpha}(\sqrt{\lambda} r)| \geq u(0) (1 - |J_0(\sqrt{\lambda} r)|).$ (4)
Note that for small r, $1 - |J_0(\sqrt{\lambda} r)|$ is comparable to $r^2$, and the higher order Bessel functions $J_{k\pi/\alpha}(\sqrt{\lambda} r)$ are comparable to $r^{k\pi/\alpha}$, so this already rules out any competitor to u(0) that is sufficiently close to the vertex. By using (2) or (3) and Cauchy-Schwarz with (4), one can get an explicit region of r for which competitors are ruled out.
Comment by — June 22, 2012 @ 4:55 pm
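A quick way to see the scales involved in (4) (an illustrative computation only; the angle and the value of $\lambda$ below are made-up stand-ins): near the vertex, $1-|J_0(\sqrt{\lambda} r)|$ grows like $r^2$, while the $k$-th term enters only at order $r^{k\pi/\alpha}$, which for an acute corner ($\alpha < \pi/2$) is of strictly higher order for every $k \geq 1$.

```python
# Orders of magnitude near a vertex of angle alpha: the k >= 1 Bessel terms
# enter at order r^(k*pi/alpha), far below the r^2 growth of 1 - J0(sqrt(lam)*r).
# alpha and lam below are illustrative stand-ins, not computed values.
import numpy as np
from scipy.special import jv

alpha = np.pi * 40 / 180          # e.g. the 40-degree corner of a 40-60-80 triangle
lam = 30.0                        # a plausible size for a second Neumann eigenvalue
r = np.array([1e-1, 1e-2, 1e-3])

print(1 - jv(0, np.sqrt(lam) * r))                     # ~ (lam/4) * r^2
for k in (1, 2, 3):
    print(k, jv(k * np.pi / alpha, np.sqrt(lam) * r))  # ~ r^(k*pi/alpha), much smaller
```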
• Some results.
In http://www.math.sfu.ca/~nigam/polymath-figures/dump-data.odt I’ve documented the following:
Case 1: For a right-angled triangle (2 sides of 1), you’ll find values of:
# of degrees of freedom, $\lambda_{calculated}$, max of $u_h$, min of $u_h$, $u_h$ at the three corners, and the error $\lambda_h-\lambda$.
Case 2: For the 40-60-80 triangle, we don’t have the exact eigenvalue. I’ve tabulated values of
# of degrees of freedom, $\lambda_{calculated}$, max of $u_h$, min of $u_h$, and $u_h$ at the three corners.
Method: I’ve computed these quantities using finite elements + shifted Arnoldi iteration for the eigenvalue. The finite element eigenfunction is then interpolated onto a very fine grid before we locate the max and min. For each triangle, I’ve computed using piecewise linears, then piecewise quadratic, and finally piecewise cubic functions. I’ve used the freeware FreeFem++, and am happy to pass along the code to anyone who’d like to play with it.
Observations:
1. First, the Arnoldi iteration will not favor an eigenfunction $u_h$ over $- u_h$. So one can see that max and min value of u may switch from iteration to iteration, but the absolute values don’t.
2. Next, for all these experiments, one of the corner values is precisely the maximum value of $|u_h|$. Using Terry’s argument above, there are no others in the neighbourhood.
I’m happy to assert we’ve got 7 digits accuracy for the maxima and minima, but would have to run this on a larger machine than my laptop to get better.
Comment by — June 22, 2012 @ 8:27 pm
• I was trying to recreate your numbers using my code, and I have a few questions for case 1. What vertices are you using? A right isosceles triangle with sides 1 and 1 should have eigenvalue $\pi^2$. Also, what do you mean by degrees of freedom? Is this the size of the matrix in the eigensolver? Finally, I was using a shift-and-invert transform to speed up eigenvalue convergence. This seems to work pretty well (speed-wise), but makes it impossible to get as accurate results as you are getting ($10^{-9}$ at best for the eigenvalue). Have you considered using shift-and-invert, or is this just a bad idea?
In observation 1, is it possible to apply boundary condition, say 1 at a vertex, to force maximum at a specific vertex? Kind of like applying Dirichlet to the matrix in the eigensolver. You would also get a nicely scaled eigenfunction. I am not even sure this is possible.
Comment by Bartlomiej Siudeja — June 22, 2012 @ 11:43 pm
• For case 1, I used the vertices (0,0), (1,0), (1,1).
I thought the 2nd eigenfunction on this triangle should have the form $\cos(\pi x) \cos(\pi y)$, giving $-\Delta \cos(\pi x)\cos(\pi y) = 2\pi^2 \cos(\pi x)\cos(\pi y)$, i.e. eigenvalue $2\pi^2$. At any rate, I’m reporting the first eigenvalue corresponding to a non-constant eigenfunction. Turns out it’s very close to $2\pi^2$.
The number of degrees of freedom is the number of unknowns, and therefore the number of unknowns in the eigensolver. The matrices are sparse, so it’s also approximately (up to a constant factor) the number of entries in the matrix.
For a piecewise linear finite element, this will correspond to the number of vertices in the mesh (counted correctly). For piecewise quadratic elements there will be more finite element basis functions per triangle, and so forth. These dofs scale linearly with $1/h$ where $h$ is some measure of mesh size.
How are you computing the eigenvalue? Using a shift-and-invert on top of an Arnoldi iteration is overkill, so I don’t do it. But if you are using an inverse Rayleigh iteration or something, that should be OK… up to the expense of inversion.
Numerically, why would you apply a Dirichlet condition to a vertex? By pinning down that node, one will influence the discrete eigenvalue problem you’re trying to solve, and potentially in a nonlinear manner. Different finite element packages do different things to enforce Dirichlet conditions, so this would become another factor to consider in the analysis.
I wanted to use a conforming approximation strategy, and would therefore be reluctant to enforce any boundary conditions which are not natural. In principle, if we think $u_h\in H^1(\Omega)$, pinning it down at one node won’t matter IF we do the eigenvalue solve exactly, and IF we don’t care about point values of $u_1$. But we’re looking for the behavior of $u_h$ and its derivatives at points.
Comment by — June 23, 2012 @ 12:33 am
• This one is actually the third eigenfunction. There is also $\cos(\pi x)+\cos(\pi y)$, which gives eigenvalue $\pi^2$.
I am using SLEPc eigensolver included in FEniCS project, and matrices A, M as you described here somewhere. If I use shift-and-invert I am getting at least 10x speed gain. Even without using any shift. I was also using Krylov-Schur instead of Arnoldi.
I did not mean to imply applying 0 boundary condition to a vertex. More like applying 1 to a vertex. I know this is not very natural, but it should force eigenfunction to have same sign at the vertex. And should make no difference for derivative calculations.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 2:42 am
• I don’t get it. To keep calculations simple, let’s consider the triangle flipped over, so the corners are $(0,0), (1,0), (0,1)$. $u=\cos(\pi x)+ \cos(\pi y)$ certainly has $-\Delta u = \pi^2 u$, and the normal derivatives vanish on the straight edges. Indeed, $u$ is the first non-constant eigenfunction on the square.
On the third side of the triangle, which is along the line $y=1-x$, the outward (unscaled) normal is $(1,1)$. So, the normal derivative of $u$ on this edge is simply $u_x+u_y= -\pi \left( \sin(\pi x) + \sin(\pi (1-x)) \right) = -2\pi \sin(\pi x)$, which does not vanish. Am I missing something?
At any rate, imposing $u=1$ will have the same problem. Actually, a bit worse: you’ll end up with a non-symmetric formulation, since the computed $u$ will not be in a vector space: $U_h:= \{ u_h \in H^1(\Omega) \vert u_h(corner)=1\}$. To deal with this, one has to use test functions in a genuine vector space $V_h:= \{ v_h \in H^1(\Omega) \vert v_h(corner)=0\}.$ Since $U_h \not = V_h$, the resultant discrete formulation is not symmetric. That’s fine, but I’m not sure there’s much point.
Comment by — June 23, 2012 @ 4:22 am
• The second eigenvalue for a square has multiplicity 2, with eigenfunctions constant in x or in y. Now the sum and the difference of the two will be either constant on the diagonal, or have 0 normal derivative there. So one of the two newly created eigenfunctions will be an eigenfunction of the right isosceles triangle. On the flipped triangle that would be $\cos(\pi x)-\cos(\pi y)$. This is actually an example of a subdomain with the same smallest eigenvalue as the whole domain.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 4:55 am
• interesting…. but the normal derivative didn’t vanish on the diagonal, so what is going on?
Comment by — June 23, 2012 @ 4:59 am
• Try $\cos(\pi x)-\cos(\pi y)$. This one should have 0 normal derivative on (1,0) — (0,1). For your original triangle (with (1,1) as vertex) the plus version should work.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 5:08 am
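For the record, a short symbolic check of this resolution (an added sketch, not part of the original exchange): on the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$, the function $u=\cos(\pi x)-\cos(\pi y)$ is a Neumann eigenfunction with eigenvalue $\pi^2$, including on the hypotenuse.

```python
# Symbolic check: on the triangle with vertices (0,0), (1,0), (0,1),
# u = cos(pi x) - cos(pi y) satisfies -Delta u = pi^2 u and has zero normal
# derivative on all three sides, including the hypotenuse x + y = 1.
import sympy as sym

x, y = sym.symbols('x y', real=True)
u = sym.cos(sym.pi * x) - sym.cos(sym.pi * y)

print(sym.simplify(-sym.diff(u, x, 2) - sym.diff(u, y, 2) - sym.pi**2 * u))  # 0
print(sym.diff(u, y).subs(y, 0))                      # side y = 0: 0
print(sym.diff(u, x).subs(x, 0))                      # side x = 0: 0
normal = (sym.diff(u, x) + sym.diff(u, y)).subs(y, 1 - x)
print(sym.simplify(normal))                           # hypotenuse: 0
```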
• You’re right!
Thanks for your patience- this has been a helpful discussion for me. This error, and the fact that I also got the second eigenvalue on the 40-60-80, suggested I had a systematic bug in the code I wrote today. I found it, fixed it, and have replaced the data:
http://www.math.sfu.ca/~nigam/polymath-figures/dump-data.odt
The conclusions remain the same.
Comment by — June 23, 2012 @ 6:06 am
• I have also noticed that using too fine a mesh gives worse results. I guess rounding errors kick in. With cubics I get the best results with DOF of 10000-20000 ($10^{-10}$ for the eigenvalue of the right isosceles triangle). After that I lose a bit of accuracy.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 3:01 am
• yes, too fine a mesh will likely give poor results. Not rounding, I suspect the quality of the mesh degenerates, unless one puts in some checks on the quality of the mesh. One must also be a bit cautious: the eigenfunctions are not $C^\infty$, so there’s no reason to assume taking very high degree polynomials will work (in fact, already quartic approximants show bad results).
Comment by — June 23, 2012 @ 4:25 am
• It seems that in both cases you are getting the third eigenvalue.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 3:13 am
• I could not reply to your latest post. With the old data even the third eigenfunction was good for the hot-spots conjecture. I am not sure what vertices you are using for the 40-60-80 triangle, but the old ones were giving me an obtuse triangle. You have a right isosceles vertex for 40-60-80 in the new file (aa, bb values).
Comment by Bartlomiej Siudeja — June 23, 2012 @ 6:14 am
• fixed, thanks. I’m going to take a break from coding now, since I’m making silly errors.
Comment by — June 23, 2012 @ 6:36 am
• You deserve a break. You are doing a great job with the numerical side of the project.
Comment by Bartlomiej Siudeja — June 23, 2012 @ 6:38 am
• thanks are due to you- you’ve been incredibly patient and careful, and I appreciate your looking over the results. I fixed the print statement for the vertices. Who knows, maybe introduced some other error. I’ll stop for now.
Comment by — June 23, 2012 @ 6:51 am
• Your 40-60-80 triangle is a bit obtuse, when plotted. It seems aa and bb should be different. In fact we should probably switch from 40-60-80 to something with rational vertices.
Comment by Bartlomiej Siudeja — June 26, 2012 @ 12:50 am
• hmm, it plots fine for me.
Comment by — June 26, 2012 @ 1:57 am
• Assuming you have (0,0) and (1,0) as the other 2 vertices, (0.83, 0.3) gives an obtuse triangle, by the Pythagorean theorem.
Comment by Bartlomiej Siudeja — June 26, 2012 @ 2:29 am
14. Just some other thoughts on the analytic approach (I should also say that I have been following the comments the rigorous-numerics approach but I haven’t had much to contribute):
—————————-
Thought 1:
I was thinking about how to argue rigorously that the Nodal line for an arbitrary triangle shouldn’t be too “squiggly”. Intuitively it seems to me that this is a consequence of the Rayleigh quotient interpretation: A squiggly nodal line would seem to make $u$ have a large $H^1$-norm, whereas “straightening out the squiggle” would lower its $H^1$-norm while leaving the $L^2$-norm unchanged.
More concretely, I’ve been looking at the problem of minimizing the Rayleigh quotient for mean-zero $u$ subject to the constraint $u(x)\in\{-1,1\}$. Since $u$ can only take one of two values, we divide the region into a “hot region” and “cold region”. The $L^2$-norm of $u$ is $1$ and the $H^1$-norm of $u$ should be proportional to the length of the curve dividing the two sub-regions. Therefore, minimizing the Rayleigh quotient is the same as minimizing the length of a curve which divides the triangle into two sub-regions of equal area. I believe that such a line should not be squiggly…
Then, we might consider the case that $u$ takes on one of $n$ values and minimizing the Rayleigh quotient would again correspond to minimizing the lengths of the curves which divide the different sub-regions (where $u$ takes on different values).
Then, somehow we could maybe show that as $u$ is allowed to take more values, it approximates the true $u$.
One thing that concerns me is that I don’t see how this approach would account for the known counterexamples to the hot-spots conjecture in non-simply connected domains. In fact, I don’t have any intuition behind those counterexamples so if someone does have intuition I would love to hear it :-)
——————————————-
Thought 2:
From the comment thread started by David (meditationnate): For the interval $[0,1]$, if $u$ solves the Neumann problem then $u'$ solves the Dirichlet problem. So one could first find information about the solution to the Dirichlet problem and then integrate it to get information about the solution to the Neumann problem.
I wonder if any information could be gained from studying $\nabla u$. I don’t have any particular idea… but such an argument might be nice because recovering $u$ from $\nabla u$ requires the domain to be simply connected… which is precisely the largest class of domains where the conjecture is conjectured to hold!
———————————————-
Thought 3:
Since reflected Brownian motion is dual to the heat flow, and since the Laplacian is self adjoint, the heat flow started from a point $x$ looks the same as the probability density function (p.d.f.) of a reflected Brownian motion started at the point $x$.
It therefore ought to be the case that we have an ergodicity result that says: If you consider *one* path of reflected Brownian motion started from any point $x$, then if you look at some long term average of where it spends its time it ought to correspond somehow to the pdf/second eigenfunction.
Then it would suffice to show that a reflected Brownian motion doesn’t spend much time in the sharpest corners (which is intuitively true since the boundary reflection is perpendicular to the boundary, giving it a shove away from the corner, and sharper corners make the Brownian motion hit the boundary more frequently).
(Edit: On second thought, I think my intuition about reflected Brownian motion having a hard time going into a sharp corner is wrong. For one thing a perpendicular push at the boundary doesn’t push you “away” but rather doesn’t affect your radial distance from the corner one way or another. More to the point, if you consider reflected Brownian motion in an infinite wedge, it should be the same as if you reflect the infinite wedge many times… and so the chance of the reflected Brownian motion getting close to the corner should be the same as that of a free Brownian motion getting close to the origin)
Comment by — June 22, 2012 @ 3:11 am
15. Hi Chris,
Your ideas are nice. Regarding the second thought, here’s one way to think about the eigenvalue problem.
Right now, we look for $u\in H^1$ so that $-\Delta u - \lambda u=0$ on the domain, and $\frac{\partial u}{\partial n}=0$ on the boundary. We also enforce that the mean of $u=0$.
Now let us rewrite the problem as $\sigma - \nabla v =0$, and $\nabla \cdot \sigma + \lambda v=0$ on the domain. On the boundary, enforce $\sigma \cdot n=0$. Also enforce $v$ to have mean zero. This system will have solutions $\sigma \in H(div)$ and $v\in L^2$.
Such a reformulation is called a mixed method in the numerical analysis literature. Maybe the same in the PDE literature? Mixed methods are very well studied. Here’s a nice paper by Boffi and Gastaldi introducing these methods in a general setting for parabolic problems http://www.ing.unibs.it/~gastaldi/paper/evo.pdf
Comment by — June 22, 2012 @ 4:52 am
16. I was thinking about a geometric interpretation for the Laplacian, when we start the heat equation with the second eigenfunction $u_{2}$, looking at the evolution for this particular $u_{2}$ as time goes to infinity. If $v\left( t \right)$ is the evolved function/heat distribution at time $t$, then as $t$ goes to infinity the sup norm of $v \left( t \right)$ goes to zero. With $t$ fixed, we can look at the mean curvature of the graph of $v(t)$ as a surface embedded in $R^3$. If $h$ is the mean curvature at a point on that surface, then by definition $h = 1/2 \left( \kappa_{1} + \kappa_2 \right)$, where $\kappa_{1}$ and $\kappa_{2}$ are the principal curvatures of the surface, at some point in the graph in $R^3$ of $v \left( t \right)$. The principal curvatures were studied by Gauss, at the beginnings of differential geometry. As $t$ goes to infinity, the normal to the surface is almost perpendicular to the $x$, $y$ plane. If the normal is parallel to the $z$-axis, the mean curvature $h$ is half the trace of the Hessian matrix of $v \left( t \right)$ as a function of $x$ and $y$. The trace of the Hessian is thus both twice the mean curvature $h$ and the Laplacian of $v\left( t \right)$ at the reference point, say $\left(x_{0}, y_{0} \right)$. So as $t$ goes to infinity, the Laplacian at the reference point should be very close to twice the mean curvature, so Laplacian $\approx 2h$ as $t$ goes to infinity. Positive curvature belongs to surfaces such as $z = x^2 + y^2$ (we need to say which normal to a surface serves as reference to determine mean curvature, with sign included). So, Laplacian $\approx 2h$ as $t$ goes to infinity. I’ve thought about that when considering the evolution of $v\left( t \right)$ in a very short time interval. I’ve also thought about the heat kernel and Green’s function methods. The heat-reflecting property of the sides of the triangles makes things very complicated, it seems.
Comment by — June 23, 2012 @ 5:55 am
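The central approximation in the comment above (Laplacian $\approx 2h$ for a nearly flat graph) can be checked symbolically; the test surface below is an arbitrary choice made for illustration.

```python
# Symbolic check that for a nearly flat graph z = eps*w(x,y) the mean curvature
# is (eps/2)*Laplacian(w) + O(eps^3), i.e. the Laplacian is approximately 2h.
# The test surface w is an arbitrary choice.
import sympy as sym

x, y, eps = sym.symbols('x y epsilon', real=True)
w = sym.sin(x) * sym.cos(2 * y)
v = eps * w
vx, vy = sym.diff(v, x), sym.diff(v, y)
g = sym.sqrt(1 + vx**2 + vy**2)
h = sym.Rational(1, 2) * (sym.diff(vx / g, x) + sym.diff(vy / g, y))  # mean curvature

linear_term = sym.diff(h, eps).subs(eps, 0)           # coefficient of eps in h
laplacian_w = sym.diff(w, x, 2) + sym.diff(w, y, 2)
print(sym.simplify(linear_term - laplacian_w / 2))    # 0
```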
17. I’ve just rolled the thread over again, as this one is hitting the 100-comment mark: http://polymathprojects.org/2012/06/24/polymath7-research-threads-3-the-hot-spots-conjecture/
Comment by — June 24, 2012 @ 7:23 pm
http://www.scholarpedia.org/article/Granger_causality
# Granger causality
From Scholarpedia
Anil Seth (2007), Scholarpedia, 2(7):1667.
Granger causality is a statistical concept of causality that is based on prediction. According to Granger causality, if a signal X1 "Granger-causes" (or "G-causes") a signal X2, then past values of X1 should contain information that helps predict X2 above and beyond the information contained in past values of X2 alone. Its mathematical formulation is based on linear regression modeling of stochastic processes (Granger 1969). More complex extensions to nonlinear cases exist; however, these extensions are often more difficult to apply in practice.
Granger causality (or "G-causality") was developed in the 1960s and has been widely used in economics ever since. However, it is only within the last few years that applications in neuroscience have become popular.
## Personal account by Clive Granger
Figure 1: Prof. Clive W.J. Granger, recipient of the 2003 Nobel Prize in Economics
The following is a personal account of the development of Granger causality kindly provided by Professor Clive Granger (Figure 1). Please do not edit this section.
The topic of how to define causality has kept philosophers busy for over two thousand years and has yet to be resolved. It is a deep convoluted question with many possible answers which do not satisfy everyone, and yet it remains of some importance. Investigators would like to think that they have found a "cause", which is a deep fundamental relationship and possibly potentially useful.
In the early 1960's I was considering a pair of related stochastic processes which were clearly inter-related and I wanted to know if this relationship could be broken down into a pair of one way relationships. It was suggested to me to look at a definition of causality proposed by a very famous mathematician, Norbert Weiner, so I adapted this definition (Wiener 1956) into a practical form and discussed it.
Applied economists found the definition understandable and useable and applications of it started to appear. However, several writers stated that "of course, this is not real causality, it is only Granger causality." Thus, from the beginning, applications used this term to distinguish it from other possible definitions.
The basic "Granger Causality" definition is quite simple. Suppose that we have three terms, $$X_t\ ,$$ $$Y_t\ ,$$ and $$W_t\ ,$$ and that we first attempt to forecast $$X_{t+1}$$ using past terms of $$X_t$$ and $$W_t\ .$$ We then try to forecast $$X_{t+1}$$ using past terms of $$X_t\ ,$$ $$Y_t\ ,$$ and $$W_t\ .$$ If the second forecast is found to be more successful, according to standard cost functions, then the past of $$Y$$ appears to contain information helping in forecasting $$X_{t+1}$$ that is not in past $$X_t$$ or $$W_t\ .$$ In particular, $$W_t$$ could be a vector of possible explanatory variables. Thus, $$Y_t$$ would "Granger cause" $$X_{t+1}$$ if (a) $$Y_t$$ occurs before $$X_{t+1}\ ;$$ and (b) it contains information useful in forecasting $$X_{t+1}$$ that is not found in a group of other appropriate variables.
Naturally, the larger $$W_t$$ is, and the more carefully its contents are selected, the more stringent a criterion $$Y_t$$ is passing. Eventually, $$Y_t$$ might seem to contain unique information about $$X_{t+1}$$ that is not found in other variables which is why the "causality" label is perhaps appropriate.
The definition leans heavily on the idea that the cause occurs before the effect, which is the basis of most, but not all, causality definitions. Some implications are that it is possible for $$Y_t$$ to cause $$X_{t+1}$$ and for $$X_t$$ to cause $$Y_{t+1}\ ,$$ a feedback stochastic system. However, it is not possible for a determinate process, such as an exponential trend, to be a cause or to be caused by another variable.
It is possible to formulate statistical tests for which I now designate as G-causality, and many are available and are described in some econometric textbooks (see also the following section and the #references). The definition has been widely cited and applied because it is pragmatic, easy to understand, and to apply. It is generally agreed that it does not capture all aspects of causality, but enough to be worth considering in an empirical test.
There are now a number of alternative definitions in economics, but they are little used as they are less easy to implement.
Further references for this personal account are (Granger 1980; Granger 2001).
## Mathematical formulation
G-causality is normally tested in the context of linear regression models. For illustration, consider a bivariate linear autoregressive model of two variables $$X_1$$ and $$X_2\ :$$
$X_1(t) = \sum_{j=1}^p{A_{11,j}X_1(t-j)} + \sum_{j=1}^p{A_{12,j}X_2(t-j)+E_1(t)}$ $\tag{1} X_2(t) = \sum_{j=1}^p{A_{21,j}X_1(t-j)} + \sum_{j=1}^p{A_{22,j}X_2(t-j)+E_2(t)}$
where $$p$$ is the maximum number of lagged observations included in the model (the model order), the matrix $$A$$ contains the coefficients of the model (i.e., the contributions of each lagged observation to the predicted values of $$X_1(t)$$ and $$X_2(t)$$), and $$E_1$$ and $$E_2$$ are residuals (prediction errors) for each time series. If the variance of $$E_1$$ (or $$E_2$$) is reduced by the inclusion of the $$X_2$$ (or $$X_1$$) terms in the first (or second) equation, then it is said that $$X_2$$ (or $$X_1$$) Granger-(G)-causes $$X_1$$ (or $$X_2$$). In other words, $$X_2$$ G-causes $$X_1$$ if the coefficients in $$A_{12}$$ are jointly significantly different from zero. This can be tested by performing an F-test of the null hypothesis that $$A_{12}$$ = 0, given assumptions of covariance stationarity on $$X_1$$ and $$X_2\ .$$ The magnitude of a G-causality interaction can be estimated by the logarithm of the corresponding F-statistic (Geweke 1982). Note that model selection criteria, such as the Bayesian Information Criterion (BIC, (Schwartz 1978)) or the Akaike Information Criterion (AIC, (Akaike 1974)), can be used to determine the appropriate model order $$p\ .$$
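A minimal sketch of this bivariate test (illustrative only: the simulated series and coefficients below are invented, the lag order is fixed at $$p=2$$, and the regressions omit a constant term): fit the restricted model (past of X1 only) and the full model (past of X1 and X2) by least squares and compare residual variances with an F-test.

```python
# Bivariate G-causality sketch: does the past of X2 help predict X1?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, p = 2000, 2
x2 = rng.standard_normal(T)
x1 = np.zeros(T)
for t in range(p, T):                   # X2 drives X1 with one lag, by construction
    x1[t] = 0.5 * x1[t - 1] + 0.4 * x2[t - 1] + rng.standard_normal()

def lags(s):
    # columns s[t-1], ..., s[t-p], aligned with targets t = p, ..., T-1
    return np.column_stack([s[p - j:len(s) - j] for j in range(1, p + 1)])

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

y = x1[p:]
X_restricted = lags(x1)                                # past of X1 only
X_full = np.column_stack([lags(x1), lags(x2)])         # past of X1 and X2

q, dof = p, len(y) - X_full.shape[1]
F = ((rss(X_restricted, y) - rss(X_full, y)) / q) / (rss(X_full, y) / dof)
print("F =", F, " p-value =", stats.f.sf(F, q, dof))   # X2 G-causes X1
print("log F (the magnitude mentioned above) =", np.log(F))
```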
Figure 2: Two possible connectivities that cannot be distinguished by pairwise analysis. Adapted from Ding et al. (2006).
As mentioned in the previous section, G-causality can be readily extended to the $$n$$ variable case, where $$n>2\ ,$$ by estimating an $$n$$ variable autoregressive model. In this case, $$X_2$$ G-causes $$X_1$$ if lagged observations of $$X_2$$ help predict $$X_1$$ when lagged observations of all other variables $$X_3 \ldots X_N$$ are also taken into account. (Here, $$X_3 \ldots X_N$$ correspond to the variables in the set $$W$$ in the previous section; see also Boudjellaba et al. (1992) for an interpretation using autoregressive moving average (ARMA) models.) This multivariate extension, sometimes referred to as ‘conditional’ G-causality (Ding et al. 2006), is extremely useful because repeated pairwise analyses among multiple variables can sometimes give misleading results. For example, repeated bivariate analyses would be unable to disambiguate the two connectivity patterns in Figure 2. By contrast, a conditional/multivariate analysis would infer a causal connection from $$X$$ to $$Y$$ only if past information in $$X$$ helped predict future $$Y$$ above and beyond those signals mediated by $$Z\ .$$ Another instance in which conditional G-causality is valuable is when a single source drives two outputs with different time delays. A bivariate analysis, but not a multivariate analysis, would falsely infer a causal connection from the output with the shorter delay to the output with the longer delay.
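To see why the conditional version matters, here is a small simulation of the "single source, two delays" situation just described (again a sketch with made-up coefficients, reusing the same F-test idea as in the sketch above):

```python
# Z drives both X and Y, with Y lagging further behind.  A pairwise test tends
# to report a spurious X -> Y link; conditioning on Z should remove it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, p = 4000, 3
z = rng.standard_normal(T)
x = np.zeros(T)
y = np.zeros(T)
for t in range(p, T):
    x[t] = 0.8 * z[t - 1] + 0.3 * rng.standard_normal()   # short delay from Z
    y[t] = 0.8 * z[t - 2] + 0.3 * rng.standard_normal()   # longer delay from Z

def lags(s):
    return np.column_stack([s[p - j:len(s) - j] for j in range(1, p + 1)])

def gc_pvalue(target, restricted, full):
    """p-value of the F-test that the extra columns in `full` help predict `target`."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r
    q = full.shape[1] - restricted.shape[1]
    dof = len(target) - full.shape[1]
    F = ((rss(restricted) - rss(full)) / q) / (rss(full) / dof)
    return stats.f.sf(F, q, dof)

target = y[p:]
print(gc_pvalue(target, lags(y), np.column_stack([lags(y), lags(x)])))
# tiny p-value: the pairwise test spuriously suggests X -> Y
print(gc_pvalue(target, np.column_stack([lags(y), lags(z)]),
                np.column_stack([lags(y), lags(z), lags(x)])))
# with Z conditioned on, X should add nothing (p-value typically not small)
```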
Application of the above formulation of G-causality makes two important assumptions about the data: (i) that it is covariance stationary (i.e., the mean and variance of each time series do not change over time), and (ii) that it can be adequately described by a linear model. The #Limitations and extensions section will describe recent extensions that attempt to overcome these limitations.
## Spectral G-causality
By using Fourier methods it is possible to examine G-causality in the spectral domain (Geweke 1982; Kaminski et al. 2001). This can be very useful for neurophysiological signals, where frequency decompositions are often of interest. Intuitively, spectral G-causality from $$X_1$$ to $$X_2$$ measures the fraction of the total power at frequency $$f$$ of $$X_2$$ that is contributed by $$X_1\ .$$ For examples of the advantages of working in the frequency domain see #Application in neuroscience.
For completeness, we give below the mathematical details of spectral G-causality. The Fourier transform of (1) gives
$\tag{2} \left( \begin{array}{ll} A_{11}(f) & A_{12}(f)\\ A_{21}(f) & A_{22}(f) \end{array} \right) \left( \begin{array}{l} X_{1}(f)\\ X_{2}(f) \end{array} \right) = \left( \begin{array}{l} E_{1}(f)\\ E_{2}(f) \end{array} \right)$
in which the components of $$A$$ are
$A_{lm}(f) = \delta_{lm} - \sum_{j=1}^pA_{lm}(j)e^{(-i2{\pi}fj)}$ $\delta_{lm} = 1 \quad (l = m)$ $\delta_{lm} = 0 \quad (l \neq m)$
Rewriting Equation (2) as
$\left( \begin{array}{l} X_{1}(f)\\ X_{2}(f) \end{array} \right) = \left( \begin{array}{ll} H_{11}(f) & H_{12}(f)\\ H_{21}(f) & H_{22}(f) \end{array} \right) \left( \begin{array}{l} E_{1}(f)\\ E_{2}(f) \end{array} \right)$
we have
$\left( \begin{array}{ll} H_{11}(f) & H_{12}(f)\\ H_{21}(f) & H_{22}(f) \end{array} \right) = \left( \begin{array}{ll} A_{11}(f) & A_{12}(f)\\ A_{21}(f) & A_{22}(f) \end{array} \right)^{-1}$
where $$H$$ is the transfer matrix. The spectral matrix $$S$$ can now be derived as
$S(f) = \langle X(f)X^*(f) \rangle = \langle H(f) \Sigma H^*(f) \rangle$
in which the asterisk denotes matrix transposition and complex conjugation, $$\Sigma$$ is the covariance matrix of the residuals $$E(t)\ ,$$ and $$H$$ is the transfer matrix. The spectral G-causality from $$j$$ to $$i$$ is then
$I_{j \rightarrow i}(f) = -ln \left( 1 - \frac{ \left( { \Sigma_{jj} - \frac{\Sigma_{ij}^2}{\Sigma_{ii} }} \right) |H_{ij}(f)|^2}{S_{ii}(f)} \right)$
in which $$S_{ii}(f)$$ is the power spectrum of variable $$i$$ at frequency $$f\ .$$ (This analysis was adapted from (Brovelli et al. 2004; Kaminski et al. 2001)).
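These formulas translate almost directly into code. In the sketch below the VAR(1) coefficients and residual covariance are assumed, made-up values; with real data they would come from the model fitted as in Eq. (1):

```python
# Spectral G-causality for a 2-variable VAR(p): build A(f), invert to get the
# transfer matrix H(f), form the spectral matrix S(f), then apply the formula
# for I_{j->i}(f).  The coefficients below are illustrative, not fitted values.
import numpy as np

p = 1
A = np.zeros((p, 2, 2))
A[0] = [[0.5, 0.4],      # X1(t) = 0.5 X1(t-1) + 0.4 X2(t-1) + E1
        [0.0, 0.7]]      # X2(t) = 0.7 X2(t-1) + E2  (no influence from X1)
Sigma = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

def spectral_gc(f):
    """Return (I_{2->1}(f), I_{1->2}(f)) at normalized frequency f in [0, 0.5]."""
    Af = np.eye(2, dtype=complex)
    for k in range(1, p + 1):
        Af -= A[k - 1] * np.exp(-2j * np.pi * f * k)   # A_lm(f) = delta_lm - sum_j ...
    H = np.linalg.inv(Af)                              # transfer matrix
    S = H @ Sigma @ H.conj().T                         # spectral matrix
    def I(j, i):                                       # causality j -> i (0-based)
        sig = Sigma[j, j] - Sigma[i, j] ** 2 / Sigma[i, i]
        return -np.log(1 - sig * np.abs(H[i, j]) ** 2 / S[i, i].real)
    return I(1, 0), I(0, 1)

for f in (0.05, 0.1, 0.25, 0.4):
    print(f, spectral_gc(f))    # first entry > 0, second is 0 by construction
```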
Recent work by Chen et al. (2006) indicates that application of Geweke’s spectral G-causality to multivariate (>2) neurophysiological time series sometimes results in negative causality at certain frequencies, an outcome which evades physical interpretation. They have suggested a revised, conditional version of Geweke's measure which may overcome this problem by using a partition matrix method. Other variations of spectral G-causality are discussed by Breitung and Candelon (2006) and Hosoya (1991).
Two alternative measures which are closely related to spectral G-causality are partial directed coherence (Baccala & Sameshima 2001) and the directed transfer function (Kaminski et al. 2001; note these authors showed an equivalence between the directed transfer function and spectral G-causality). For comparative results among these methods see Baccala and Sameshima (2001), Gourevitch et al. (2006), and Pereda et al. (2005). Unlike the original time-domain formulation of G-causality, the statistical properties of these spectral measures have yet to be fully elucidated. This means that significance testing often relies on surrogate data, and the influence of signal pre-processing (e.g., smoothing, filtering) on measured causality remains unclear.
## Limitations and extensions
### Linearity
The original formulation of G-causality can only give information about linear features of signals. Extensions to nonlinear cases now exist; however, these extensions can be more difficult to use in practice and their statistical properties are less well understood. In the approach of Freiwald et al. (1999) the globally nonlinear data is divided into locally linear neighborhoods (see also Chen et al. 2004), whereas Ancona et al. (2004) used a radial basis function method to perform a global nonlinear regression.
### Stationarity
The application of G-causality assumes that the analyzed signals are covariance stationary. Non-stationary data can be treated by using a windowing technique (Hesse et al. 2003) assuming that sufficiently short windows of a non-stationary signal are locally stationary. A related approach takes advantage of the trial-by-trial nature of many neurophysiological experiments (Ding et al. 2000). In this approach, time series from different trials are treated as separate realizations of a non-stationary stochastic process with locally stationary segments.
### Dependence on observed variables
A general comment about all implementations of G-causality is that they depend entirely on the appropriate selection of variables. Obviously, causal factors that are not incorporated into the regression model cannot be represented in the output. Thus, G-causality should not be interpreted as directly reflecting physical causal chains (see Personal account).
## Application in neuroscience
Figure 3: Causal interactions among EEG sensors during stage 2 sleep. Adapted from Kaminski et al (2001).
Over recent years there has been growing interest in the use of G-causality to identify causal interactions in neural data. For example, Bernasconi and Konig (1999) applied Geweke’s spectral measures to describe causal interactions among different areas in the cat visual cortex, Liang et al. (2000) used a time-varying spectral technique to differentiate feedforward, feedback, and lateral dynamical influences in monkey ventral visual cortex during pattern discrimination, and Kaminski et al. (2001) noted increasing anterior to posterior causal influences during the transition from waking to sleep by analysis of EEG signals (Figure 3). More recently, Brovelli et al. (2004) identified causal influences from primary somatosensory cortex to motor cortex in the beta-frequency range (15-30 Hz) during lever pressing by awake monkeys. In the domain of functional MRI, Roebroeck et al. (2005) applied G-causality to data acquired during a complex visuomotor task, and Sato et al. (2006) used a wavelet variation of G-causality to identify time-varying causal influences. G-causality has also been applied to simulated neural systems in order to probe the relationship between neuroanatomy, network dynamics, and behavior (Seth 2005; Seth & Edelman 2007).
A recurring theme in the application of G-causality to empirical data is whether the aim is to recover the underlying structural connectivity, or to supply a description of network dynamics which may differ from the structure. In the former case, G-causality and its variants are successful to the extent that they can account for indirect interactions (using conditional G-causality) and unobserved variables (an unsolved problem). In the latter case, while accounting for these factors is still important, network dynamics is seen as a joint product of network structure and the dynamical processes operating on that structure, which may be modulated by environment and context. Spectral G-causality is a good example of a causal description which goes beyond inferring structural connectivity. Other examples are provided by modeling work which shows how the same network structure can generate different causal networks depending on context (Seth 2005; Seth & Edelman 2007).
## Alternative techniques
### Information theory
Schreiber (2000) introduced the concept of transfer entropy which is a version of mutual information operating on conditional probabilities. It is designed to detect the directed exchange of information between two variables, conditioned to common history and inputs. For a comparison of transfer entropy with other causal measures, including various implementations of G-causality, see (Lungarella et al. (in press)). An advantage of information theoretic measures, as compared to standard G-causality, is that they are sensitive to nonlinear signal properties. A limitation of transfer entropy, as compared to G-causality, is that it is currently restricted to bivariate situations. Also, information theoretic measures often require substantially more data than regression methods such as G-causality (Pereda et al., 2005).
### Maximum likelihood models
For neural data consisting of spike trains, it is possible to use a combination of point process theory and maximum likelihood modeling to detect causal interactions within an ensemble of neurons (Chornoboy et al. 1988; Okatan et al. 2005).
### Alternative frameworks
The above techniques (including G-causality) are data-driven inasmuch as causal interactions are inferred directly from simultaneously recorded time series. A different model-driven framework for analyzing causality is given by structural equation modeling (Kline 2005). In this approach, a causal model is first hypothesized, free parameters are then estimated, and only then is the fit of the model to the data assessed. While this method makes more efficient use of data than standard G-causality, it does so at the expense of constraining the repertoire of possible causal descriptions.
Another alternative framework for determining causality is to measure the effects of selective perturbations or lesions (Keinan et al. 2004; Tononi & Sporns 2003). While this method can in principle deliver unambiguous information about physical causal chains, perturbing or lesioning a system is not always possible or desirable and may disrupt the natural behavior of the system.
## Concluding remarks
Time series analysis methods are becoming increasingly prominent in attempts to understand the relationships between network structure and network dynamics in neuroscience settings. Many linear and nonlinear methods now exist, based on a variety of techniques including regression modeling, information theory, and dynamical systems theory (Gourevitch et al., 2006; Pereda et al. 2005). G-causality provides one method that, in its most basic form (see #Mathematical formulation), is easy to implement and rests on a firm statistical foundation. More complex extensions to the frequency domain and to nonlinear data now exist and continue to be developed. However, in the analysis of neurophysiological signals it may be that simple, linear methods should be tried first before moving on to more complicated alternatives.
## Resources
A detailed review of the theory and application of G-causality can be found in Ding et al. (2006).
James P. LeSage has provided a toolbox that includes G-causality analysis among a wide selection of econometric routines. This toolbox, designed for MATLAB (Mathworks, Natick, MA), can be downloaded from www.spatial-econometrics.com.
Anil K. Seth has developed a toolbox that focuses on time-domain G-causality and which includes methods for graph-theoretic analysis of G-causality interactions. This toolbox, also designed for MATLAB, can be downloaded from Anil K Seth's website.
Links to many other MATLAB resources for a variety of nonlinear time series analysis methods are provided in the Appendix of Pereda et al. (2005).
## See also
transfer entropy, mutual information, structural equation modeling
http://nrich.maths.org/6454/index?nomenu=1
## 'Reaction Types' printed from http://nrich.maths.org/
In chemistry, rates of reaction for complicated reactions are often approximated using simple polynomials. It is very useful to understand the qualitative nature of such algebraic representations. These concepts are explored in this task.
Consider the following algebraic forms for an approximate rate of a reaction $R$ in terms of $t$. When $R(t)$ is negative, it can be assumed that the reaction has either not started or has stopped.
$A: R_1(t) = -t +0.1t^3$
$B: R_2(t) = 2+ 2t - 2t^2$
$C: R_3(t) = 2 +t+0.1 t^2$
$D: R_4(t) = 5t - t^2$
$E: R_5(t) = t + t^2-0.1t^3$
$F: R_6(t) = -t + t^2$
Which of these start reacting immediately? Start slowly? Keep speeding up? Speed up to a peak and then slow down? Eventually stop? How would you best describe the reactions in words? Can you think of reactions which might be modelled by these sorts of equations?
Without doing any detailed calculations, can you work out which reaction will take the longest to get started? Which reaction will be the fastest after a long time?
Each reaction is simultaneously started and left to run. Assuming that the algebraic forms continue to hold, which of the reactions will, at some time, be the fastest reaction?
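If you want to check your qualitative answers afterwards, here is a small exploratory sketch (not part of the original task; the grid of times is an arbitrary choice) that evaluates each rate law numerically and reports which is largest:
```
# Evaluate the six rate laws on a coarse grid of times and report the fastest.
import numpy as np

rates = {
    "A": lambda t: -t + 0.1 * t**3,
    "B": lambda t: 2 + 2 * t - 2 * t**2,
    "C": lambda t: 2 + t + 0.1 * t**2,
    "D": lambda t: 5 * t - t**2,
    "E": lambda t: t + t**2 - 0.1 * t**3,
    "F": lambda t: -t + t**2,
}

for t in np.linspace(0, 12, 7):
    values = {name: R(t) for name, R in rates.items()}
    fastest = max(values, key=values.get)
    print(f"t = {t:5.1f}  fastest: {fastest}  "
          + "  ".join(f"{k}={v:7.1f}" for k, v in values.items()))
```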
|
http://mathhelpforum.com/calculus/104605-inequalities.html
|
# Thread:
1. ## inequalities
Use the inequality $|\sin(a)-\sin(b)| \le |a-b|$, which is valid for all real numbers $a$ and $b$, to prove that $\sin x$ is continuous on $\mathbb{R}$.
2. just apply the mean value theorem for the function $f(t)=\sin t$ on the interval $[a,b].$
3. Is the mean value theorem the same thing as the intermediate value theorem?
IVT
f(x) is defined on [a,b]
f(x) assumes all values of (a,b)
then f(c) where c is a point in (a,b) intersects f(x)
4. no, the assumptions are different, google it and see.
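For completeness, a sketch of the argument the replies point towards (the choice $\delta = \varepsilon$ is the standard one): fix $a \in \mathbb{R}$ and $\varepsilon > 0$, and take $\delta = \varepsilon$. If $|x - a| < \delta$, then by the given inequality $|\sin x - \sin a| \le |x - a| < \varepsilon$, so $\sin$ is continuous at $a$; since $a$ was arbitrary, it is continuous on $\mathbb{R}$. The mean value theorem is only needed if you also want to prove the inequality itself: $\sin x - \sin a = (\cos c)(x - a)$ for some $c$ between $a$ and $x$, and $|\cos c| \le 1$.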
|
http://asmeurersympy.wordpress.com/2010/06/
|
# Aaron Meurer's SymPy Blog
My blog on my work on SymPy and other fun stuff.
## The Risch Algorithm: Part 1
June 30, 2010
My work this week isn’t very interesting, even insomuch as my work any week is interesting, so this week I have elected to start a series of blog posts about the Risch Algorithm in general. I will start out with the basics in this post.
Anyone who has taken Calculus knows a handful of heuristics to calculate integrals. u-substitution, partial fractions, integration by parts, trigonometric substitution, and table integration are a few of the more popular ones. These are general enough to work for most integrals that are encountered in problems in Physics, Engineering, and so on, as well as most of those generated by solving differential equations from the same fields. But these fall short in a couple of ways. First off, they are just heuristics. If they fail, it does not mean that no integral exists. This means that they are useless for proving that certain functions, such as $e^{-x^2}$ do not have integrals, no matter how hard you try to find them. Second, they work for only relatively simple functions. For example, suppose you have a rational function in $\log{x}$ and $x$. An example would be $\frac{(\log{x})^2 + 2\log{x} + x^2 + 1}{x\log{x} + 2x^3}$. We are not interested in integrating this function, but rather in finding it back given its derivative, $- \frac{1 + 7 x^{2} \log{x} + \log{x} + (\log{x})^2 + 3 x^{2} + 6 x^{2} (\log{x})^2 + (\log{x})^3 + 2 x^{4}}{4 x^{4} \log{x} + x^{2} (\log{x})^2 + 4 x^{6}}$. The only method I named above that would come even close to being applicable to this integrand is partial fractions. This requires multivariate partial fraction decomposition (with respect to $x$ and $\log{x}$), and gives $-{\frac {2\,{x}^{2}+1+\log{x} }{{x}^{2}}}+{\frac {-1+8\,{x}^{4}-16\,{x}^{6}-{x}^{2}}{ \left( \log{x} +2\,{x}^{2} \right) ^{2}{x}^{2}}}+{\frac {-3\,{x}^{2}+12\,{x}^{4}-1}{ \left( \log{x} +2\,{x}^{2} \right) {x}^{2}}}$, which brings us no closer to a solution.
The reason that I started with a function and then computed its derivative was to show how easy it is to come up with a very complicated function that has an elementary anti-derivative. Therefore, we see that the methods from calculus are not the ones to use if we want an integration algorithm that is complete. The Risch Integration Algorithm is based on a completely different approach. At its core lies Liouville’s Theorem, which gives us the form of any elementary anti-derivative. (I wish to point out at this point that heuristics like this are still useful in a computer algebra system such as SymPy as fast preprocessors to the full integration algorithm).
The Risch Algorithm works by doing polynomial manipulations on the integrand, which is entirely deterministic (non-heuristic), and gives us the power of all the theorems of algebra, allowing us to actually prove that anti-derivatives cannot exist when they don’t. To start off, we have to look at derivations. As I said, everything with the Risch Algorithm is looked at algebraically (as opposed to analytically). The first thing to look at is the derivative itself. We define a derivation as any function $D$ on a ring $R$ that satisfies two properties:
1. $D(a + b) = Da + Db$ (Sum Rule),
2. $D(ab) = aDb + bDa$ (Product Rule)
for any $a, b \in R$. Furthermore, define the set of constant elements as $Const_D(R) = \{a \in R\textrm{ such that }Da = 0\}$. From just these two rules, you can prove all the rules from calculus such as the power rule and the quotient rule. Defining things algebraically lets us avoid analytic problems, such as discontinuities and the need to prove convergence all the time. Another problem from analysis is the multivalue nature of certain functions, namely the natural logarithm. We get around this by defining $\log{a}$ as the unique function satisfying $D\log{a} = \frac{Da}{a}$, for $a \neq 0$. From this definition we can prove the famous logarithmic identities $\log{ab} = \log{a} + \log{b}$ and $\log{a^n} = n\log{a}$ for logarithmic derivatives, again using only the two rules for a derivation given above. For example, $D\log{ab}=\frac{Dab}{ab}=\frac{aDb + bDa}{ab} = \frac{bDa}{ab} + \frac{aDb}{ab} =$$\frac{Da}{a} + \frac{Db}{b}=D\log{a} + D\log{b}=D(\log{a} + \log{b})$.
The above definition for the natural logarithm gives the first insight into how the integration algorithm works. We define transcendental functions in terms of their derivatives. So if $t = e^x$, then $Dt/t = 1$. We can define all of the trigonometric functions in terms of $e^x$ and $\log{x}$ if we use $\sqrt{-1}$, but we can also avoid this. For example, if $t = \tan{x}$, then $Dt = 1 + t^2$ because $\frac{d}{dx}\tan{x} = \sec^2{x} = 1 + \tan^2{x}$.
We say that $t\in K$ is a monomial over the field $k$ with respect to a derivation $D$ if it satisfies
1. $t$ is transcendental over $k$,
2. $D[t]\in k[t]$.
The first condition is necessary because we are only going to deal with the transcendental version of the Risch Algorithm (the algebraic case is solved too, but the solution method is quite different, and I am not implementing it this summer). The second condition just says that the derivative of t is a polynomial in t and a rational function in x. The functions I mentioned above all satisfy these properties for $K = \mathbb{Q}$. Theorems in analysis show that $\log{x}$, $e^x$, and $\tan{x}$ are all transcendental over $\mathbb{Q}[x]$. This is actually the only use of analysis that we make in the integration algorithm. Also, we see that if $t_1=\log{x}$, $t_2=e^x$, and $t_3=\tan{x}$, then $Dt_1=\frac{1}{x}$, $Dt_2=t_2$, and $Dt_3=1 + t_3^2$, which are all polynomials in their respective $t_i$ and rational functions in $x$. In the algorithm, $K$ is actually a tower of monomial extensions of $\mathbb{Q}$, so $t_n$ is a monomial over $\mathbb{Q}(x, t_1, \dots, t_{n-1})$. This allows us to work with functions like $e^{\tan{x}}$. We can’t make $t=e^{\tan{x}}$ directly because $\frac{d}{dx}e^{\tan{x}} = (1 + \tan^2{x})e^{\tan{x}}$ is not a polynomial in $t$ (it also contains $\tan{x}$). But if we let $t_1$ be such that $Dt_1=1 + t_1^2$, i.e., $t_1=\tan{x}$, then we can let $t_2$ be such that $Dt_2=(1 + t_1^2)t_2$, i.e., $t_2=e^{\tan{x}}$. Remember that the $t_i$ are all “functions” of x, but there is no need to write $t=t(x)$ as long as we remember that $Dt\neq 0$, i.e., $t\not \in Const_D(K)$. This is another advantage of using algebraic over analytic methods; it allows us to reduce an integral down to a rational function in the “symbols” $x$ and $t_1, t_2, \dots, t_n$. By convention, we make the first extension $t_0$ such that $Dt_0=1$, i.e., $t_0=x$. I will just call it $x$ here instead of $t_0$, to avoid confusion.
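To make the bookkeeping concrete, here is a toy sketch of a derivation acting on such a tower (my own illustration using plain SymPy expressions; the eventual risch.py works with Poly objects and is structured differently). The tower is the one just described: $x$, $t_1 = \tan{x}$, $t_2 = e^{\tan{x}}$, with each generator stored together with its derivative.
```
# A toy derivation on the tower x, t1 = tan(x), t2 = exp(tan(x)):
# each generator maps to its derivative, written in terms of the generators.
from sympy import symbols, diff, simplify

x, t1, t2 = symbols('x t1 t2')
tower = {x: 1, t1: 1 + t1**2, t2: (1 + t1**2) * t2}

def derivation(f):
    """Extend D to expressions in the generators via the chain rule."""
    return sum(diff(f, g) * Dg for g, Dg in tower.items())

# Sanity check of the product rule: D(t1*t2) = t1*D(t2) + t2*D(t1)
lhs = derivation(t1 * t2)
rhs = t1 * derivation(t2) + t2 * derivation(t1)
print(simplify(lhs - rhs))  # 0
```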
This is the preparsing that I alluded to in an earlier post that I have not implemented yet. The reason that I haven’t implemented it yet is not just because I haven’t gotten around to it. We have to be careful when we build up the extension that each element is indeed transcendental over the already built-up field $k$. For example, although it appears transcendental, the function $e^{\frac{1}{2}\log{(1 + x^2)}}$ is really algebraic because it equals $\sqrt{1 + x^2}$. There are additional requirements, such as that each extension is not the derivative or logarithmic derivative of an element of $k$ (see also the example I gave in the previous post). This is the part that I was talking about in my previous post that is not written out as much as the other algorithms in Bronstein’s book. So this is algorithmically solved, just like the rest of the Algorithm, but it is non-trivial and may end up being the hardest part of the algorithm for me to implement, just because it will probably require the most figuring out on my part.
So we can see that we can convert a transcendental integral, such as the one above, into a rational function in x and monomial extensions $t_1, t_2, \dots, t_n$. For example, the above integrand would become $- \frac{1 + t + t^{2} + 3 x^{2} + 6 t^{2} x^{2} + 7 t x^{2} + t^{3} + 2 x^{4}}{t^{2} x^{2} + 4 t x^{4} + 4 x^{6}}$. We then perform certain polynomial manipulations on this integrand, using the fact that $Dx=1$ and $Dt=\frac{1}{x}$. For the transcendental case of the Risch Algorithm, this is similar to the rational function integration that I outlined in this post, and has Liouville’s Theorem at its core. This is where I will start off next time.
Posted by Aaron Meurer
## Quick Update
June 26, 2010
I’ve spend most of this week sitting in a car, so while I have been able to do some work, I haven’t had much time to write up a blog post. So, to comply with Ondrej’s rule, here is a quick update.
I have been working my way through Bronstein’s book. I finished the outer algorithmic layer of the implementation. Basically, the algorithm does polynomial manipulations on the integrand. It first reduces the integrand into smaller integrals, until it gets to an integral where a subproblem must be solved to solve it. The subproblem that must be solved differs depending on the type of the integral. The first one that comes up in Bronstein’s text is the Risch Differential Equation, which arises from the integration of exponential functions. (I will explain all of these things in more detail in a future blog post). At this point, the algorithms begin to recursively depend on each other, requiring me to implement more and more algorithms at a time in order for each to work. To make things worse, a very fundamental set of algorithms is only described in the text, not given in pseudo-code, so I have had to figure those things out. These are algorithms to determine if a differential extension is a derivative or logarithmic derivative of elements that have already been extended. Again, I will explain better in a future post, but the idea is that you replace elements in an integrand with dummy variables, but each element has to be transcendental over the previous elements. So if you have $\int (e^x + e^{x^2} + e^{x + x^2})dx$, and you set $t_1 = e^x$ and $t_2 = e^{x^2}$ ($Dt_1 = t_1$ and $Dt_2 = 2xt_2$), then you cannot make $t_3 = e^{x + x^2}$ because $e^{x + x^2} = t_1t_2$. The ability to determine if an element is a derivative or a logarithmic derivative of an element of the already built differential extension is important not only for building up the extension for the integrand (basically the preparsing), but also for solving some of the cases of the subproblems such as the Risch Differential Equation problem.
So I am still figuring out some of the details on that one. The description in the book is pretty good (this is probably the best written math textbook I have ever seen), but I still have had to figure out some of the mathematical details on paper (which is something I enjoy anyway, but it can be more stressful). Hopefully by the next time I can have some code that is working enough to actually demonstrate solving some complex integrals, (with manual preparsing), and even more excitingly, prove that some non-elementary integrals, such as the classic $\int e^{-x^2}dx$, are indeed so. And I also hope to have some more explanations on how the Risch algorithm works in future posts.
Posted by Aaron Meurer
## Strange Python Behavior (can someone please explain to me what is going on here?)
June 16, 2010
Every once in a while, seemingly really simple Python code does something completely unexpected for me. Look at the following snippet of Python code. This is run straight from the 2.6.5 interpreter, with no other commands executed. Do you notice anything strange?
```$python
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = lambda i: a[i]
>>> l
<function <lambda> at 0x39e7f0>
>>> H = [(1, 2), (3, 4)]
>>> [l(0) + l(1) for a in H]
[3, 7]
```
Did you spot it? Here is a hint. Running a different but similar session:
```$python
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = lambda i: a[i]
>>> l
<function <lambda> at 0x39e7f0>
>>> l(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
```
Do you see it now? I defined the lambda function `l` in terms of `a` without defining first defining `a`! And furthermore, it just works when `a` is defined. This is actually independent of the fact that we are working in a list comprehension, as this continuation of the previous session shows:
```>>> a = [3, 4, 5]
>>> l(0)
3
```
But I want to expand on the list comprehension example, because there even more bizzare things going on here. Restarting a new session again:
```$python
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = lambda i: a[i]
>>> H = [(1, 2), (3, 4)]
>>> [l(0) + l(1) for a in H]
[3, 7]
>>> (l(0) + l(1) for a in H)
<generator object <genexpr> at 0x3a4350>
>>> list((l(0) + l(1) for a in H))
[7, 7]
```
So, if you are astute and have been using Python for long enough, you should be able to catch what is going on here. If you don’t know, here is a hint (continuation of previous session):
```>>> a
(3, 4)
```
So, as you may know, in Python 2.6 and earlier, list comprehension index variables “leak” into the local namespace. The strange thing here is that although the list comprehension would reset it, the generator version does not. Well, normally, it does do this:
```>>> x = 1
>>> [x for x in range(10)]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> x
9
>>> del x
>>> list((x for x in range(10)))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined
>>> x = 1
>>> list((x for x in range(10)))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> x
1
```
So the above bit has something to do with the way the `lambda` function was defined with the `a`. By the way, here is what happens with the generator comprehension (is that what these are called?) if `a` is not defined:
```>>> del a
>>> list((l(0) + l(1) for a in H))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <genexpr>
  File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
```
This is how I discovered this. I had defined a lambda function using a variable that was then passed to a list comprehension that used this variable as the index without realizing it. But then I tried converting this into a generator comprehension to see if it would be faster, and got the above error.
Finally, since the “feature” of leaking list comprehension loop variables into the local namespace is going away in Python 3, I expected things to behave at least a little differently in Python 3. I tried the above in a Python 3.1.2 interpreter and got the following:
```$python3
Python 3.1.2 (r312:79147, Mar 23 2010, 22:02:05)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> l = lambda i: a[i]
>>> l
<function <lambda> at 0x100585a68>
>>> H = [(1, 2), (3, 4)]
>>> [l(0) + l(1) for a in H]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
>>> list((l(0) + l(1) for a in H))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <genexpr>
  File "<stdin>", line 1, in <lambda>
NameError: global name 'a' is not defined
```
So in Python 3, both the list comprehension and the generator comprehensions act the same, which is not too surprising. I guess I should recode that piece of code to make it future proof, although this doesn’t seem easy at the moment, and it may require converting a one-liner into a six-liner. If you are interested, the piece of code is here.
So can anyone provide any insight into what is going on with that lambda function? Running it with the `-3` switch to `python2.6` didn’t give any warnings related to it.
Update: As I noted in a comment, I figured out how to make this future-proof. I need to convert it from
```def residue_reduce_derivation(H, D, x, t, z):
lambdafunc = lambda i: i*derivation(a[1], D, x, t).as_basic().subs(z, i)/ \
a[1].as_basic().subs(z, i)
return S(sum([RootSum(a[0].as_poly(z), lambdafunc) for a in H]))
```
to
```def residue_reduce_derivation(H, D, x, t, z):
return S(sum((RootSum(a[0].as_poly(z), lambda i: i*derivation(a[1], D, x, t).as_basic().subs(z, i)/ \
a[1].as_basic().subs(z, i)) for a in H)))
```
Thanks to all the commenters for the explanations.
Also, you may have noticed that I discovered that if you use [code] instead of <code>, you get these nicer code blocks that actually respect indentation! Now I just need to figure out how to make them syntax highlight Python code.
Update 2: [code='py'] colors it! Sweet!
Update 3: I just discovered that SymPy has a `Lambda()` object that handles this better. In particular, it pretty prints the code, and is what is already being used for `RootSum()` in the rational function integrator, at least in Mateusz’s polys9.
```>>> integrate(1/(x**5 + 1), x)
log(1 + x)/5 + RootSum(625*_t**4 + 125*_t**3 + 25*_t**2 + 5*_t + 1, Lambda(_t, _t*log(x + 5*_t)))
```
Still, this has been a very good learning experience.
Posted by Aaron Meurer
## A Weeklog
June 15, 2010
These seem to be all the rave these days, so I figured, why not jump on the bandwagon:
```
Aaron-Meurer:doc aaronmeurer 20100615153531 (integration)$ git weekreport
Aaron Meurer (20):
Fix some bugs in Poly
Make Poly(sin(x)/x*t, t, domain='EX').clear_denoms() work
Fix integrate to work correctly with heurisch.py
Use more efficient gcdexdiophantine() algorithm
Add support for taking the derivation over the coefficient domain in risch.py
Add (but do not yet use) splitfactor_sqf() in risch.py
Add polynomial_reduce() to risch.py
Add tests for algorithms in risch.py in a new test_risch.py file
Only allow coercion to larger domains
Allow coercion from ZZ(a) to ZZ(a, b)
Fix doctest in new heurisch.py file
Add residue_reduce()
Formatting fixes in docstrings in sympy/polys/algebratools.py
Add includePRS option to resultant functions
Add permute method to DMP
Add a test for the includePRS option of resultant()
Have residue_reduce() make S_i monic
Rewrite polynomial_reduce() non-recursively
Add integrate_hypertangent_polynomial()
Add integrate_nonlinear_no_specials()
```
Posted by Aaron Meurer
## Integration of rational functions
June 11, 2010
So for this week’s blog post I will try to explain how the general algorithm for integrating rational functions works. Recall that a rational function is the quotient of two polynomials. We know that using common denominators, we can convert the sum of any number of rational functions into a single quotient, $\frac{a_nx^n + a_{n-1}x^{n-1} + \cdots + a_2x^2 + a_1x + a_0}{b_nx^n + b_{n-1}x^{n-1} + \cdots + b_2x^2 + b_1x + b_0}$. Also, using polynomial division we can rewrite any rational function as the sum of a polynomial and the quotient of two polynomials such that the degree of the numerator is less than the degree of the denominator ($F(x) = \frac{b(x)}{c(x)} = p(x) + \frac{r(x)}{g(x)}$, with $deg(r) < deg(g)$). Furthermore, we know that the representation of a rational function is not unique. For example, $\frac{(x + 1)(x - 1)}{(x + 2)(x - 1)}$ is the same as $\frac{x + 1}{x + 2}$ except at the point $x = 1$, and $\frac{(x - 1)^2}{x - 1}$ is the same as $x - 1$ everywhere. But by using Euclid’s algorithm for finding the GCD of polynomials on the numerator and the denominator, along with polynomial division on each, we can cancel all common factors to get a representation that is unique (assuming we expand all factors into one polynomial). Finally, using polynomial division with remainder, we can rewrite any rational function $F(x)$ as $\frac{a(x)}{b(x)} = p(x) + \frac{c(x)}{d(x)}$, where $a(x)$, $b(x)$, $c(x)$, $d(x)$, and $p(x)$ are all polynomials, and the degree of $c$ is less than the degree of $d$.
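As a concrete illustration of those normalization steps (my own example, using SymPy’s public polynomial routines rather than the integrator’s internals):
```
# Cancel common factors and split off the polynomial part by division
# with remainder, using SymPy's public polynomial API.
from sympy import symbols, Poly, div, cancel

x = symbols('x')

# gcd cancellation: (x + 1)(x - 1)/((x + 2)(x - 1)) -> (x + 1)/(x + 2)
print(cancel(((x + 1)*(x - 1)) / ((x + 2)*(x - 1))))

# division with remainder: a/b = p + c/d with deg(c) < deg(d)
a = Poly(x**3 + 2*x**2 + 3, x)
b = Poly(x**2 + 1, x)
p, c = div(a, b)
print(p.as_expr(), c.as_expr())  # x + 2 and 1 - x
```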
We know from calculus that the integral of any rational function consists of three parts: the polynomial part, the rational part, and the logarithmic part (consider arctangents as complex logarithms). The polynomial part is just the integral of $p(x)$ above. The rational part is another rational function, and the logarithmic part, which is a sum of logarithms of the form $a\log{s(x)}$, where $a$ is an algebraic constant and $s(x)$ is a polynomial (note that if $s(x)$ is a rational function, we can split it into two logarithms of polynomials using the log identities).
To find the rational part, we first need to know about square-free factorizations. An important result in algebra is that any polynomial with rational coefficients can be factored uniquely into irreducible polynomials with rational coefficients, up to multiplication of a non-zero constant and reordering of factors, similar to how any integer can be factored uniquely into primes up to multiplication of 1 and -1 and reordering of factors (technically, it is with coefficients from a unique factorization domain, for which the rationals is a special case, and up to multiplication of a unit, which for rationals is every non-zero constant). A polynomial is square-free if this unique factorization does not have any polynomials with powers greater than 1. Another theorem from algebra tells us that irreducible polynomials over the rationals do not have any repeated roots, and so given this, it is not hard to see that a polynomial being square-free is equivalent to it not having repeated roots.
A square-free factorization of a polynomial is a list of polynomials, $P_1P_2^2 \cdots P_n^n$, where each $P_i$ is square-free (in other words, $P_1$ is the product of all the factors that appear to the power 1, $P_2$ is the product of all the factors that appear to the power 2, and so on). There is a relatively simple algorithm to compute the square-free factorization of a polynomial, which is based on the fact that $gcd(P, \frac{dP}{dx})$ reduces the power of each irreducible factor by 1. For example:
(Sorry for the picture. WordPress code blocks do not work)
It is not too hard to prove this using the product rule on the factorization of P. So you can see that by computing $\frac{P}{gcd(P, \frac{dP}{dx})}$, you can obtain $P_1P_2\cdots P_n$. Then, by recursively computing $A_0 = P$, $A_1 = gcd(A_0, \frac{dA_0}{dx})$, $A_2 = gcd(A_1, \frac{dA_1}{dx})$, … and taking the quotient each time as above, we can find the square-free factors of P.
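Here is a quick check of that gcd property with SymPy’s public routines (an illustration of the statement above, not the integrator’s own code):
```
# gcd(P, dP/dx) drops the multiplicity of every repeated factor by one,
# and sqf_list returns the full square-free decomposition.
from sympy import symbols, gcd, diff, factor, sqf_list

x = symbols('x')
P = (x - 1) * (x - 2)**2 * (x - 3)**3

print(factor(gcd(P, diff(P, x))))  # (x - 2)*(x - 3)**2
print(sqf_list(P))                 # (1, [(x - 1, 1), (x - 2, 2), (x - 3, 3)])
```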
OK, so we know from partial fraction decompositions we learned in calculus that if we have a rational function of the form $\frac{Q(x)}{V(x)^n}$ , where $V(x)$ is square-free, the integral will be a rational function if $n > 1$ and a logarithm if $n = 1$. We can use the partial fraction decomposition that is easy to find once we have the square-free factorization of the denominator to rewrite the remaining rational function as a sum of terms of the form $\frac{Q}{V_k^k}$, where $V_i$ is square-free. Because $V$ is square-free, $gcd(V, V')=1$, so the Extended Euclidean Algorithm gives us $B_0$ and $C_0$ such that $B_0V + C_0V'=1$ (recall that $g$ is the gcd of $p$ and $q$ if and only if there exist $a$ and $b$ relatively prime to $g$ such that $ap+bq=g$. This holds true for integers as well as polynomials). Thus we can find $B$ and $C$ such that $BV + CV'= \frac{Q}{1-k}$. Multiplying through by $\frac{1-k}{V^k}$, $\frac{Q}{V^k}=-\frac{(k-1)BV'}{V^k} + \frac{(1-k)C}{V^{k-1}}$, which is equal to $\frac{Q}{V^k} = (\frac{B'}{V^{k-1}} - \frac{(k-1)BV'}{V^k}) + \frac{(1-k)C-B'}{V^{k-1}}$. You may notice that the term in the parenthesis is just the derivative of $\frac{B}{V^{k-1}}$, so we get $\int\frac{Q}{V^k}=\frac{B}{V^{k-1}} + \int\frac{(1-k)C - B'}{V^{k-1}}$. This is called Hermite Reduction. We can recursively reduce the integral on the right hand side until the $k=1$. Note that there are more efficient ways of doing this that do not actually require us to compute the partial fraction decomposition, and there is also a linear version due to Mack (this one is quadratic), and an even more efficient algorithm called the Horowitz-Ostrogradsky Algorithm, that doesn’t even require a square-free decomposition.
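To see what Hermite reduction buys you, here is a quick check with SymPy’s already-implemented rational integrator (this only demonstrates the shape of the output, not the algorithm’s internals):
```
# The antiderivative of a rational function splits into a rational part
# (produced by Hermite reduction) and a logarithmic/arctangent part.
from sympy import symbols, integrate

x = symbols('x')
print(integrate((x**2 + 3) / (x**2 + 1)**2, x))
# expected: x/(x**2 + 1) + 2*atan(x) -- the first term is the rational part,
# the second is the logarithmic (here arctangent) part
```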
So when we have finished the Hermite Reduction, we are left with integrating rational functions with purely square-free denominators. We know from calculus that these will have logarithmic integrals, so this is the logarithmic part.
First, we need to look at resultants and PRSs. The resultant of two polynomials is defined as the product of the differences of the roots of the two polynomials, i.e., $resultant(A, B) = \prod_{i=1}^n\prod_{j=1}^m (\alpha_i - \beta_j)$, where $A = (x - \alpha_1)\cdots(x - \alpha_n)$ and $B = (x - \beta_1)\cdots(x - \beta_m)$ are monic polynomials split into linear factors. Clearly, the resultant of two polynomials is 0 if and only if the two polynomials share a root. It is an important result that the resultant of two polynomials can be computed from only their coefficients by taking the determinant of the Sylvester Matrix of the two polynomials. However, it is more efficiently calculated using a polynomial remainder sequence (PRS) (sorry, there doesn’t seem to be a Wikipedia article), which in addition to giving the resultant of A and B, also gives a sequence of polynomials with some useful properties that I will discuss below. A polynomial remainder sequence is a generalization of the Euclidean algorithm where in each step, the remainder $R_i$ is multiplied by a constant $\beta_i$. The Fundamental PRS Theorem shows how to compute specific $\beta_i$ such that the resultant can be calculated from the polynomials in the sequence.
Then, if we have $\frac{A}{D}$, left over from the Hermite Reduction (so $D$ square-free), let $R=resultant_t(A-t\frac{dD}{dx}, D)$, where $t$ is a new variable, and $\alpha_i$ be the distinct roots of R. Let $p_i=\gcd(A - \alpha_i\frac{dD}{dx}, D)$. Then it turns out that the logarithmic part of the integral is just $\alpha_1\log{p_1} + \alpha_2\log{p_2} + \cdots \alpha_n\log{p_n}$. This is called the Rothstein-Trager Algorithm.
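To see the Rothstein-Trager recipe on a tiny example of my own (public SymPy functions only, nothing from the integrator’s internals), take $A = 1$ and $D = x^2 - 1$:
```
# Rothstein-Trager on A/D = 1/(x**2 - 1): the roots of the resultant give the
# logarithm coefficients, and the gcds give their arguments.
from sympy import symbols, resultant, roots, gcd, diff, log, simplify

x, t = symbols('x t')
A, D = 1, x**2 - 1

R = resultant(A - t*diff(D, x), D, x)   # 1 - 4*t**2, up to sign
answer = sum(a * log(gcd(A - a*diff(D, x), D)) for a in roots(R, t))
print(answer)                           # log(x - 1)/2 - log(x + 1)/2
print(simplify(diff(answer, x) - A/D))  # 0, so it really is an antiderivative
```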
However, this requires finding the prime factorization of the resultant, which can be avoided if a more efficient algorithm called the Lazard-Rioboo-Trager Algorithm is used. I will talk a little bit about it. It works by using subresultant polynomial remainder sequences.
It turns out that the above $gcd(A-\alpha\frac{dD}{dx}, D)$ will appear in the PRS of $D$ and $A-t\frac{dD}{dx}$. Furthermore, we can use the PRS to immediately find the resultant $R=resultant_t(A-t\frac{dD}{dx}, D)$, which as we saw, is all we need to compute the logarithmic part.
So that’s rational integration. I hope I haven’t bored you too much, and that this made at least a little sense. I also hope that it was all correct. Note that this entire algorithm has already been implemented in SymPy, so if you plug a rational function in to `integrate()`, you should get back a solution. However, I describe it here because the transcendental case of the Risch Algorithm is just a generalization of rational function integration.
As for work updates, I found that the Poly version of the heuristic Risch algorithm was considerably slower than the original version, due to inefficiencies in the way the polynomials are currently represented in SymPy. So I have put that aside, and I have started implementing algorithms from the full algorithm. There’s not much to say on that front. It’s tedious work. I copy the algorithm from Bronstein’s book, then try to make sure that it is correct based on the few examples given and from the mathematical background given, and when I’m satisfied, I move on to the next one. Follow my integration branch if you are interested.
In my next post, I’ll try to define some terms, like “elementary function,” and introduce a little differential algebra, so you can understand a little bit of the nature of the general integration algorithm.
Posted by Aaron Meurer
## PuDB, a better Python debugger
June 4, 2010
So Christian Muise unwittingly just reminded me on IRC that I forgot to mention the main method that I used to learn how the heurisch function works in my last blog post. I usually only use a debugger when I have a really hard bug I need to figure out, when the print statements aren’t enough. The reason for this is that the debugger that I had been using, winpdb, is, well, a pain to use. There are so many little bugs, at least in Mac OS X, that it is almost not worthwhile to use it unless I need to. For example, restarting a script from the debugger doesn’t work. If I pass a point that I wanted to see, I have to completely close the winpdb window and restart it from the command line, which takes about half a minute. Also, winpdb uses its own variant of pdb, which seems to cause more problems than it solves (like bugging me about sympy importing pdb somewhere every time I start debugging).
But I really wanted to be able to step through the heurisch code to see exactly how it works, because many of the implementation details, such as gathering the components of an expression, will be similar if not exactly the same in the full algorithm. So I started my quest for a better debugger. For me, the ideal debugger is the C debugger in XCode. That debugger has saved me in most of my programming assignments in C. But it is only for C based languages (C, Objective-C, probably C++, …), not Python. So I did a Google search, and it turns out that there is a list of Python debuggers here. So I went through them, and I didn’t have to go far. The very first one, pudb, turns out to be awesome!
You can watch this screencast to get a better idea of the features, or even better install it and check them out. The debugger runs in the console, not in some half-hacked GUI (half-hacked is what any non-Cocoa GUI looks like in Mac OS X). The only down side to this is that you have to use the keyboard to do everything, but it ends up not being too bad. And you can press ‘?’ at any time to see the possible commands.
To install it, just do `easy_install pudb`. To run it, just create a script of what you want to debug, and do `python -m pudb.run my-script.py ` and it just works! I have a line that says `alias pudb='python -m pudb.run'` in my `.profile`, which makes it even easier to run. If you want to set a break point in the code, you can either navigate there from within pudb by pressing ‘m’, or you add a line that says `from pudb import set_trace; set_trace()` to the code (if you add the line to your code, you don’t even need to create a script. Just execute the code in IPython and when it hits that line, it will load the debugger).
Some cool features:
- IPython console. Just press ‘!’ to go to a console, where you can manipulate variables from the executed namespace, and you can choose an IPython console.
- Very easy to navigate. You just need to know the keys ‘s’, ‘n’, and ‘t’.
- View the code from elsewhere than what is being run. Pressing ‘m’ lets you view all imported modules. You can easily view points on the stack by choosing them.
- If an exception is thrown, it catches it! This may sound obvious for a debugger, but it is one of the things that didn’t work very well in winpdb. You can view the traceback of the exception, and choose to restart without having to close and reopen the debugger. Actually, it asks you if you want to restart every time the script finishes too, which is also a great improvement over winpdb.
This is what it looks like. Click for a bigger picture:
This is where the heurisch algorithm hangs.
Some annoyances (in case Andreas Kloeckner reads this):
- The default display for variables is type, which is completely useless. I have to manually go through and change each to str so I can see what the variable is. Is there a way to change this default?
- It asks me every time if I want to use IPython. I always want to use IPython.
- This might be a Mac OS X Terminal bug, but when I execute a statement that takes a while to run, it doesn’t redraw the pudb window until it finishes. This means that stepping through a program “flashes” black from what is above pudb in the window, and if I run a statement that takes forever, I lose the ability to see where it is unless I keyboard interrupt. Fortunately, it catches keyboard interrupts, so I can still see the traceback.
- There is no way to resize the variables window, or to scroll sideways in it. If I want to see what a long variable expression is, I have to go to the IPython console and type it there.
Some of these might be fixable and I just don’t know it yet. But even with them, this is still an order of magnitude improvement over winpdb. Now I can actually use the debugger all the time in my coding, instead of just when I have a really tough bug and no other choice.
UPDATE:
The first two were trivial to fix in a fork of the repository (isn’t open source awesome?). So if those are bothering you too, check out my branches at http://github.com/asmeurer/PuDB. Maybe if I have some time I will make them global options using environment variables or something and see if Andreas wants to merge them back into the main repo.
As for the third one, I realized that it might be a good thing, because you can see anything that is printed. Still, I would prefer seeing both, if possible (and the black flashes are annoying).
UPDATE 2:
You can resize the side view by pushing +/-, though there doesn’t seem to be a way to, say, make the variables view bigger and the breakpoints view smaller.
UPDATE 3:
A while back Ondrej modified the code to have a different color theme, and I followed suit. See this conversation at GitHub. So now, instead of looking like a DOS terminal, in PuDB for me looks like this:
PuDB XCode Midnight Theme Colors
This is exactly the same colors as my code in XCode, the editor I use, with the Midnight Theme. It’s pretty easy to change the colors to whatever you want. Right now, you have to edit the source, but Ondrej or I might someday make it so you can have themes.
Also, having used this all summer (and it was a life-saver having it in multiple occasions, and I am sure made my development speed at least twice as fast in others), I have one additional gripe. It is too difficult to arrow up to the variable that you want to access in the variables view. It would be nice to have a page up/page down feature there.
UPDATE 4: PuDB has since improved a lot, include many fixes by myself. It now supports themes, saved settings, variable name wrapping, and more. See this followup post.
Posted by Aaron Meurer
## Update for this week
June 4, 2010
So I started writing up a blog post on how rational function integration works, but Ondrej wants a blog post every week by the end of the week, and I don’t think I would do it justice by rushing to finish it now (read: I’m too lazy to do it). So instead, I’ll just give a short post (if that’s possible for me) on what I have been doing this week.
I finished up writing doctests for the polynomials module for now (see issue 1949), so now this week I started looking at the integrator. In particular, I went through each of the 40 issues with the Integration label and added them to a test file that I can monitor throughout the summer to see my progress. It is the test_failing_integrals.py file in my Integration branch, where all my work will be going for the foreseeable future. So if you want to follow my work, follow that branch. Here are some observations from those issues:
- integrate() can’t handle almost all algebraic integrals (functions with square roots, etc.). It can handle the derivative of arcsin and arcsinh because of special code in heurisch.py, but that’s about it. Before I can do any work on the Algebraic Risch Algorithm, I will need to implement the transcendental algorithm, so I think my temporary solution for this may be to add pattern matching heuristics for some of the more common algebraic integrals (anyone know a good integral table?).
- I figured out why integrate hangs forever with some integrals, such as the one in issue 1441. Here is, in a nutshell, how the Heuristic Risch algorithm works: Take the integrand and split it into components. For example, the components of x*cos(x)*sin(x)**2 are [x, cos(x), sin(x)]. Replace each of these components with a dummy variable, so if x = x0, cos(x) = x1, and sin(x) = x2, then the integrand is x0*x1*x2**2. Also, compute the derivative of each component in terms of the dummy variables. So the derivatives of [x0, x1, x2] are [1, -x2, x1]. Then, using these, perform some magic to create some rational functions out of the component dummy variables. Then, create a candidate integral with a bunch of unknowns [A1, A2, …], which will be rational numbers, and a multinomial of the An’s and the xn’s that should equal 0 if the candidate integral is correct. Then, because the xn’s are not 0, and there is also some algebraic independence, you have that the An coefficients of each term must equal 0. So you get a system of linear equations in the An’s. You then solve these equations, and plug the values of the An’s into the candidate integral to give you the solution, or, if the system is inconsistent, then it cannot find a solution, possibly because there is no elementary one. (The dummy-variable substitution is spelled out in a short sketch after this list.)
Well, that over simplifies a lot of things, but the point I want to make is that the integral from issue 1441 creates a system of ~600 linear equations in ~450 variables, and solving that equation is what causes the integration to hang. Also, as Mateusz, my mentor and the one who wrote the current integration implementation, pointed out, quite a bit of time is spent in the heurisch algorithm doing expansion on large Basic polynomials. When I say Basic polynomials, I mean that they are SymPy expressions, instead of Poly instances. Using Poly should speed things up quite a bit, so my next move will be to convert heurisch() into using Poly wherever applicable.
- There were a few bugs in the rational integration, which I fixed in my branch. The problem was in rational integrals with symbolic coefficients. Because the new polys are able to create polynomials using any expression as a generator, not just symbols, things like Poly(sin(y)*x, x) create Poly(sin(y)*x, x, domain='ZZ[sin(y)]'). But using the polynomial ring or fraction field creates problems with some things like division, whereas we really only want the domain to be EX (expression domain) in this case. So this was not too difficult to fix, and you can see the fix in my integration branch.
- Some integrals will require some good implementation of special functions such as the hypergeometric function to work. Sometimes, you don’t want to know what the non-elementary integral looks like, but you just want to calculate a definite integral. The solution here is to use Meijer-G functions, which are on the list of things to possibly do at the end of the summer if I have time.
- Another bug that I plan on fixing (I haven’t done it yet, but I know how to do it and it will be trivial), is this (issue 1888):
```
In [18]: print integrate(f(x).diff(x)**2, x)
2*D(f(x), x)*f(x)/3 - 2*x*D(f(x), x, x)*f(x)/3 + x*D(f(x), x)**2/3
```
The problem is in the step where it computes the derivative of the components, it tries to compute the derivative of f(x).diff(x) in terms of a dummy variable, but it reduces to 0 because diff(x2, x) == 0. Thus, it treats f(x).diff(x) like something that has a 0 third derivative, i.e., x**2.
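Here is the dummy-variable bookkeeping from the heuristic-algorithm description above, spelled out by hand for the x*cos(x)*sin(x)**2 example (my own illustration; the real heurisch() builds this machinery, and much more, internally):
```
# Replace the components of the integrand by dummy symbols and express both
# the integrand and the component derivatives in terms of those dummies.
from sympy import symbols, sin, cos, Dummy

x = symbols('x')
x0, x1, x2 = Dummy('x0'), Dummy('x1'), Dummy('x2')

# Substitute the transcendental components before the bare x, so that the x
# inside cos(x) and sin(x) is not replaced first.
subs_order = [(cos(x), x1), (sin(x), x2), (x, x0)]

integrand = x * cos(x) * sin(x)**2
print(integrand.subs(subs_order))         # _x0*_x1*_x2**2
for comp in (x, cos(x), sin(x)):
    print(comp.diff(x).subs(subs_order))  # 1, then -_x2, then _x1
```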
Well that’s it. I knew I couldn’t make a short blog post :). If you want to help, I have three branches that need review (1, 2, 3), and except for the last one, my work is based on top of the other two, so none of my integration work can be pushed in until those two are reviewed positively.
|
http://math.stackexchange.com/questions/188672/finding-equilibrium-points-of-a-differential-equation
|
# Finding equilibrium points of a differential equation
OK, this is a weak question, but I'm a bit rusty on differential equations, so here it is:
I need to find the general solution of x'=ax+3 and then find the equilibrium points as well as which points are equilibria sinks and sources (a is a parameter).
I found the general solution of x(t). It was: $x(t)=Ce^{at}-\frac{3}{a}$
Now, from the one or two lines in the text about equilibrium points, I'm seeing that that will be a "value" that makes x(t) = 0. Though, the way it's worded in the text makes it seem like a choice of "a" or "C" will be what makes an equilibrium solution.
This is where I'm confused. Do I solve the general solution for t by setting x = 0? Do I solve the original equation by setting x' to 0 and finding x (this is what I saw on some website)?
I'm just not sure what I'm supposed to do to find the equilibrium points, and then how to decide which are sinks and which are sources.
-
## 1 Answer
Your differential equation is $x' = ax+3$. An equilibrium solution is one for which $x'=0$ along the solution. As a counter-example, $x(t)=0$ (the zero function) is not an equilibrium solution because although $x'=0$ it is not true that $a(0)+3=x'$. The zero function is not a solution to the given differential equation. We need two things:
1.) the proposed solution has the property $x'=0$
2.) the proposed solution is in fact a solution (when you plug it into the DEQn it works)
Therefore, $x'=ax+3=0$ yields $x = -3/a$ as the equilibrium solution.
For more complicated differential equations the equilibrium solutions can be more interesting. Here this solution is actually the exceptional case in your general solution if you derived it by separation of variables. Notice,
$$\frac{dx}{dt} = ax+3$$
yields
$$\int \frac{dx}{ax+3}= \int dt$$
provided $ax+3 \neq 0$. Integrating,
$$\frac{1}{a}\ln |ax+3| = t+c$$
algebra,
$$|ax+3| = e^{at+ac} =e^{ac}e^{at}$$
Hence,
$$ax+3 = \pm e^{ac}e^{at}$$
Or, $x = Ce^{at} - 3/a$ where $C = \pm e^{ac}/a$. Notice that $C$ defined in this way ought not be zero. It is interesting that $C=0$ brings us the equilibrium solution.
Often we find equilibrium solutions between other classes of solutions. For example, see the logistic equation $P' = P(1-P/C)$. The equilibrium solution is attained by the constant solution $P=C$ whereas there are two other solution classes which asymptotically approach the equilibrium either from below or from above.
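As for the sink/source part of the original question (not addressed above), the sign of $a$ decides it: the general solution is $x(t) = Ce^{at} - 3/a$, so for $a < 0$ every solution decays toward the equilibrium $-3/a$ (a sink), while for $a > 0$ solutions move away from it (a source). A quick numerical illustration, with made-up values of $a$ and $C$:
```
# Solutions x(t) = C*exp(a*t) - 3/a approach the equilibrium -3/a when a < 0
# and run away from it when a > 0.  The values of a and C are illustrative.
import numpy as np

def x(t, a, C):
    return C * np.exp(a * t) - 3.0 / a

t = np.linspace(0.0, 5.0, 6)
for a in (-1.0, 1.0):
    print(f"a = {a:+.0f}, equilibrium x* = {-3.0/a:+.1f}:",
          np.round(x(t, a, C=1.0), 3))
```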
-
Thanks! I actually finished this up after finding a nice website with a clear description of the equilibrium solution (my text is awful), but it's nice to confirm that what I did worked! So thanks a bunch! – user25326 Aug 30 '12 at 5:45
glad to almost help :) – James S. Cook Aug 30 '12 at 5:49
|
http://dialinf.wordpress.com/page/4/
|
# A Dialogue on Infinity
between a mathematician and a philosopher
## Model Theory and Category Theory
July 29, 2008
Posted by dcorfield in Uncategorized.
David Kazhdan has some interesting things to say about model theory, and in particular its relationship to category theory, in his Lecture notes in Motivic Integration.
In spite of it successes, the Model theory did not enter into a “tool box” of mathematicians and even many of mathematicians working on “Motivic integrations” are content to use the results of logicians without understanding the details of the proofs.
I don’t know any mathematician who did not start as a logician and for whom it was “easy and natural” to learn the Model theory. Often the experience of learning of the Model theory is similar to the one of learning of Physics: for a [short] while everything is so simple and so easily reformulated in familiar terms that “there is nothing to learn” but suddenly one find himself in a place when Model theoreticians “jump from a tussock to a hummock” while we mathematicians don’t see where to “put a foot” and are at a complete loss.
## Same but multifaceted
July 11, 2008
Posted by Alexandre Borovik in Uncategorized.
Continuing the topic of “sameness”, it is interesting to compare behaviour of two familiar objects: the field of real numbers $\mathbb{R}$ and the field of complex numbers $\mathbb{C}$.
$\mathbb{C}$ is uncountably categorical, that is, it is uniquely described in a language of first order logic among the fields of the same cardinality.
In case of $\mathbb{R}$, its elementary theory, that is, the set of all closed first order formulae that are true in $\mathbb{R}$, has infinitely many models of cardinality continuum $2^{\aleph_0}$.
In naive terms, $\mathbb{C}$ is rigid, while $\mathbb{R}$ is soft and spongy and shape-shifting. However, $\mathbb{R}$ has only trivial automorphisms (an easy exercise), while $\mathbb{C}$ has a huge automorphism group, of cardinality $2^{2^{\aleph_0}}$ (this also follows with relative ease from basic properties of algebraically closed fields). In naive terms, this means that there is only one way to look at $\mathbb{R}$, while $\mathbb{C}$ can be viewed from an incomprehensible variety of different points of view, most of them absolutely transcendental. Actually, there are just two comprehensible automorphisms of $\mathbb{C}$: the identity automorphism and complex conjugation. It looks like the construction of all other automorphisms involves the Axiom of Choice. When one looks at what happens at the model-theoretic level, it appears that “uniqueness” and “canonicity” of an uncountable structure is directly linked to its multifacetedness. I am still hunting for appropriate references for this fact. Meanwhile, I got the following e-mail from a model theorist colleague, Zoe Chatzidakis:
Models of uncountably categorical theories behave really like vector spaces: if inside a model $M$ you take a maximal independent set $X$ of elements realizing the generic type, and take any permutation of $X$, it extends to an automorphism of the model. So, if $M$ is of size $\kappa > \aleph_0$, then any basis has size $\kappa$, and its automorphism group has size $2^\kappa$.
I don’t know a reference, but it should be in any model theory book which talks about strongly minimal sets. Or maybe in the paper by ??? Morley ??? which shows that you have a notion of dimension and so on? I.e., that $\aleph_1$ categorical theories and strongly minimal sets are the same.
It is really a well-known result, so you probably don’t need a reference if you cite it in a paper.
## Facing Eternity
July 4, 2008
Posted by Alexandre Borovik in Uncategorized.
Mikhail Zlatkovsky. Facing Eternity.
Under a changed title “Coat Star” the cartoon won first prize at Ken Sprague Fund competition at Earthworks 2008.
## Ultraproducts of fields, II
June 28, 2008
Posted by Alexandre Borovik in Uncategorized.
I continue my post on ultraproducts. So, we want to understand in what sense an ultraproduct of finite fields $F_i$ of unbounded order is a limit at infinity of finite fields. The answer now should be obvious: since ultraproducts are residue fields for maximal ideals in the cartesian product
$R = \prod F_i,$
the topology in question should be the canonical topology (Zariski topology) of the spectrum of the ring, $\mbox{Spec}(R)$. It instantly follows from the description of ideals and maximal ideals in $R$ that this is the Stone topology on the set of ultrafilters on $\mathbb{N}$, or, what is the same, the Cech-Stone compactification $\beta\mathbb{N}$ of the set $\mathbb{N}$ with the discrete topology. Therefore the answer is: an ultraproduct is the limit in the Cech-Stone compactification of a discrete countable set.
I have to admit that at this point I reached the limits of my knowledge of set-theoretic topology and had to dip into Wikipedia. It turns out that $\beta\mathbb{N}$, and even more so its non-principal part ${\mathbb{N}}^* = \beta{\mathbb{N}} \setminus {\mathbb{N}}$, is characterised by some unique properties: if the continuum hypothesis holds then ${\mathbb{N}}^*$ is the unique Parovicenko space, up to isomorphism.
According to Wikipedia, a Parovicenko space is a topological space X satisfying the following conditions:
• X is compact Hausdorff
• X has no isolated points
• X has weight c, the cardinality of the continuum (this is the smallest cardinality of a base for the topology).
• Every two disjoint open Fσ subsets of X have disjoint closures
• Every nonempty Gδ of X has non-empty interior.
As you can see, ${\mathbb{N}}^*$ is uniquely characterised by very natural properties.
It is yet another manifestation of one of the most paradoxical properties of mathematical infinity: canonicity of workable constructions in the infinite domain.
## NHS invented a new type of infinity…
June 21, 2008
Posted by Alexandre Borovik in Uncategorized.
My posts are likely to become shorter and sparser — as a result of a work trauma, I have developed a medical condition (see a photo) which makes typing very difficult, and I depend on the kind help of my wife with all my typing needs. On the bright side, my experience gave me a new understanding of infinity. From a mathematical viewpoint, indefinitely long waiting lists for treatment on NHS (National Health Service) were not something new, they were just a special case of potentially infinite natural series
Week 1, Week 2, Week 3, … etc.
going on and on in a very old-fashioned way. The real contribution of NHS to mathematics of infinity is that they make patients to wait (indefinitely again) to be included on a waiting list. It is an infinity of natural numbers enhanced by an additional constraint: you are not allowed to start counting.
## A letter from a student
June 17, 2008
Posted by Alexandre Borovik in Uncategorized.
Hello Professor,
I hope everything is good and that you recovered from the accident in your finger.
I just wanted to share some personal thoughts. Suppose we have a system that is discrete and finite. We have the natural numbers {1,2,3,…,T} where T is the symbol for the ”biggest natural number”. We will never have a value for T but we accept that it is a fixed natural number. We can also include 0 in our system. How much mathematics can we do?
We could define the usual addition and multiplication. But we will have a problem when the result is ”greater than T”. But nothing ”greater than T” exist…
Suppose for example that T=10. Then we have 2+3=5, 4+5=9, 5+5=10. But what about 5+7? We could just define 5+7= err (error). just like the small calculators do when they reach their limit. But err is not in our system… We could work modulo T OR we could just say that 5+7=10=T. and 8+9=T. and 7+8=10=T etc.
(If we choose the last option then obviously 1+T=T and T+T=T. So T has a similar behaviour as the infinite that we learned at school. But the big difference is that in our system T is a fixed natural number!)
I just can not see why we NEED infinite to make mathematics. Is it a matter of convenience? Is it just to make things simpler? If this is the case then we should accept that infinite and continuous entities are just tools, ideas, symbols that make our life easier. But we should remember that IDEAS themselves are FINITE since they live in the finite world of our brains!
I think the whole problem is more about aesthetics. I think that someone can accept that only discrete and finite things exist and, at the same time, that this belief does not destroy all the beautifull mathematics that have been created until now.
While I was searching the Internet I found this PDF document which I attached and I found amusing. I also found other people expressing similar views (against infinite) like for example Alexander Yessenin-Volpin.
## Ultraproducts of fields, I
June 16, 2008
Posted by Alexandre Borovik in Uncategorized.
My immediate research interests focus more and more on an interplay between finite and infinite in algebra (at least this is where my chats with my PhD student drift to). In particular, I frequently have to use a specific construction, an ultrafilter product of fields. It is pretty sublime in the sense of David Corfield and leads to the appearance of very interesting canonical objects.
We start with a family of fields $F_i$, $i \in I$. For simplicity, assume that all the fields are finite, of unbounded order, and that the index set $I$ is just the set of natural numbers $\mathbb{N}$ (actually this is the case most interesting to me). We form the Cartesian product $R$ of the $F_i$: this is just the set of infinite strings
$\{ (f_1, f_2, \dots ) \mid f_i \in F_i \}$
with componentwise operations of addition and multiplication. In what follows, we shall make frequent use of the zero set of a string $f = (f_1, f_2, \dots )$, that is, the set of indices where the string's components are zero:
$zero(f) = \{ i \in {\mathbb{N}} \mid f_i =0\}$.
Obviously, $R$ is a commutative ring with unity. Let us look at its ideals. One can easily see that an ideal $J$ in $R$ is uniquely determined by the set

$zero(J) = \{zero(f) \mid f \in J\}$;

indeed, the non-zero components of a string $f \in J$ can be changed arbitrarily, without moving $f$ outside of $J$, by multiplying by an appropriate string of invertible elements. One can also see at once that $zero(J)$ is a filter on $\mathbb{N}$, that is, a collection of non-empty subsets of $\mathbb{N}$ closed under taking finite intersections and supersets, and, moreover, that the correspondence

$J \mapsto zero(J)$

is a one-to-one correspondence between proper ideals in $R$ and filters on $\mathbb{N}$ which preserves inclusion of ideals and inclusion of filters. Therefore maximal ideals in $R$ correspond to maximal filters on $\mathbb{N}$; both exist by Zorn's Lemma, one of the equivalent formulations of the Axiom of Choice. If now $J$ is a maximal ideal in $R$, the fact that the factor ring $R/J$ is a field and, in particular, has no zero divisors, translates into the fact that the corresponding maximal filter $\mathcal{F}$ is an ultrafilter: it has the property that, for any subset $X \subseteq \mathbb{N}$, either $X$ or its complement ${\mathbb{N}} \smallsetminus X$ belongs to $\mathcal{F}$.

Given an ultrafilter $\mathcal{F}$ on $\mathbb{N}$, the ultraproduct $F = \prod F_i/\mathcal{F}$ is nothing more than the corresponding residue field $R/J$. There are obvious principal ultrafilters on $\mathbb{N}$: each consists of all subsets containing a given element $i \in \mathbb{N}$, and the corresponding ultraproduct is just the original field $F_i$.
Non-principal filters do exist. One very interesting non-principal filter on $\mathbb{N}$ is the Fréchet filter, consisting of all subsets with finite complements.
But if we take an ultrafilter containing the Fréchet filter (one exists by Zorn's Lemma), the corresponding ultraproduct $F$ has many marvelous properties. In particular, if the fields $F_i$ have pairwise different characteristics, $F$ turns out to be a field of characteristic zero (I leave the proof of this fact as an exercise to the reader).
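For readers who want to experiment, here is a small computational sketch of the ring $R$ (a finite truncation only; the choice of prime fields $F_i = \mathbb{Z}/p_i$ and all names are mine). A genuine non-principal ultrafilter cannot be exhibited by a program, since its existence rests on Zorn's Lemma, so the sketch illustrates only the componentwise arithmetic and the zero-set map, not the quotient field $F$.

```python
# A finite truncation of the product ring R = prod_i F_i, with the factors
# taken to be the prime fields Z/p for the first few primes (my choice).
# Elements are tuples; addition and multiplication are componentwise.

PRIMES = [2, 3, 5, 7, 11]          # characteristics of F_1, ..., F_5

def add(f, g):
    return tuple((a + b) % p for a, b, p in zip(f, g, PRIMES))

def mul(f, g):
    return tuple((a * b) % p for a, b, p in zip(f, g, PRIMES))

def zero_set(f):
    """zero(f): the set of indices where the string f has a zero component."""
    return {i for i, (a, p) in enumerate(zip(f, PRIMES)) if a % p == 0}

if __name__ == "__main__":
    f = (1, 0, 2, 0, 3)
    g = (1, 2, 3, 4, 5)
    print(add(f, g))       # (0, 2, 0, 4, 8)
    print(mul(f, g))       # (1, 0, 1, 0, 4)
    print(zero_set(f))     # {1, 3}
    # Multiplying can only enlarge the zero set -- the mechanism behind the
    # correspondence between ideals of R and filters on the index set.
    print(zero_set(mul(f, g)) >= zero_set(f))   # True
```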
In the next instalment of this post, I will discuss the meaning of a frequently used assertion that an ultraproduct $F$ is a limit at infinity of finite fields $F_i$.
## The SublimeJune 11, 2008
Posted by dcorfield in Uncategorized.
7 comments
There’s an interesting post – Whatever Happened to Sublimity? – at the blog Siris. It includes a quotation from Edmund Burke
But let it be considered that hardly any thing can strike the mind with its greatness which does not make some sort of approach toward infinity; which nothing can do while we are able to perceive its bounds; but to see an object distinctly, and to perceive its bounds, are one and the same thing. A clear idea is, therefore, another name for a little idea. (A Philosophical Inquiry into the Origin of Our Ideas of the Sublime and the Beautiful, Part II, Section V.)
A natural question to ask, then, is where do we encounter the sublime in mathematics? And an obvious answer, you might think, would be the mathematical infinite.
Joseph Dauben has an interesting section in his book – Georg Cantor: his mathematics and philosophy of the infinite, Harvard University Press, 1979 – on how Cantor, receiving such discouragement from his mathematical colleagues, found an audience in certain thinkers within the Catholic church. Where earlier in the nineteenth century any attempt to describe a completed infinity was viewed as a sacrilegious attempt to circumscribe God, some theologians were open to Cantor’s new hierarchy of infinities, with its unreachable Absolute Infinite leaving room for the divine.
Personally, set theory has rarely evoked in me a sense of the sublime. On the other hand, the following comment by Daniel Davis does:
Behrens and Lawson use stacks, the theory of buildings, homotopy fixed points, the above model category, and other tools to make it possible to use the arithmetic of Shimura varieties to help with understanding the stable homotopy groups of spheres.
There’s plenty about the infinite in that statement, it’s true, but there’s so much more to it than that.
Brandon Watson, the blogger at Siris, writes
…what often strikes me when I look around at the philosophical scene today is how foreign this has all become. There are a few exceptions, but sublimity has vanished as a serious concern.
With regard to the philosophy of mathematics in particular I couldn’t agree more.
## Axiom of Choice, quotesJune 1, 2008
Posted by Alexandre Borovik in Uncategorized.
2 comments
• “The Axiom of Choice is obviously true, the Well-ordering theorem obviously false, and who can tell about Zorn’s lemma?” — Jerry Bona. (This is a joke: although the axiom of choice, the well-ordering principle, and Zorn’s lemma are all mathematically equivalent, most mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn’s lemma to be too complex for any intuition.)
• “The Axiom of Choice is necessary to select a set from an infinite number of socks, but not an infinite number of shoes.” — Bertrand Russell. (The observation here is that one can define a function to select from an infinite number of pairs of shoes by stating, for example, “choose the left shoe”. Without the axiom of choice, one cannot assert that such a function exists for pairs of socks, because left and right socks are (presumably) identical to each other.)
• “The axiom gets its name not because mathematicians prefer it to other axioms.” — A. K. Dewdney. (This quote comes from the famous April Fool’s Day article in the “Computer Recreations” column of Scientific American, April 1989.)
[Source]
## Induction and recursion IIMay 28, 2008
Posted by David Pierce in Uncategorized.
3 comments
Formal logic provides examples of structures that allow proof by induction, but not necessarily definition by recursion. Let us consider, for example, the propositional logic that is apparently due to Łukasiewicz. We first define the (well-formed) formulas of the logic. We start with some set of so-called propositional variables. Then the set of formulas is the smallest set of strings that contains these variables and is closed under certain formation rules, namely:
1. the singulary operation converting a string S to its negation, ~S;
2. the binary operation converting a pair (S, T) of strings to the implication, (S → T).
The set of theorems of the logic is the smallest set of formulas that contains all of the axioms and is closed under a certain rule of inference. The axioms are the formulas of any of three forms:
1. (F → (G → F))
2. ((F → (G → H)) → ((F → G) → (F → H)))
3. ((~F → ~G) → (G → F))
Here F and G stand for formulas. The rule of inference is detachment (or modus ponens), the binary operation that converts a pair (F, (F → G)) of formulas into the formula G. As stated, this is only a partial operation; we can make it total by having it convert (F, H) to F whenever H does not have the form (F → G) for some formula G.
The “inductive” definition of theorems is thus similar to the inductive definition of formulas. In each case, we start with a set and close under one or more operations. Immediately, proof by induction is possible on the set so defined. For example, we can prove inductively that each formula features as many left as right brackets. A feature of theorems that is proved by induction will be given in a moment.
However, unlike the formation rules for formulas, our rule of inference for theorems is not obviously well-defined. To see that it is well-defined, we must establish “unique readability” of formulas, namely:
1. each formula is exactly one of the following: a variable, a negation, or an implication;
2. the formation rules are injective.
Here injectivity of the formation rules corresponds to injectivity of the successor-operation on the set of natural numbers. The partition of the set of formulas into variables, negations, and implications corresponds to the partition of the set of natural numbers into the successors and the initial number (zero or one, as one prefers).
Unique readability of formulas allows recursive definitions. Indeed, the definition of the rule of detachment can be understood as recursive: If we denote this operation by D, then we have
1. D(F, P) = F for all propositional variables P;
2. D(F, ~G) = F;
3. D(F, (F → G)) = G;
4. D(F, (H → G)) = F if H is not F.
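To make this concrete, here is a minimal sketch of the formulas and of the totalised operation D in Python; the tuple encoding of formulas is my own choice, not part of the original definition.

```python
# Formulas: a propositional variable is a string; a negation is ("~", F);
# an implication is ("->", F, G).  The tag of a tuple records which
# formation rule was used, so unique readability is built into the encoding.

def neg(f):
    return ("~", f)

def imp(f, g):
    return ("->", f, g)

def detach(f, h):
    """The totalised rule of detachment: D(F, (F -> G)) = G,
    and D(F, H) = F in every other case, as in the list above."""
    if isinstance(h, tuple) and h[0] == "->" and h[1] == f:
        return h[2]
    return f

if __name__ == "__main__":
    F, G = "p", "q"
    print(detach(F, imp(F, G)))   # 'q'  -- genuine modus ponens
    print(detach(F, G))           # 'p'  -- H is a variable
    print(detach(F, neg(G)))      # 'p'  -- H is a negation
    print(detach(F, imp(G, F)))   # 'p'  -- an implication whose antecedent is not F
```

With this encoding, the partition of values into variables, negations, and implications can be read off from their shape, and the constructors neg and imp are injective; this is exactly the unique readability discussed above.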
Another example of a recursively defined function on formulas is a truth-assignment. Consider the set {true, false} as the two-element field {1, 0}. Given a function φ from the set of propositional variables into this field, we have a unique function Φ on the set of all formulas that agrees with φ on the variables, that takes ~F to 1 + Φ(F), and that takes (F → G) to 1 + Φ(F) + Φ(F)Φ(G).
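The truth-assignment Φ is then a straightforward recursion on the same encoding (again only a sketch, with the variable assignment φ given as a dictionary):

```python
# The recursively defined truth-assignment: phi gives the values of the
# propositional variables in the two-element field {0, 1}; Phi extends it
# to all formulas, encoded as in the previous sketch.

def Phi(formula, phi):
    if isinstance(formula, str):                # a propositional variable
        return phi[formula] % 2
    if formula[0] == "~":                       # ~F  |->  1 + Phi(F)
        return (1 + Phi(formula[1], phi)) % 2
    f, g = Phi(formula[1], phi), Phi(formula[2], phi)
    return (1 + f + f * g) % 2                  # (F -> G)  |->  1 + Phi(F) + Phi(F)Phi(G)

if __name__ == "__main__":
    phi = {"p": 1, "q": 0}
    axiom1 = ("->", "p", ("->", "q", "p"))      # an instance of (F -> (G -> F))
    print(Phi(axiom1, phi))                     # 1, as every theorem must be
    print(Phi(("->", "p", "q"), phi))           # 0: "true implies false" is false
```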
By induction, every theorem takes the value 1 under every truth-assignment. But the truth-assignment is not recursively defined on the set of theorems: it is recursively defined on the set of formulas. Since the rule of detachment is not injective, we do not automatically have recursively defined functions on the set of theorems.
As far as I know, we do not have an efficient algorithm to determine whether a given formula is indeed a theorem: this lack of an algorithm would appear to be connected to the non-injectivity of the rule of detachment. A formula carries within itself the method of its construction; but a theorem does not carry within itself its proof, and indeed it will have infinitely many proofs.
http://math.stackexchange.com/questions/85035/cutting-surface-with-boundary
# Cutting Surface with Boundary
This is a very basic question:
I don't see the following:
If I cut a surface with boundary along non-contractible cycles into components of genus zero, how can those components have an unbounded number of boundary cycles?
Thank you
-
How are you counting "boundary cycles" -- are these elements of some chain complex, or perhaps some kind of equivalence classes? Your question is too imprecise to have an answer. I suppose if your surface had an "unbounded number of boundary cycles" before you cut, it would have the same number afterwards. Does "unbounded" mean infinitely many? – Ryan Budney Nov 23 '11 at 23:30
I shouldn't read papers in areas in which I don't understand much: I would guess that unbounded means that it can be arbitrarily larger than the size of the input (it's an algorithmic paper, btw) – stefan Nov 24 '11 at 0:27
What is the input? Perhaps include a link to the paper you're reading. – Ryan Budney Nov 24 '11 at 2:01
– stefan Nov 24 '11 at 2:15
What part of the paper are you referring to? Could you perhaps give a page and a line number? – Ryan Budney Nov 24 '11 at 3:12
## 1 Answer
Okay, I see what you're referring to now. The authors are addressing the issue that if you only cut along non-contractible curves, you could be cutting off annuli. This happens if your curve is "parallel" to a boundary curve. So if you repeatedly cut along curves that are parallel to the boundary, you could create an endless list of annuli without ever simplifying the original surface.
If $S_{g,b}$ is a connected surface of genus $g$ with $b$ boundary components, and you cut it along a curve $C$, there are two possibilities:
• $S_{g,b}$ is cut into a connected surface of the form $S_{g-1,b+2}$
• $S_{g,b}$ is cut into two connected surfaces of the form $S_{g_1,b_1}$ and $S_{g_2,b_2}$ where $g_1+g_2=g$, and $b_1+b_2 = b+2$.
In particular, if you cut along a boundary-parallel curve you cut $S_{g,b}$ into an $S_{0,2}$ and an $S_{g,b}$.
The first case is when the curve $C$ is non-separating; the second case is when $C$ is separating.
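One way to verify this bookkeeping is with the Euler characteristic $\chi(S_{g,b}) = 2 - 2g - b$, which is unchanged by cutting along a circle (a circle has $\chi = 0$). In the non-separating case, $2 - 2g - b = 2 - 2g' - (b+2)$ forces $g' = g - 1$; in the separating case, $2 - 2g - b = (2 - 2g_1 - b_1) + (2 - 2g_2 - b_2)$ together with $b_1 + b_2 = b + 2$ forces $g_1 + g_2 = g$.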
-
Thank you. Do you understand why their algorithm takes both an abstract topological surface M and an edge-weighted cellularly embedded graph G? (At the top of page 3.) Apart from the fact that I don't know what an abstract topological surface is, wouldn't G be enough? – stefan Nov 24 '11 at 18:43
The algorithm is for finding shortest essential curves in surfaces if I'm not mistaken. So you need the information of what the surface is, so that you can talk about how long a curve is. – Ryan Budney Nov 24 '11 at 18:53
Oh, that's interesting. This was exactly my concern about how to determine the length of curves. But then I thought that this information is encoded in the weights of the graph. You're saying that this information is encoded in the abstract surface? Why do I need the edge weights? Could you provide me with a reference or a short explanation? Thank you so much – stefan Nov 24 '11 at 19:13
This paper uses conventions for surfaces that are fairly common in part of computer science. Basically you view the surface as a graph $G$ with certain 2-dimensional cells attached (topologist's terminology). So one combinatorial representation of the surface would be as a graph with a cyclic ordering of the edges incident to each vertex. In this paper they seem to like to use a hybrid description, keeping a picture of the actual (topological) surface in mind but also keeping the graph theorist's version in mind. – Ryan Budney Nov 24 '11 at 19:37
OK, thanks. The first part of your answer I understand: you refer to the 2-cell embedded graph encoded as a rotation system, right? Do you know why it is important that this graph is weighted? But what about the second part: how do you store the actual surface in a data structure? And I think later in the paper this is not used anymore. – stefan Nov 24 '11 at 19:53