http://mathoverflow.net/questions/75127/automorphisms-of-a-matrix-in-smith-normal-form
## Automorphisms of a matrix in Smith normal form?
Added: Amritanshu Prasad's answer makes it clear that I am really asking for a description of the group of integer unimodular matrices $P$ such that $D^{-1}PD$ is also integer. These matrices are characterized by the property that the elements below the diagonal satisfy certain divisibility properties, namely that for $j\lt i$, the element $p_{ij}$ is divisible by $d_i/d_j$. (The latter is integer by assumption on $D$.) My question was whether there is a simple set of generators for this group.
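For instance, with the hypothetical choice $D=\mathrm{diag}(1,2,6)$ (my example, not from the original post), the condition asks that $2\mid p_{21}$, $6\mid p_{31}$ and $3\mid p_{32}$; a quick Mathematica check that such a $P$ conjugates correctly:
````
d = DiagonalMatrix[{1, 2, 6}];
p = {{1, 0, 0}, {2, 1, 0}, {6, 3, 1}};  (* p21, p31, p32 chosen divisible by d2/d1, d3/d1, d3/d2 *)
{Det[p], Inverse[d].p.d}
(* {1, {{1, 0, 0}, {1, 1, 0}, {1, 1, 1}}} -- unimodular, and D^-1 P D is again integer *)
````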
Amritanshu Prasad's answer provides a nice set of generators when the elements of each row, rather than being integers, are taken modulo a certain number. I will have to think about whether this helps with the problem that motivated the question originally. Meanwhile, I am still interested in finding out what is known about this question in the integer case.
Original post: Let $M$ be a nonsingular integer $n\times n$ matrix with invariant factors $d_1,\ldots,d_n$ satisfying $d_j\mid d_{j+1}$ for $1\le j\lt n$ and $d_j\gt0$ for $1\le j\le n$. Let $D=\mathrm{diag}(d_1,\ldots,d_n)$ be the Smith normal form of $M$. There is a pair of integer unimodular matrices $(P_1,Q_1)$ such that $P_1MQ_1=D$, but $(P_1,Q_1)$ is not uniquely determined. I am trying to understand this nonuniqueness.
Suppose that $P_1MQ_1=P_2MQ_2=D$. Define $P$ and $Q$ to be the integer unimodular matrices that satisfy $P_2=PP_1$ and $Q_2=Q_1Q$. Then $PDQ=D$. We call such a pair $(P,Q)$ an automorphism of $D$, and are interested in characterizing the group consisting of all automorphisms of $D$.
Define the elementary matrices $S_{ij}$, $N_i$, $L_{ij}(a)$ as follows:
1. $S_{ij}M$ interchanges rows $i$ and $j$ of $M$;
2. $N_iM$ multiplies row $i$ of $M$ by $-1$;
3. $L_{ij}(a)M$ adds $a$ times row $j$ of $M$ to row $i$ of $M$, where $a$ is a nonzero integer.
With these definitions, some elementary pairs that satisfy $PDQ=D$ are:
1. $(P,Q)=(S_{ij},S_{ij})$ for any $1\le i\lt j\le n$ such that $d_i=d_j$,
2. $(P,Q)=(N_i,N_i)$ for any $1\le i\le n$,
3. $(P,Q)=(L_{ij}(1),L_{ij}(-d_j/d_i))$ for any $1\le i\lt j\le n$,
4. $(P,Q)=(L_{ij}(-d_i/d_j),L_{ij}(1))$ for any $1\le j\lt i\le n$.
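These four pairs are easy to verify by machine. Here is a minimal Mathematica sketch of types 3 and 4; the choice $D=\mathrm{diag}(1,2,6)$ and the indices are mine, purely for illustration:
````
d = DiagonalMatrix[{1, 2, 6}];
l[i_, j_, a_] := ReplacePart[IdentityMatrix[3], {i, j} -> a]  (* the elementary matrix L_ij(a) *)
{p3, q3} = {l[1, 2, 1], l[1, 2, -2]};   (* type 3 with (i, j) = (1, 2): (L_12(1), L_12(-d_2/d_1)) *)
{p4, q4} = {l[2, 1, -2], l[2, 1, 1]};   (* type 4 with (i, j) = (2, 1): (L_21(-d_2/d_1), L_21(1)) *)
{p3.d.q3 == d, p4.d.q4 == d}
(* {True, True} *)
````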
My question is: Do these four types of pair generate the entire automorphism group?
I initially thought that this would be a straightforward question to answer, and that the answer would be 'yes', but now I am fairly sure it is not so simple. For example, consider the smallest nontrivial form, $D=\begin{bmatrix}1 & 0\\ 0 & r\end{bmatrix}$ with $r>1$. Writing $P=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$ with $\lvert ad-bc\rvert=1$, the relation $Q=D^{-1}P^{-1}D$ implies that $Q=(ad-bc)^{-1}\begin{bmatrix}d & -br\\ -c/r & a\end{bmatrix}$, which is integer when $r\mid c$. Hence the most general pair is $(P,Q)=\left(\begin{bmatrix}a & b\\ rc' & d\end{bmatrix},(ad-rbc')^{-1}\begin{bmatrix}d & -rb\\ -c' & a\end{bmatrix}\right)$ with $\lvert ad-rbc'\rvert=1$. For the subgroup satisfying $ad-rbc'=1$, we therefore require that $P$ be an element of the congruence subgroup $\Gamma_0(r)$ and that $Q=\rho(Q)$ where $\rho:\Gamma_0(r)\rightarrow\Gamma^0(r)$ is the map $\begin{bmatrix}a & b\\c & d\end{bmatrix}\mapsto\begin{bmatrix}d & -rb\\-c/r & a\end{bmatrix}$. We obtain the full automorphism group by including, in addition to the generators $(\gamma,\rho(\gamma))$ where $\gamma$ is a generator of $\Gamma_0(r)$, the generators $(N_1,N_1)$ and $(N_2,N_2)$.
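For concreteness, here is a check of this parametrization at $r=5$, with an arbitrarily chosen element of $\Gamma_0(5)$ (the particular matrix is mine):
````
r = 5; d = DiagonalMatrix[{1, r}];
p = {{2, 1}, {r, 3}};          (* det = 1, lower-left entry divisible by r, so p is in Gamma_0(5) *)
q = Inverse[d].Inverse[p].d;   (* this should be rho(p) = {{3, -5}, {-1, 2}} *)
{q, p.d.q == d}
(* {{{3, -5}, {-1, 2}}, True} *)
````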
The problem with this is that the set of generators 1–4 appears not to be adequate for the case $r=5$, for example. Andy Putman's question http://mathoverflow.net/questions/2757/generators-for-congruence-subgroups-of-sl-2 seems relevant in this regard, although it is concerned with generators of $\Gamma(r)$ rather than $\Gamma_0(r)$. The Grosswald and Frasch references in Ignat Soroko's answer to that question provide a set of generators that freely generates $\Gamma(p)$ for $p$ an odd prime; this set contains many generators in addition to 1–4, and the number of generators grows as $p^3$.
It would therefore appear that, if the picture for $\Gamma_0(r)$ is similar to that of $\Gamma(r)$, and if Frasch's requirement of free generation is not the origin of all this complication, then the answer to my question is no, at least in the case where $n=2$ and $r$ is a prime greater than 3. On the other hand, a remark in Andy Putman's question suggests to me that the situation may be considerably simpler for $n>2$, and that there's a chance that the generators 1–4 suffice. I am not, however, sure that congruence subgroups are the relevant concept for $n>2$. Also, for $n=2$, I wonder whether adding the single extra generator $L_{12}(1)$ to Frasch/Grosswald's set would generate all $P$?
This leads to the following additional questions:
1. Is the above understanding of $n=2$ correct? If so, what is the smallest set of generators one can write down?
2. Do 1–4 generate the automorphism group for $n>2$? If so, how and where is this proved?
Do you know what the automorphisms look like for (e.g. when n=2) the case of rational r and rational P and Q? Gerhard "Ask Me About System Design" Paseman, 2011.09.10 – Gerhard Paseman Sep 11 2011 at 5:02
I believe that questions like this are considerably more straightforward to answer over a field. We can let $P$ be any element of $\mathrm{GL}_n(\mathbf{Q})$; then $Q$, also in $\mathrm{GL}_n(\mathbf{Q})$, is determined by $Q=D^{-1}P^{-1}D$. – Will Orrick Sep 12 2011 at 14:15
## 1 Answer
If you have $P$, I think you can recover $Q$ as $(D^{-1}PD)^{-1}$. Therefore, you are looking for invertible integer matrices $P$ such that $D^{-1}PD$ is also an invertible integer matrix (i.e., $P\in GL_n(\mathbf Z)\cap D GL_n(\mathbf Z) D^{-1}$).
Going modulo the subgroup consisting of $I+T$, where $T$ is an endomorphism of $\mathbf Z^n$ such that $T(\mathbf Z^n)\subset D\mathbf Z^n$, you get the automorphism group of the finite abelian group $A=\mathbf Z/d_1\mathbf Z\times\dotsb\times\mathbf Z/d_n\mathbf Z$. This group is a product of the automorphism groups of the primary components of $A$.
A $p$-primary component of $A$ is of the form $\mathbf Z/p^{\lambda_1}\mathbf Z\times\dotsb\times\mathbf Z/p^{\lambda_n}\mathbf Z$. This group is generated by the Birkhoff moves (see Subgroups of Finite Abelian Groups by Garrett Birkhoff in Proceedings of the London Math. Society, 1935):
1. Scaling any row/column by a $p$-free integer
2. Adding $\alpha$ times the $i$th row (column) to the $j$th row (column) so long as $p^{\max\{0,\lambda_i-\lambda_j\}}$ divides $\alpha$.
3. Interchanging rows or columns with the same invariant factors.
Which group is meant by "This is actually $\mathrm{Aut}(A)$"? If $D=\lambda \cdot I$ then $GL_n(\mathbb{Z}) \cap DGL_n(\mathbb{Z})D^{-1} = GL_n(\mathbb{Z})$ surely isn't $\mathrm{Aut}(A)$. – Ralph Sep 11 2011 at 12:09
One has a map from the group we seek to the automorphism group of $A$, whose kernel is the group consisting of the maps that can be written as the identity plus a homomorphism from ${\mathbb Z}^n$ to $D{\mathbb Z}^n$. – Wilberd van der Kallen Sep 11 2011 at 13:29
You are absolutely right. – Amritanshu Prasad Sep 11 2011 at 23:20
@Amritanshu Prasad: Thank you (and thanks to the other commenters) for your answer. While I continue to think about what you have said, I will ask what I'm sure is a naive question. A $p$-free integer is always a unit in $\mathbf{Z}/p^{\lambda}\mathbf{Z}$, but not in $\mathbf{Z}$. Therefore it would seem that move 1 is not generally invertible in the context of the original question. I confess that I have not yet fully understood certain aspects of your answer, so perhaps this is taken care of somehow? – Will Orrick Sep 12 2011 at 21:22
The moves work with matrices where the $i$th row is taken modulo $p^{\lambda_i}$. So, you only need to invert modulo some power of $p$. The Birkhoff moves are not moves on the matrices per se (so maybe my answer does not answer your question), but rather on matrices where the entries are taken modulo some congruences. – Amritanshu Prasad Sep 16 2011 at 10:10
http://mathoverflow.net/questions/90731/commutator-baker-campbell-hausdorff-formula
## Commutator Baker-Campbell-Hausdorff formula
Consider the Baker-Campbell-Hausdorff formula $\Phi(X,Y)\in\mathbb{Q}\langle\langle X,Y\rangle\rangle$ in non-commutative variables. Define $X*Y:=\Phi(X,Y)$ and $[X,Y]=(-X)*(-Y)*X*Y$, and then as usual define for any vector
$\mathbf{e}=(e_1,\ldots,e_r)\in\mathbb{N}^r$ the repeated commutator
$$[X,Y]_{\mathbf{e}}:=[X,\underbrace{Y,\ldots,Y}_{e_1},\underbrace{X,\ldots,X}_{e_2},\ldots]$$ (here $[X_1,\ldots,X_r]$ is defined as $[[X_1,\ldots,X_{r-1}],X_r]$).
I think that there is an analogue of the BCH formula expressing $XY-YX$ in terms of the commutators $[X,Y]_\mathbf{e}$. That is, if for $\mathbf{e}=(e_1,\ldots,e_r)$ we define $<\mathbf{e}>=e_1+\ldots+e_r$, then there exist rational numbers $t_\mathbf{e}$ for all $\mathbf{e}\in\mathbb{N}^r$ and for all $r$ such that if we put $v_n(X,Y)=\sum_{<\mathbf{e}>=n}t_\mathbf{e}[X,Y]_\mathbf{e}$ then
$$XY-YX=\sum_{n\in\mathbb{N}}v_n(X,Y)$$.
Use `\langle x\rangle` instead of `<x>` to obtain $\langle x\rangle$ instead of $<x>$. The spacing of `<` is quite different because it is interpreted as a relation. – Mariano Suárez-Alvarez Mar 9 2012 at 17:05
1. Just to make sure: you mean that you define $[X,Y]$ as $\log(e^{-X}e^{-Y}e^Xe^Y)$, right? That $(-X)*(-Y)*X*Y$, though formally the same (?), keeps confusing me. 2. Can you prove your formula, or you expect it to be true? If the latter, did you check it up to some reasonable order, or it's just a feeling? – Vladimir Dotsenko Mar 9 2012 at 17:06
(Similarly, you can write `\mathbb Q\langle\!\langle X,Y\rangle\!\rangle` which gives $\mathbb Q\langle\!\langle X,Y\rangle\!\rangle$; `\newcommand` is your friend :) ) – Mariano Suárez-Alvarez Mar 9 2012 at 17:08
To add to Vladimir Dotsenko's comment, do I understand correctly that on the left hand side of your displayed equation, $[X,Y]$ means $\log(e^{-X} e^{-Y} e^X e^Y)$ but, on the right hand side, it is the standard commutator $XY-YX$? – David Speyer Mar 9 2012 at 17:17
Yes, I used this notation because when we work in a nilpotent Lie algebra then $*$ will be the group operation which makes it into a group. This question follows from Corollary 3, Chap 6, of Segal's book Polycyclic Groups, where it is proved that if $x_1,\ldots,x_s\in Tr_1(n,k)$ then $(\log x_1,\ldots,\log x_s)=\log [x_1,\ldots,x_s]+\sum_i s_i \log v_i$ where each $v_i$ is a repeated group commutator of length at least $s+1$ in $x_1,\ldots,x_s$ and each $s_i$ is a universal constant depending only on $n$ (here $Tr_1(n,k)$ is the unipotent group of upper triangular matrices with 1s on the diagonal). – Diego Sulca Mar 9 2012 at 17:30
http://mathhelpforum.com/discrete-math/159407-recurrence-fibonacci-sequence.html
1. ## Recurrence and Fibonacci sequence
Hi. I have this problem that I'm stuck on.
Define a sequence $(G_n)_{n \ge0}$ by the recurrence $G_n = G_{n-1}+G_{n-2}$ for $n \ge2$, subject to the initial values $G_0=2$, $G_1=1$. Let $(F_n)_{n \ge0}$ denote the Fibonacci sequence.
a) Write out explicitly $(G_n)_{n=0,...,10}$
I think this is easy.
{2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123}
b) Prove that $G_n=F_{n-1}+F_{n+1}$, $n\ge1$
How do I go about it?
I know $F_0=0$ and $F_1=1$ and $F_2=1$ and $F_3=2$
So if $n=2$, $G_2=F_1+F_3=1+2=3$, so it looks like it's true. But how do I prove it explicitly?
c) Let $\tau= \frac{1+ \sqrt{5}}{2}$ denote the golden ratio. Show $G_n= \tau^n+(-\tau)^{-n}$
I'm just a bit lost in this one...
d) The Fibonacci sequence counts pavings by monomers and dimers of an $n$-board. Conjecture what sort of pavings the sequence $(G_n)_{n \ge1}$ counts? Draw the objects corresponding to $G_3$.
$G_3=G_2+G_1$
$G_3=3+1=4$
But I'm not quite sure what it counts...
Any help would be great. Thank you so much!!
2. In...
http://www.mathhelpforum.com/math-he...ce-154056.html
... it has been demonstrated that the general solution of the difference equation...
$x_{n} = x_{n-1} + x_{n-2}$ (1)
... is...
$\displaystyle x_{n}= c_{1} (\frac{1-\sqrt{5}}{2})^{n} + c_{2} (\frac{1+\sqrt{5}}{2})^{n}$ (2)
If You start by setting $x_{0}= 0$ and $x_{1}=1$ You obtain the 'Fibonacci sequence'. If You set $x_{0}=2$ and $x_{1}=1$ You obtain the 'G sequence'...
Kind regards
$\chi$ $\sigma$
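Both identities are easy to confirm with a short Mathematica check, using the built-in `Fibonacci` and `GoldenRatio` (a sketch; the ranges are arbitrary):
````
g[0] = 2; g[1] = 1;
g[n_] := g[n] = g[n - 1] + g[n - 2]     (* memoized recurrence *)
Table[g[n], {n, 0, 10}]
(* {2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123} *)
And @@ Table[g[n] == Fibonacci[n - 1] + Fibonacci[n + 1], {n, 1, 30}]                 (* part b *)
(* True *)
And @@ Table[FullSimplify[g[n] == GoldenRatio^n + (-GoldenRatio)^(-n)], {n, 0, 15}]   (* part c *)
(* True *)
````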
http://mathematica.stackexchange.com/questions/2012/computing-eigenvectors-and-eigenvalues/2040
# Computing eigenvectors and eigenvalues
I have a (non-sparse) $9 \times 9$ matrix and I wish to obtain its eigenvalues and eigenvectors. Of course, the eigenvalues can be quite a pain as we will probably not be able to find the zeros of its characteristic polynomial.
Actually, what I really want is find the eigenvector belonging to the largest eigenvalue. Would the following code give me this: `Eigenvectors[matrix, 1]`?
How does Mathematica do this? Is there some kind of algorithm, of whose existence I am unaware, that computes the eigenvector belonging to the largest eigenvalue?
Can the largest eigenvalue be computed numerically (largest in modulus, or even discarding all the complex ones), and then the eigenvectors pseudo-analytically using `Eigenvectors`?
Added: The matrix is symbolic.
## 4 Answers
The function to obtain both the eigenvalues and the eigenvectors is `Eigensystem`. Use it as `{eigVals,eigVecs} = Eigensystem[matrix]`.
If the matrix is symbolic, then the output (if you wait long enough for it to churn out an answer!) will only be a list of general solutions for the roots of a 9th-order polynomial with unknown coefficients, and there are no general closed-form solutions for polynomials of order greater than 4. The results will not have any particular ordering.
On the other hand, a `9x9` numerical matrix is a piece of cake (even if you were to solve the characteristic polynomial), so you should have no problems.
To obtain the largest (first) eigenvalue and the corresponding eigenvector, use the optional second argument as `Eigensystem[matrix, 1]`. Here's an example (with a smaller matrix to keep the output small):
````mat = RandomInteger[{0, 10}, {3, 3}];
{eigVals, eigVecs} = Eigensystem[mat] // N
(* Out[1]= {{21.4725, 6.39644, 0.131054}, {{1.3448, 0.904702, 1.},
{0.547971, -1.99577, 1.}, {-0.935874, -0.127319, 1.}}} *)
{eigVal1, eigVec1} = Eigensystem[mat, 1] // N
(* Out[2]= {{21.4725}, {{1.3448, 0.904702, 1.}}} *)
````
Well, the matrix is not numerical but largely symbolic. Mathematica is unable to obtain the eigenvalues except if I set a few terms to $0$. If I ask Mathematica to output the characteristic polynomial it claims it consists of more than 50k terms! – Jonas Teuwen Feb 19 '12 at 18:41
Thanks! My matrix is symbolic but I do have numerical values for the parameters and hence I have a numerical matrix. How does Mathematica find eigenvalues for a $9 \times 9$ matrix? How can it solve the characteristic polynomial? Anyway, I wanted to have symbolic expressions for the eigenvalues as I wanted to let one parameter vary and plot the result. Would the best way now be (using Mathematica) that for a few data points I compute the eigenvalues and make a plot of that? – Jonas Teuwen Feb 19 '12 at 18:59
@JonasTeuwen What is the structure of your matrix? Which of the 81 elements is your free parameter (and the rest numerical)? I'm afraid, I don't know how Mathematica calculates the eigenvalues symbolically. – rm -rf♦ Feb 19 '12 at 19:05
@JonasTeuwen Is the only reason you want a symbolic solution that you want to plot it? If so, use `With[{mat = matrix}, f[x_?NumericQ] := Eigenvalues[mat, 1]]`, then `Plot[f[x], {x, xmin, xmax}]` (assuming that `x` is the parameter according to which you want to plot). Don't compute it for selected data points manually, just let `Plot` work its adaptive sampling magic. Also note that which root is the largest might easily depend on the parameter, so the plot could have a discontinuous derivative. – Szabolcs Feb 19 '12 at 19:17
@JonasTeuwen That `With` from my previous comment won't work for injecting the value (localization kicks in), but just copy the matrix expression manually into `Eigenvalues` when you define the function. – Szabolcs Feb 19 '12 at 19:45
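A minimal sketch of that plotting approach, with a hypothetical $2\times2$ symmetric matrix standing in for the real $9\times9$ one (here `x` is the free parameter):
````
f[x_?NumericQ] := First@Eigenvalues[{{1., x}, {x, 2.}}, 1]  (* largest eigenvalue in magnitude *)
Plot[f[x], {x, 0, 3}]
````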
If the matrix is completely numerical (not symbolic), then `Eigenvalues` will return eigenvalues by descending magnitude. Therefore `Eigenvalues[matrix, 1]` will always give the largest eigenvalue and `Eigenvectors[matrix, 1]` will give the corresponding eigenvector. As R.M. said, both can be obtained at the same time using `Eigensystem`.
There are different numerical methods for obtaining the eigenvector that corresponds to the largest eigenvalue (by magnitude), the most common being power iteration.
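For illustration, a bare-bones power iteration in Mathematica; this is only a sketch of the idea, not a claim about how `Eigenvalues` is implemented internally, and it assumes a dominant real eigenvalue (e.g. a symmetric matrix):
````
powerIterate[m_, steps_: 100] := Module[{v = Normalize[RandomReal[{-1, 1}, Length[m]]]},
  Do[v = Normalize[m.v], {steps}];  (* repeated multiplication amplifies the dominant direction *)
  {v.m.v, v}]                       (* Rayleigh quotient of the unit vector estimates the eigenvalue *)
powerIterate[{{2., 1.}, {1., 3.}}]
````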
If the matrix is symbolic, the eigenvalues/vectors are not ordered (as far as I know).
If the matrix is symbolic, the eigenvectors are unordered, correct. – kkm Feb 20 '12 at 9:42
Re the question as to how Mathematica finds eigenvalues: in the Documentation Center look up "implementation"; you'll find a link to the page `tutorial/SomeNotesOnInternalImplementation`. And there, in the section Exact Numerical Linear Algebra you'll find the explanation, "`Eigenvalues` works by interpolating the characteristic polynomial."
`Eigensystem` will give you the eigenvalues and eigenvectors, the rest is a matter of sorting and extracting that data.
````(* Calculates the eigenvector with the largest eigenvalue *)
largestEV[m_] := Sort[
Transpose@Eigensystem[m],
#1[[1]] > #2[[1]] &  (* sort descending, so the first pair carries the largest eigenvalue *)
][[1, 2]]
m = {{1, 1, 0}, {0, 2, 0}, {0, 2, 3}}
largestEV[m]
````
```` {0, 0, 1}
````
Why complicate when you can use the second argument for `Eigensystem`? See my answer... – rm -rf♦ Feb 19 '12 at 18:46
This generalizes more easily; suppose you wanted the eigenvector closest to the average eigenvalue, etc. Also, I'm not sure how the standard ordering works for complex eigenvalues. – David Feb 19 '12 at 18:49
For complex eigenvalues, the ordering is decreasing `Abs[eigenvalues]` – rm -rf♦ Feb 19 '12 at 18:56
Why would it generalize easier? Eigensystem[mat, All] – ruebenko Feb 20 '12 at 8:55
@ruebenko That's what I did. – David Feb 20 '12 at 17:54
http://nrich.maths.org/1830
### At a Glance
The area of a regular pentagon looks about twice as big as the pentangle star drawn within it. Is it?
### Pent
The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram. Prove that the quadrilateral shown in red is a rhombus.
### Pentakite
ABCDE is a regular pentagon of side length one unit. BC produced meets ED produced at F. Show that triangle CDF is congruent to triangle EDB. Find the length of BE.
# Dodecawhat
##### Stage: 4 Challenge Level:
We are going to make a pentagon. By making twelve such pentagons you can construct a dodecahedron like the one in the picture.
For each pentagon, you will need a piece of A4 paper and then follow the instructions below.
Fold the paper in half both ways to find the centre O. Fold along the red line so A touches O. Fold C to O similarly. Fold B and D to O. Next fold along PQ. As the two halves come together, tuck the flap from corner D behind the flap from corner B to make 'pockets' (see the diagrams below).
Fold R and S up to the centre line EO, so that they meet to form a straight line and make a pentagon.
If you make 12 pentagons in this way and assemble them, using your 'flaps' and 'pockets', you can make a dodecahedron.
#### Now for the problem:
If you use A4 paper for this construction and try to make regular pentagons there is a small error in the angle at E. Find this error and find the dimensions of the paper which you would need to use to get an accurate regular pentagon and hence an accurate regular dodecahedron.
A4 paper has sides in the ratio $\sqrt2$ to $1$. Prove that when you fold A4 paper in half (to get A5) or in quarters (to get A6) the rectangles you get have side lengths in the same ratio.
Extra Resources:
1) You can construct other platonic solids using paper and this article explains how.
2) Have a look at the October 2000 Article titled Classifying Solids using Angle Deficiency
3) You can download a demo version of Stella, a computer program which lets you create and view polyhedra on the screen, then print out the nets required to build your own models out of paper. Small Stella and Great Stella are available from the Stella Website.
4) Alternatively, print out the models from this pdf available at the British Crystallographic Association's Website
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://math.stackexchange.com/questions/157701/direct-product-of-center-of-group
# direct product of center of group
Let $Z(G)$ denote the center of a group $G$, and let $J_n=Z(G)\times\dots\times Z(G)$ ($n$ times). Is it true that, as a subset of the external direct product $G\times\dots\times G$, $J_n$ is a subgroup? A normal subgroup? Is it isomorphic to $Z(G)\times\dots\times Z(G)$ ($n-1$ times)? I know $Z(G)$ is a subgroup, so I hope $J_n$ will be one too, but I am not sure about the other options. Thank you for your help. About the isomorphism, I think the projection map will work?
## 1 Answer
Let $G$ and $H$ be groups, let $A<G$ and $B<H$. The group operation in $G\times H$ is defined to be $$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2).$$ Clearly, the identity of the operation is $(e_G,e_H)$ where $e_G$ is the identity of $G$ and $e_H$ is the identity of $H$, and for any $(g,h)\in G\times H$, its inverse is $(g^{-1},h^{-1})$.
• Because $A$ is a subgroup of $G$ and $B$ is a subgroup of $H$, we have that $e_G\in A$ and $e_H\in B$, so that $(e_G,e_H)\in A\times B$.
• For any $(a_1,b_1),(a_2,b_2)\in A\times B$, we have that $a_1,a_2\in A$ and $b_1,b_2\in B$, so that $a_1a_2\in A$ and $b_1b_2\in B$ because $A$ and $B$ are subgroups of $G$ and $H$ respectively, so that $(a_1,b_1)(a_2,b_2)=(a_1a_2,b_1b_2)\in A\times B$.
• For any $(a,b)\in A\times B$, we have that $a\in A$ and $b\in B$, so that $a^{-1}\in A$ and $b^{-1}\in B$ because $A$ and $B$ are subgroups of $G$ and $H$ respectively, so that $(a^{-1},b^{-1})\in A\times B$.
Thus, $A\times B$ is a subgroup of $G\times H$ for any subgroups $A<G$ and $B< H$. In fact, for any family of groups and subgroups (not just finite ones), the direct product of the subgroups is a subgroup of the direct product of the groups - just modify the above argument.
Now, we consider the case of normal subgroups. Let $N\triangleleft G$, $K\triangleleft H$, so that for any $g\in G$ and $h\in H$, we have $gNg^{-1}=N$ and $hKh^{-1}=K$. For any $(g,h)\in G\times H$, we have that $$(g,h)(N\times K)(g,h)^{-1}=(g,h)(N\times K)(g^{-1},h^{-1})=\{(g,h)(a,b)(g^{-1},h^{-1})\mid (a,b)\in N\times K\}=\{(gag^{-1},hbh^{-1})\mid (a,b)\in N\times K\}\subseteq N\times K$$ and by symmetry we get $(g,h)(N\times K)(g,h)^{-1}=N\times K$. Thus $(N\times K)\triangleleft (G\times H)$.
In fact, for any family of groups and normal subgroups (not just finite ones), the direct product of the normal subgroups is a normal subgroup of the direct product of the groups - just modify the above argument. Or, you can use induction to show that this is true for a finite collection of groups $G_1,\ldots,G_n$ and normal subgroups $N_1\triangleleft G_1,\ldots,N_n\triangleleft G_n$. Letting $G_1=G_2=\cdots=G_n=G$ and $N_1=N_2=\cdots=N_n=Z(G)$, you have that $J_n$ is a normal subgroup of $G\times\cdots\times G$.
It certainly need not be the case that $J_n=\underbrace{Z(G)\times\cdots\times Z(G)}_{n\text{ times}}$ is isomorphic to $\underbrace{Z(G)\times\cdots\times Z(G)}_{n-1\text{ times}}$. They don't even necessarily have the same cardinality. For example, letting $G=C_2$ be the cyclic group of order 2, so that $|G|=|Z(G)|=2$, then the former has $2^n$ elements and the latter has $2^{n-1}$ elements.
Thank you very much for such a nicely written answer. – Taxi Driver Jun 13 '12 at 9:20
@Zev, don't you want to add that $Z(G \times \cdots \times G) = Z(G) \times \cdots \times Z(G)$? – Nicky Hekster Jun 13 '12 at 19:44
http://math.stackexchange.com/questions/143583/is-there-a-natural-way-to-multiply-measures/143642
# Is there a natural way to multiply measures?
Given two measures $\mu$ and $\nu$ on some measurable space $X$, is there a way to multiply them to get $\mu \cdot \nu$, another measure on $X$ (and not $X \times X$, as for the usual notion of product measure)?
Here's a case where I know how to give a definition: if both $\mu$ and $\nu$ are absolutely continuous with respect to some common measure $\lambda$, then we can take their Radon–Nikodym derivatives with respect to that measure to obtain two functions $f_\mu$ and $f_\nu$, so that $\mu = \int f_\mu d\lambda$, $\nu = \int f_\nu d\lambda$ which we can then multiply, to give us $\mu \cdot \nu = \int (f_\mu \cdot f_\nu) d\lambda$.
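On a finite discrete space this recipe is just a pointwise product of densities. A small sketch (the three-point space and all the weights are toy choices of mine), which also exhibits the dependence on the reference measure $\lambda$ raised in the comments below:
````
(* total mass of the proposed product: integral of f_mu f_nu with respect to lambda *)
prod[mu_, nu_, lambda_] := Total[(mu/lambda) (nu/lambda) lambda]
mu = {1/2, 1/4, 1/4}; nu = {1/3, 1/3, 1/3};
{prod[mu, nu, {1, 1, 1}], prod[mu, nu, 2 {1, 1, 1}]}
(* {1/3, 1/6} -- replacing lambda by 2 lambda halves the answer, so the product depends on lambda *)
````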
This came up in the context of Monte Carlo integration, and in particular Monte Carlo path tracing. In this case, the measure space could be, say, the set of angles at which an incoming light ray bouncing off an object could be reflected, $\mu$ would be a probability measure describing the probability of outgoing angles, and $\nu$ would be a measure describing the light sources in the scene visible from that point of reflection. The idea of the multiplication $\mu \cdot \nu$ is to produce something which describes the sampling of light sources at that point, depending on the incoming ray (and $\mu$, on top of just $\nu$).
What plays the role of the BRDF in this formalization of path tracing? Is it not a measure on $X\times X$? – Rahul Narain May 10 '12 at 17:45
I'm fixing the point of contact and the incident angle. What is left then is simply a measure on the set of outgoing angles, and is what I'm taking to be the (somewhat generalised notion of the) BRDF. – Will May 10 '12 at 17:50
I don't think your approach works that well. If we replace the measure $\lambda$ by $2\lambda$ we would get the derivatives $1/2 f_\mu$ and $1/2f_\nu$ instead. Now $\int (1/2 f_\mu)(1/2 f_\nu)d2\lambda$ equals $1/4\int f_\mu f_\nu d2\lambda$ equals $1/2\int f_\mu f_\nu d\lambda$, so the product depends on the underlying measure. – Michael Greinecker May 10 '12 at 17:51
I guess I don't really understand your approach. If the incident direction is fixed, there is only one light source visible in that direction, and there is no $\nu$. Perhaps you want to multiply the distribution of outgoing directions with that of the light sources, but I don't understand why you would want that. – Rahul Narain May 10 '12 at 18:09
I think the best starting point would be to try to figure out what the product of two finite measures on a finite discrete space should be. There is a conceptual issue that has nothing to do with sophisticated machinery. – Michael Greinecker May 10 '12 at 18:32
## 2 Answers
The case you describe is the general case, since any two measures $\mu$ and $\nu$ are absolutely continuous with respect to $\mu+\nu$. More precisely, there exists $h_{\mu,\nu}$ with $0\leqslant h_{\mu,\nu}\leqslant1$ everywhere such that $\mu=h_{\mu,\nu}(\mu+\nu)$ and $\nu=(1-h_{\mu,\nu})(\mu+\nu)$. Thus one can define an intrinsic product $\mu\odot\nu$ by $$\mu\odot\nu=h_{\mu,\nu}(1-h_{\mu,\nu})(\mu+\nu).$$ When $\mu$ and $\nu$ are absolutely continuous with respect to the Lebesgue measure (or any other measure of reference) with densities $f$ and $g$ respectively, then $\mu\odot\nu$ is absolutely continuous with respect to the Lebesgue measure with density $f\odot g$ defined as follows: on $[f+g=0]$, $f\odot g=0$, and, on $[f+g\ne0]$, $$f\odot g=\frac{fg}{f+g}.$$ This product $\odot$ on measures is commutative (good), associative (good?), the total mass of $\mu\odot\nu$ is at most $\frac14$ times the sum of the masses of $\mu$ and $\nu$, in particular the product of two probability measures is not a probability measure (not good?), $\mu\odot\mu=\frac12\mu$ for every $\mu$, and finally $\mu\odot\nu=0$ if and only if $\mu$ and $\nu$ are mutually singular (good?) since $\mu\odot\nu$ is always absolutely continuous with respect to both $\mu$ and $\nu$.
Edit To normalize things, another idea is to consider $\mu\Diamond\nu=2(\mu\odot\nu)$. In terms of densities, this corresponds to a harmonic mean, since $\mu\Diamond\nu$ has density $f\Diamond g$, where $$\frac1{f\Diamond g}=\frac1{2(f\odot g)}=\frac12\left(\frac1f+\frac1g\right).$$ In particular, this new intrinsic product $\Diamond$ is idempotent (good?), commutative (good), and not associative (not good?).
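A discrete sanity check of $\odot$ (the weight vectors are toy choices of mine): the density formula $fg/(f+g)$, the identity $\mu\odot\mu=\frac12\mu$, and vanishing on a mutually singular pair:
````
odot[f_, g_] := MapThread[If[#1 + #2 == 0, 0, #1 #2/(#1 + #2)] &, {f, g}]
f = {1/2, 1/4, 1/4}; g = {0, 1/2, 1/2};
{odot[f, g], odot[f, f] == f/2, odot[{1, 0}, {0, 1}]}
(* {{0, 1/6, 1/6}, True, {0, 0}} *)
````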
Edit A canonical product concerns probability measures and transition kernels. That is, one is given a measured space $(X,\mathcal X,\mu)$, a measurable space $(Y,\mathcal Y)$ and a function $\pi:X\times\mathcal Y\to[0,1]$ such that, for every $x$ in $X$, $\pi(x,\ )$ is a probability measure on $(Y,\mathcal Y)$. Then, under some regularity conditions, the product $\mu\times\pi$ is the unique measure on $(X\times Y,\mathcal X\otimes\mathcal Y)$ such that, for every $A$ in $\mathcal X$ and $B$ in $\mathcal Y$, $$(\mu\times \pi)(A\times B)=\int_A\mu(\mathrm dx)\pi(x,B).$$ In particular, $B\mapsto(\mu\times\pi)(X\times B)$ is a probability measure on $(Y,\mathcal Y)$.
When $\mu$ has density $f$ with respect to a measure $\xi$ and each $\pi(x,\ )$ has density $g(x,\ )$ with respect to a measure $\eta$, $\mu\times\pi$ has density $(x,y)\mapsto f(x)g(x,y)$ with respect to the product measure $\xi\otimes\eta$.
I need to think about my problem more to see if this completely addresses my issues, but this is really as thorough an answer as is possible. Thanks. – Will May 12 '12 at 20:49
@LVK Thanks for (the appreciation underlying the award of) this bounty. – Did Aug 25 '12 at 21:05
No, if you mean to multiply them in the freshman sense. Given two measures $u$ and $v$ on some measurable space $X$, define the "product" $w = u \cdot v$ as $w(E) = u(E) \cdot v(E)$ for every measurable $E \subseteq X$. We would like to see if $w$ is a measure or find a counterexample.
Consider the interval $E = (0,2)$ on $X = \mathbb R$, and let $u$ be the standard length measure and $v$ be the measure of the area under the function $f = |x|$. Suppose $w$ is the "product" of $u$ and $v$, $w = u \cdot v$, as above.
Then, $$w((0,2)) = u((0,2))\cdot v((0,2)) = 2\cdot 2 = 4$$ I can split up the interval $(0,2)$, and a measure stays the same, but: $$w((0,2)) = w((0,1]\cup (1,2)) = w((0,1]) + w((1,2)) = u((0,1])\cdot v((0,1]) +u((1,2))\cdot v((1,2)) = 1\cdot \frac 1 2 + 1 \cdot \frac 3 2 = 2 \not= 4$$
So $w$ is not a measure. Of course, you probably knew that.
http://mathoverflow.net/revisions/24083/list
2 edited tags
1
# reversible Turing machines
Hello. Let T be a Turing machine such that
1) it operates on the alphabet {0,1},
2) its set of states is A,
3) the language it accepts is $L$.
Does there exist a Turing machine S which also operates on the alphabet {0,1}, such that the language it accepts is $L$ (the set of states might be different though) and such that, crucially, S is reversible?
By reversible I mean "the computational paths of S are disjoint". More precisely, the transition table of S gives rise to a map $K_S: \text{Tapes}\times B \to \text{Tapes} \times B$, where Tapes is the subset of the infinite product $\{0,1\}^{\mathbb Z}$ consisting of those sequences which have a finite number of 1's, and B is the set of states of S. S is reversible iff, by definition, $K_S$ is injective on the set $$\bigcup_{i=0}^\infty K_S^{i}(\text{Tapes}\times \{Initial \} ),$$ where $Initial\in B$ is the initial state of S.
If the answer to the above question is "no", then what if we allow S to operate on an alphabet which is larger than {0,1}?
http://math.stackexchange.com/questions/tagged/tiling+number-theory
# Tagged Questions
### Minimum number of integer-sided squares needed to tile an $m$ by $n$ rectangle.
Let $T(m,n)$ for integers $m,n$ be the least number of integer-sided squares needed to tile an $m\times n$ rectangle. Clearly $T(kx,ky)\leq T(x,y)$. Are there integers $x,y,k\gt 1$, such that ...
### Tiling a minimal perimeter region with $n$ unit squares
Suppose I have $n$ identical unit squares and I want to use them all to tile a region with minimal perimeter $p(n)$. For instance I guess $p(n^2)=4n$, by arranging them in an $n\times n$ square. Is ...
http://mathoverflow.net/revisions/58967/list
4 edited body
Can't we base-change $f : X \to Y$ with $Z$ and obtain: $g : f^{-1}(Z) \to Z$? This also is a universal homeomorphism by construction, right? So now we have a universal homeomorphism to a regular scheme, but a regular scheme is weakly normal, see A. Andreotti and E. Bombieri, "Sugli omeomorfismi delle varietà algebriche".
Therefore $g$ is an isomorphism at least as long as $f^{-1}(Z)$ is reduced and the map $g$ is birational.
EDIT: My argument that $f^{-1}(Z)$ was reduced was junk. I shouldn't have tried to do math while on the run. But as long as $f^{-1}(Z)$ is reduced and $g$ is birational, then I think things are ok.
3 added 52 characters in body
Can't we base-change $f : X \to Y$ with $Z$ and obtain: $g : f^{-1}(Z) \to Z$? This also is a universal homeomorphism by construction, right? So now we have a universal homeomorphism to a regular scheme, but a regular scheme is weakly normal, see A. Andreotti and E. Bombieri, "Sugli omeomorfismi delle varietà algebriche".
Therefore $g$ is an isomorphism at least as long as $f^{-1}(Z)$ is reduced and the map $g$ is birational.
EDIT: My argument that $f^{-1}(Z)$ was reduced was junk. I shouldn't have tried to do math while on the run. But as long as $f^{-1}(Z)$ is reduced and $g$ is birational, then I think things are ok.
2 deleted 83 characters in body
Can't we base-change $f : X \to Y$ with $Z$ and obtain: $g : f^{-1}(Z) \to Z$? This also is a universal homeomorphism by construction, right? So now we have a universal homeomorphism to a regular scheme, but a regular scheme is weakly normal, see A. Andreotti and E. Bombieri, "Sugli omeomorfismi delle varietà algebriche".
Therefore $g$ is an isomorphism at least as long as $f^{-1}(Z)$ is reduced.
EDIT: My argument that $f^{-1}(Z)$ was reduced was junk. I shouldn't have tried to do math while on the run. But as long as $f^{-1}(Z)$ is reduced, then I think things are ok.
Post Undeleted by Karl Schwede
Post Deleted by Karl Schwede
1
Can't we base-change $f : X \to Y$ with $Z$ and obtain: $g : f^{-1}(Z) \to Z$? This also is a universal homeomorphism by construction, right? So now we have a universal homeomorphism to a regular scheme, but a regular scheme is weakly normal, see A. Andreotti and E. Bombieri, "Sugli omeomorfismi delle varietà algebriche".
Therefore $g$ is an isomorphism at least as long as $f^{-1}(Z)$ is reduced. But $f^{-1}(Z)$ has to be reduced, because if it isn't reduced this would imply inseparability of $f$ at some (possibly generic) point of $Z$, which contradicts the universal homeomorphism of $f$.
I have to run for a couple hours, but did I misunderstand the question?
http://mathoverflow.net/questions/13609/models-of-zfc-set-theory-getting-started/29693
## Models of ZFC Set Theory - Getting Started
For just any first-order theory: What are the sets I am supposed/allowed to think of when thinking of models as sets (of something + additional structure)?
Provided:
1. I can think of models of any theory (other than set theory) as sets from the (ZFC-based) von Neumann universe.
2. I can think of models of any theory as sets of terms and formulas.
But what are the sets I am supposed/allowed to think of when thinking of models of (ZFC) set theory itself?
Please give a hint for down-voting. – Hans Stricker Feb 1 2010 at 0:54
I do not understand what relation the paragraph starting with 'Provided' has to the other two. – Mariano Suárez-Alvarez Feb 1 2010 at 1:56
I don't understand what issue is driving the question. What is "supposed/allowed to" getting at? Is there some miasma emanating from forcing sets, say? – Charles Stewart Feb 1 2010 at 8:58
I don't understand the question, especially the terminology like "supposed/allowed/think". Moreover, the models of the theory of ZFC are sets with relational structure (in this case, one binary relation interpreted as membership) just the same as the models of any first order theory. – Pete L. Clark Feb 1 2010 at 9:22
@Hans: Your remark to Pete is indeed incorrect. Allowing quantification over subsets of the domain (or functions or relations) is called a second-order quantifier, and there are various versions of second-order set theory. For example, try a google search for Bernays-Goedel set theory or Kelley-Morse set theory. In particular, KM set theory is strictly stronger than ZFC in consistency strength, largely because of the power of its second-order quantifiers. – Joel David Hamkins Feb 1 2010 at 13:57
## 5 Answers
According to Gödel's incompleteness theorem, ZFC cannot prove its own consistency. Therefore, it is relatively consistent with ZFC that there are no set models of ZFC. In this case, there is still a proper class model of ZFC, namely the von Neumann universe, V, itself, among others (e.g., L, or forcing extensions of V). However, the fact that V is a model of ZFC cannot be proven formally within ZFC. Indeed, truth in V cannot be defined in V, due to a result of Tarski.
If we allow for some stronger axioms, then we can get set models of ZFC. For instance, if there exists an inaccessible cardinal, $\kappa$, then $V_\kappa$ is a set model of ZFC.
You're supposed to think of sets. Definitely.
Here's an analogy you might find helpful: let's use the name "ZFC-" for the axioms of ZFC but without the axiom of infinity. Now, if I suddenly decreed that everything that isn't a member of $V_\omega$ is no longer a set, ZFC- would still be satisfied. That is, the members of $V_\omega$, taken as a collection, are a model of ZFC-.
If we wrap those members up into a set (which happens to be $V_\omega$ itself), that set is considered a model of ZFC-. Technically we also have to provide a relation (set of ordered pairs) that tells us what $\epsilon$ means, but if $\epsilon$ in the model means the same thing as $\epsilon$ in the "outer" set theory we can just mention that fact and proceed.
So, now that you know that $V_\omega$ is a model of ZFC-, perhaps you can imagine what a model of ZFC might look like. It's a really, really, really big set -- let's call it "M" -- such that all the sets inside it, taken together, are enough to satisfy the axioms of ZFC. But you don't need "M" itself to satisfy ZFC.
That's what a model of ZFC looks like.
This answer is going to be a bit too informal, but I hope it helps.
Imagine we have the collection of all sets. Let us call them the real sets, and their membership relation the real set membership. The empty set is "actually" empty, and the class of all ordinals is "actually" a proper class.
Now that we have the real sets we can use them as the "ontological substratum" upon which everything else will be built from. And this, of course, includes formal theories and their models.
A model of any first-order theory is then only a real set. This applies to your favorite set theory too. So the models of your set theory are only real sets (but the models don't know it, just as they don't know if their empty sets are actually empty or if their set membership is the real one).
This view fits well, for example, with the idea of moving from a transitive model to a generic extension of it or to one with a constructible universe: we are simply moving from a class of models to another one, each one consisting of real sets.
But this view also leaves us with too many entities, and maybe here we have an opportunity to apply Occam's razor. It looks like we have two kinds of theories: one for the real sets, which is made of things that are not sets (we can formalize our informal talk about them, but that does not make essentially any difference), another one for the models of set theory, which is made of sets.
The real sets and the theory of the real sets belong to a world where there are real sets, but there are also pigs and cows, and human languages and many other things. We don't need all that to do mathematics, do we? So why not dive into the world of the real sets and ignore everything else?
If this story sounds too platonistic, I am sure it must have a formalistic counterpart.
With my question:
http://mathoverflow.net/questions/28869/how-to-think-like-a-set-or-a-model-theorist
I expected to obtain an official view about all this stuff. I somehow succeeded on this, but as you can see, I'm still working on it.
Here is a related answer to a related question which I also find useful:
http://mathoverflow.net/questions/15685/is-it-necessary-that-model-of-theory-is-a-set/15713#15713
From comment: how do we get from "the abstract" to "the concrete"?
In my partly informed opinion, not by formal model theory! The ability of set theory to describe its own models is one of the pillars of its success in the foundations of mathematics, but while its model theory helps us to understand the structure of set theory, it mostly doesn't help us understand what believing in the axioms of set theory commits us to.
I think Gödel's constructible universe helps us do that, particularly since it helps us understand the cumulative hierarchy. Fraenkel-Mostowski models do too: the permutability of urelements casts useful light on what's at stake with the axiom of choice. But while these two results are model-theoretic, they don't have much to do with the current direction of modelling set theory in itself.
We get more insight from looking at set theory from below: what is lost with weaker set theories like KP, IZF, and CZF? This gives me more feeling of getting at the concrete commitments made by set theory.
Thanks for taking my question serious and for this concise answer! Thanks too for your mention of urelements (since in some way they are the most abstract mathematical objects I can think of). – Hans Stricker Feb 1 2010 at 11:59
Suppose You start with something easier. Let's start with Peano Arithmetic. It is defined by a certain set of axioms, and it definitely does not say anything about numbers, at least from a formal point of view! It only speaks about some "hypothetical objects" for which several operations are defined, for example the successor of an existing object. In fact, objects which obey the PA axioms may not be numbers at all (numbers as we use them, of course), and it is a matter of taste, from that point of view, whether we say: if some structure obeys the PA axioms, we should call it the natural numbers.
That is the subject matter of model theory: every structure in which the PA axioms are satisfied is called a model of Peano Arithmetic, and among them, the natural numbers are a natural and intuitive model for the PA axioms.
Now let's turn to ZFC. ZFC has its own axiom set, the Zermelo-Fraenkel axioms. From a conservative and educational point of view everything is fine: the relation $x \in y$ has the meaning "x is an element of y". But when You want to say something about models of ZFC, You should definitely drop that way of describing it. The more proper way would be to use some abstract symbol for this relation, say $xRy$: forget the meaning "x is an element of y" and use "x is in relation with y" instead. So $R$ is an abstract binary relation used in the ZFC axiom set!
Then You may look for structures which satisfy the ZFC axioms, in a purely formal sense. And of course, the usual universe of sets satisfies them, so it is a candidate for a model of ZFC theory, besides the fact that there is a problem with the Cantor paradox, which prevents such a structure (the "class of sets") from being a proper ZFC model....
What luck! If set theory could be a proper model of ZFC, then it would be inconsistent, for (based on Tim Chow's article "A Beginner's Guide to Forcing"):
"by a result known as the completeness theorem, the statement that ZFC has any models at all is equivalent to the statement that ZFC is consistent "
So if that had happened, we could prove the consistency of ZFC inside ZFC, which naturally leads us to inconsistency...
So the objects You refer to as "sets" in Your question are a "near model" of ZFC theory, which speaks about objects obeying the ZFC axioms in terms of the binary relation $R$. If You find other "universes" satisfying the ZFC axioms, You may call them "sets" too. But in fact their relation to ZFC is exactly the same as that between objects from nonstandard models of Peano Arithmetic and the natural numbers, or between non-isomorphic objects satisfying the group theory axioms (models of the group theory axioms), that is, non-isomorphic groups, and so on.
http://amathew.wordpress.com/tag/geodesics/
# Climbing Mount Bourbaki
Thoughts on mathematics
November 26, 2009
## Towards Cartan’s fixed point theorem: Cosine inequalities and derivative of the distance
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: Cartan fixed point theorem, cosine inequality, geodesics, negative curvature |
I am now aiming to prove an important fixed point theorem:
Theorem 1 (Elie Cartan) Let ${K}$ be a compact Lie group acting by isometries on a simply connected, complete Riemannian manifold ${M}$ of negative curvature. Then there is a common fixed point of all ${k \in K}$.
There are several ingredients in the proof of this result. These will provide examples of the techniques that I have discussed in past posts.
Geodesic triangles
Let ${M}$ be a manifold of negative curvature, and let ${V}$ be a normal neighborhood of ${p \in M}$; this means that ${\exp_p}$ is a diffeomorphism of some neighborhood of ${0 \in T_p(M)}$ onto ${V}$, and any two points in ${V}$ are connected by a unique geodesic. (This always exists by the normal neighborhood theorem, which I never proved. However, in the case of Cartan’s fixed point theorem, we can take ${V=M}$ by Cartan-Hadamard.)
So take ${a=p,b,c\in V}$. Draw the geodesics ${\gamma_{ab}, \gamma_{ac}, \gamma_{bc}}$ between the respective pairs of points, and let ${\Gamma_{ab}, \Gamma_{ac}, \Gamma_{bc}}$ be the inverse images in ${T_p(M) = T_a(M)}$ under ${\exp_p}$. Note that ${\Gamma_{ab}, \Gamma_{ac}}$ are straight lines, but ${\Gamma_{bc}}$ is not in general. Let ${a',b',c'}$ be the points in ${T_p(M)}$ corresponding to ${a,b,c}$ respectively. Let ${A}$ be the angle between ${\gamma_{ab}, \gamma_{ac}}$; it is equivalently the angle at the origin between the lines ${\Gamma_{ab}, \Gamma_{ac}}$, which is measured through the inner product structure.
Now ${d(a,b) = l(\gamma_{ab}) = l(\Gamma_{ab}) = d(a',b')}$ from the figure and since geodesics travel at unit speed, and similarly for ${d(a,c)}$. Moreover, we have ${d(b,c) = l(\gamma_{bc}) \geq l(\Gamma_{bc}) \geq d(b',c')}$: the first inequality comes from the fact that ${M}$ has negative curvature, so ${\exp_p}$ increases the lengths of curves (this was established in the proof of the Cartan-Hadamard theorem), and the second from the fact that the straight line minimizes length in ${T_p(M)}$.
Evidently, by the Euclidean law of cosines applied to the triangle ${a'b'c'}$ in ${T_p(M)}$,
$\displaystyle d(b',c')^2 = d(a',c')^2 + d(a',b')^2 - 2d(a',b')d(a',c') \cos A.$
In particular, all this yields
$\displaystyle \boxed{d(b,c)^2 \geq d(a,c)^2 + d(a,b)^2 - 2d(a,b)d(a,c) \cos A.}$
So we have a cosine inequality.
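To see the inequality in action, here is a quick numeric check (a sketch of my own, not part of the original post): take ${M}$ to be the hyperbolic plane of constant curvature ${-1}$, where the hyperbolic law of cosines gives ${d(b,c)}$ in closed form, and compare against the Euclidean value.

```python
import math

# Hyperbolic law of cosines (curvature -1): given side lengths d(a,b),
# d(a,c) and the angle A between them at a,
#   cosh d(b,c) = cosh d(a,b) cosh d(a,c) - sinh d(a,b) sinh d(a,c) cos A.
def hyperbolic_side(ab, ac, A):
    return math.acosh(math.cosh(ab) * math.cosh(ac)
                      - math.sinh(ab) * math.sinh(ac) * math.cos(A))

# Euclidean comparison value from the ordinary law of cosines.
def euclidean_side(ab, ac, A):
    return math.sqrt(ab**2 + ac**2 - 2 * ab * ac * math.cos(A))

for ab, ac, A in [(1.0, 1.5, 0.7), (2.0, 2.0, 1.2), (0.3, 3.0, 2.5)]:
    d_hyp, d_euc = hyperbolic_side(ab, ac, A), euclidean_side(ab, ac, A)
    assert d_hyp >= d_euc  # the boxed cosine inequality
    print(f"A={A}: d(b,c) = {d_hyp:.4f} >= Euclidean {d_euc:.4f}")
```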
There is in fact an ordinary plane triangle with sides ${d(a,b), d(b,c),d(a,c)}$, since these satisfy the appropriate inequalities (unless ${a,b,c}$ lie on the same geodesic, a case we exclude). The angles ${A',B',C'}$ of this plane triangle satisfy
$\displaystyle A \leq A'$
by the boxed inequality. In particular, if we let ${B}$ (resp. ${C}$) be the angles between the geodesics ${\gamma_{ab}, \gamma_{bc}}$ (resp. ${\gamma_{ac}, \gamma_{bc}}$), then by symmetry and ${A'+B'+C'=\pi}$
$\displaystyle \boxed{A + B+ C \leq \pi.}$
This is a fact which I vaguely recall from popular-math books many years back. The rest is below the fold. (more…)
November 24, 2009
## The second variation formula
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: calculus of variations, curvature tensor, energy integral, geodesics, second variation formula |
I’m going to keep the same notation as before. In particular, we’re studying how the energy integral behaves with respect to variations of curves. Now I want to prove the second variation formula when ${c}$ is a geodesic, i.e. to compute ${\frac{d^2}{du^2} E(u)|_{u=0}}$ for further use. We already showed $\displaystyle E'(u) = \int_I g\left( \frac{D}{dt} \frac{\partial}{\partial u} H(t,u), \frac{\partial}{\partial t} H(t,u) \right).$ Differentiating again yields the messy formula for ${E''(u)}$:
$\displaystyle \int_I g\left( \frac{D}{du} \frac{D}{dt} \frac{\partial}{\partial u} H, \frac{\partial}{\partial t} H \right) + \int_I g\left( \frac{D}{dt} \frac{\partial}{\partial u} H, \frac{D}{du}\frac{\partial}{\partial t} H(t,u) \right).$
Call these ${I_1(u), I_2(u) }$.
${I_2}$
Now ${I_2(0)}$ is the easiest, since by symmetry of the Levi-Civita connection we get $\displaystyle I_2(0) = \int g\left( \frac{D}{dt} \frac{\partial}{\partial u} H(t,u), \frac{D}{dt}\frac{\partial}{\partial u} H(t,u) \right) = \int g\left( \frac{D}{dt} V, \frac{D}{dt} V \right).$ For vector fields ${E,F}$ along ${c}$ with ${E(a)=F(a)=E(b)=F(b)=0}$, we have $\displaystyle \int g\left( \frac{D}{dt} E, \frac{D}{dt} F \right) = - \int g\left( \frac{D^2}{dt^2} E , F \right).$ This is essentially a form of integration by parts. Indeed, the difference between the two terms is
$\displaystyle \frac{d}{dt} g\left( \frac{D}{dt} E, F \right).$
So if we plug this in we get
$\displaystyle \boxed{ I_2(0) = -\int g\left( \frac{D^2}{dt^2} V , V \right).}$
${I_1}$
Next, we can write
$\displaystyle I_1(0) = \int_I g\left( \frac{D}{du} \frac{D}{dt} \frac{\partial}{\partial u} H(t,u) |_{u=0}, \dot{c}(t) \right)$ Now ${R}$ measures the difference from commutation of ${\frac{D}{dt}, \frac{D}{du}}$. In particular this equals
$\displaystyle \int_I g\left( \frac{D}{dt} \frac{D}{du} \frac{\partial}{\partial u} H(t,u) |_{u=0}, \dot{c}(t) \right) + \int_I g\left( R(V(t), \dot{c}(t)) V(t), \dot{c}(t) \right).$
By antisymmetry of the curvature tensor (twice!) the second term becomes
$\displaystyle \int_I g\left( R( \dot{c}(t), V(t)) \dot{c}(t), V(t) \right).$
Now we look at the first term, which we can write as
$\displaystyle \int_I \frac{d}{dt} g\left( \frac{D}{du} \frac{\partial}{\partial u} H(t,u), \dot{c}(t)\right)$
since ${\ddot{c} \equiv 0}$. But this is clearly zero because ${H}$ is constant on the vertical lines ${t=a,t=b}$. If we put everything together we obtain the following “second variation formula:”
Theorem 1 If ${c}$ is a geodesic, then $\displaystyle \boxed{\frac{d^2}{du^2}|_{u=0} E(u) = \int_I g\left( R( \dot{c}(t), V(t)) \dot{c}(t) - \frac{D^2 V}{Dt^2}, V(t) \right).}$
Evidently that was some tedious work, and the question arises: Why does all this matter? The next goal is to use this to show when a geodesic cannot minimize the energy integral—which means, in particular, that it doesn’t minimize length. Then we will obtain global comparison-theoretic results.
November 15, 2009
## Hopf-Rinow II and an application
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: geodesic completeness, geodesics, homotopy, Hopf-Rinow theorem |
Now, let’s finish the proof of the Hopf-Rinow theorem (the first one) started yesterday. We need to show that given a Riemannian manifold ${(M,g)}$, viewed as a metric space with its induced metric ${d}$, the existence of arbitrary geodesics from ${p}$ implies that ${M}$ is complete with respect to ${d}$. Actually, this is slightly stronger than what H-R states: geodesic completeness at one point ${p}$ implies completeness.
The first thing to notice is that ${\exp: T_p(M) \rightarrow M}$ is smooth by the global smoothness theorem and the assumption that arbitrary geodesics from ${p}$ exist. Moreover, it is surjective by the second Hopf-Rinow theorem.
Now fix a ${d}$-Cauchy sequence ${q_n \in M}$. We will show that it converges. Draw minimal geodesics ${\gamma_n}$ travelling at unit speed with
$\displaystyle \gamma_n(0)=p, \quad \gamma_n( d(p,q_n)) = q_n.$ (more…)
November 14, 2009
## The Hopf-Rinow theorems and geodesic completeness
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: completeness, geodesic completeness, geodesics, Hopf-Rinow theorem |
Ok, yesterday I covered the basic fact that given a Riemannian manifold ${(M,g)}$, the geodesics on ${M}$ (with respect to the Levi-Civita connection) locally minimize length. Today I will talk about the phenomenon of “geodesic completeness.”
Henceforth, all manifolds are assumed connected.
The first basic remark to make is the following. If ${c: I \rightarrow M}$ is a piecewise ${C^1}$-path between ${p,q}$ and has the smallest length among piecewise ${C^1}$ paths, then ${c}$ is, up to reparametrization, a geodesic (in particular smooth). The way to see this is to pick ${a,b \in I}$ very close to each other, so that ${c([a,b])}$ is contained in a neighborhood of ${c\left( \frac{a+b}{2}\right)}$ satisfying the conditions of yesterday’s theorem; then ${c|_{[a,b]}}$ must be length-minimizing, so it is a geodesic. We thus see that ${c}$ is locally a geodesic, hence globally.
Say that ${M}$ is geodesically complete if ${\exp}$ can be defined on all of ${TM}$; in other words, a geodesic ${\gamma}$ can be continued to ${(-\infty,\infty)}$. The name is justified by the following theorem:
Theorem 1 (Hopf-Rinow)
The following are equivalent:
• ${M}$ is geodesically complete.
• In the metric ${d}$ on ${M}$ induced by ${g}$ (see here), ${M}$ is a complete metric space (more…)
November 13, 2009
## Geodesics are locally length-minimizing
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: Gauss lemma, geodesics |
Fix a Riemannian manifold ${M}$ with metric ${g}$ and Levi-Civita connection ${\nabla}$. Then we can talk about geodesics on ${M}$ with respect to ${\nabla}$. We can also talk about the length of a piecewise smooth curve ${c: I \rightarrow M}$ as
$\displaystyle l(c) := \int g(c'(t),c'(t))^{1/2} dt .$
Our main goal today is:
Theorem 1 Given ${p \in M}$, there is a neighborhood ${U}$ containing ${p}$ such that geodesics from ${p}$ to every point of ${U}$ exist and also such that given a path ${c}$ inside ${U}$ from ${p}$ to ${q}$, we have
$\displaystyle l(\gamma_{pq}) \leq l(c)$
with equality holding if and only if ${c}$ is a reparametrization of ${\gamma_{pq}}$.
In other words, geodesics are locally length-minimizing. Not necessarily globally: a great circle is a geodesic on a sphere with the Riemannian metric coming from the embedding in $\mathbb{R}^3$, but it need not be the shortest path between two points. (more…)
November 4, 2009
## Geodesics and the exponential map
Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: connections, exponential map, geodesics, ordinary differential equations |
Ok, we know what connections and covariant derivatives are. Now we can use them to get a map from the tangent space ${T_p(M)}$ at one point to the manifold ${M}$ which is a local isomorphism. This is interesting because it gives a way of saying, “start at point ${p}$ and go five units in the direction of the tangent vector ${v}$,” in a rigorous sense, and will be useful in proofs of things like the tubular neighborhood theorem—which I’ll get to shortly.
Anyway, first I need to talk about geodesics. A geodesic is a curve ${c}$ such that the vector field along ${c=(c_1, \dots, c_n)}$ created by the derivative ${c'}$ is parallel. In local coordinates ${x_1, \dots, x_n}$, here’s what this means. Let the Christoffel symbols be ${\Gamma^k_{ij}}$. Then using the local formula for covariant differentiation along a curve, we get
$\displaystyle D(c')(t) = \sum_j \left( c_j''(t) + \sum_{i,k} c_i'(t) c_k'(t) \Gamma^j_{ik}(c(t)) \right) \partial_j,$
so ${c}$ being a geodesic is equivalent to the system of differential equations
$\displaystyle c_j''(t) + \sum_{i,k} c_i'(t) c_k'(t) \Gamma^j_{ik}(c(t)) = 0, \ 1 \leq j \leq n.$ (more…)
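As a concrete illustration (my own sketch, not from the post), one can integrate this system numerically. On the unit sphere with coordinates ${(\theta, \phi)}$ and metric ${d\theta^2 + \sin^2\theta \, d\phi^2}$, the nonzero Christoffel symbols are the classical ${\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta}$ and ${\Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta}$, and a geodesic started tangent to the equator should stay on the equator:

```python
import math

# Geodesic equations on the unit sphere in coordinates (theta, phi):
#   theta'' = sin(theta) cos(theta) (phi')^2
#   phi''   = -2 cot(theta) theta' phi'
def rhs(state):
    th, ph, dth, dph = state
    return (dth, dph,
            math.sin(th) * math.cos(th) * dph ** 2,
            -2.0 * (math.cos(th) / math.sin(th)) * dth * dph)

def rk4_step(state, h):
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(shift(state, k1, h / 2))
    k3 = rhs(shift(state, k2, h / 2))
    k4 = rhs(shift(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start on the equator moving due east at unit speed.
state = (math.pi / 2, 0.0, 0.0, 1.0)
for _ in range(5000):
    state = rk4_step(state, 1e-3)
print(state)  # theta stays ~pi/2, phi ~ 5.0: the great circle, as expected
```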
http://www.physicsforums.com/showthread.php?s=1e361ec2d93607a340b51c20d338d7f7&p=4263307
Physics Forums
## Classical Fields and Newton's 2nd Postulate of Motion
Quote by dr_k where Newton's 2nd Postulate is ${\bf F}_{net} = f(q_I,m_I) {\bf a}$ , here $m_I$ is the inertial mass
This doesn't work. By DEFINITION inertial mass is the proportionality between force and acceleration: $F=m_I a$ (by definition of inertial mass). So if you also have $F=f(q_I,m_I)a$ (by postulate) then you immediately get $f(q_I,m_I)=m_I$, which is not what you want.
You can have $f(q_I,m_{grav})$ and still have an interesting question, but whatever m and q you put in there, by definition $m_I=f(q,m)$
It seems to me that your question reduces to asking about the equivalence of gravitational and inertial mass.
Quote by DaleSpam This doesn't work. By DEFINITION inertial mass is the proportionality between force and acceleration: $F=m_I a$ (by definition of inertial mass). So if you also have $F=f(q_I,m_I)a$ (by postulate) then you immediately get $f(q_I,m_I)=m_I$, which is not what you want. You can have $f(q_I,m_{grav})$ and still have an interesting question, but whatever m and q you put in there, by definition $m_I=f(q,m)$ It seems to me that your question reduces to asking about the equivalence of gravitational and inertial mass.
Bear with me here, $m_I$ is a new definition that denotes one of the two intrinsic properties of matter, $m_I$ and $q_I$, such that the "new" $m_I$ is not defined to be "the proportionality between net force and acceleration", but part of a scalar function $f(q_I, m_I)$ that expands/contracts the acceleration in Newton's 2nd Postulate. A scalar function with a first dominant term whose current interpretation is "the proportionality between force and acceleration". This new $m_I$ may or may not be equivalent to $m_{grav}$.
For those who design Lagrangians to see what the action spits out, it may be an interesting exercise.
This is just a thought experiment that changes axiomatic definitions and postulates, to see how/if things change.
Quote by dr_k Bear with me here, $m_I$ is a new definition
Then don't call it "inertial mass", because that is already defined and is f(q,m). I wouldn't even use a subscript "I" for it, since people will naturally assume that you mean "inertial mass".
Quote by DaleSpam Then don't call it "inertial mass", because that is already defined and is f(q,m). I wouldn't even use a subscript "I" for it, since people will naturally assume that you mean "inertial mass". You should think about how you would measure your third mass.
I wouldn't interpret there being 3 masses, just 2. Let $m_I$ and $q_I$ stand for the intrinsic properties of matter, mass and charge, for an isolated test particle. They are part of the scalar function $f(q_I,m_I)$ that expands/contracts the acceleration in Newton's 2nd Postulate of Motion. $m_{grav}$, on the other hand, is defined by Newton's Postulate of Gravity, and $q_{Coulomb}$ is defined by Coulomb's Electrostatic Postulate. The series expansion of $f(q_I,m_I)$ gives, to first order, a term $m_I$ which is currently called the "inertial mass", where $m_{inertial}\equiv m_I$.
This is my fault, for being so vague w/ my words. I appreciate your input. Thanks.
Quote by dr_k Thanks for your input. I wouldn't interpret there being 3 masses, just 2. Let $m_I$ and $q_I$ stand for the intrinsic properties of matter, mass and charge, for an isolated test particle. They are part of the scalar function $f(q_I,m_I)$ that expands/contracts the acceleration in Newton's 2nd Postulate of Motion. $m_{grav}$, on the other hand, is defined by Newton's Postulate of Gravity, and $q_{Coulomb}$ is defined by Coulomb's Electrostatic Postulate. The series expansion of $f(q_I,m_I)$ gives, to first order, a term $m_I$ which is currently called the "inertial mass", where $m_{inertial}\equiv m_I$. This is my fault, for being so vague w/ my words. I appreciate your input. Thanks.
You don't seem to have responded to my description of the GR take on this (this is the relativity forum, not the classical physics forum). In GR, it is already true that a charged particle of mass m contributes differently as a gravitational source than a neutral particle (where m is the inertial mass defined by force and proper acceleration).
So, if you take m to be gravitational source mass, and f(m,q) to be <= m(grav), then GR already incorporates your idea, in a way. If you want to make f(m,q) >= m(grav), then it appears to me your concept is inherently counterfactual, given the strength of evidence for GR.
Quote by dr_k The series expansion of $f(q_I,m_I)$ gives, to first order, a term $m_I$ which is currently called the "inertial mass", where $m_{inertial}\equiv m_I$.
No, what is currently called the "inertial mass" is your $f(q_I,m_I)$. This is not just to first order, this is exact. The DEFINITION of inertial mass, m, is m=F/a.
You are positing a third mass, "intrinsic mass", which is related to inertial mass, m, by $m=f(q_I,m_I)$.
Do you see that now? I don't know how I can be more clear.
You can measure the gravitational mass using a balance scale. You can then measure the inertial mass by dropping the object and measuring the acceleration. I cannot think of a way to measure the intrinsic mass.
Quote by PAllen You don't seem to have responded to my description of the GR take on this (this is the relativity forum, not the classical physics forum). In GR, it is already true that a charged particle of mass m contributes differently as a gravitational source than a neutral particle (where m is the inertial mass defined by force and proper acceleration). So, if you take m to be gravitational source mass, and f(m,q) to be < m, then GR already incorporates your idea, in a way. If you want to make f(m,q) >= m(grav), then it appears to me your concept is inherently counterfactual, given the strength of evidence for GR.
Dale and PAllen,
I placed my thread in this forum to get expert GR opinions; I appreciate your input. GR was not my specialty. I will ponder your recent comments. Thanks again.
You are welcome. I would strongly encourage you to think about how to measure your "intrinsic mass". Unless you can come up with some independent way to measure it, all you have is $m=f(q_I,m_I)$, which is kind of one equation in two unknowns ($f$ and $m_I$). You simply won't have enough information to do it.
As DaleSpam said, there is inertial mass $u$, gravitational charge $m$, and electrical charge $q$. I don't believe there is any mathematical necessity in classical physics for $u$ to be proportional to $m$ or $q$. In Newtonian mechanics, the equivalence principle is put in by hand: $u=m$. In GR the equivalence principle is also put in by hand using minimal coupling. In quantum mechanics, there is apparently Weinberg's low energy theorem for relativistic spin 2 particles, in which the equivalence principle is derived. So for each particle the force laws should go something like: $Gm_1m_2/r^2 + Gm_1m_3/r^2 + \dots + Kq_1q_2/r^2 + Kq_1q_3/r^2 + \dots = u_1a_1$. I haven't thought it through, but I wonder if DaleSpam's concern about the number of equations for arbitrary $u_i$ for each particle can be answered with enough particles and particle configurations?
Quote by atyy I wonder if DaleSpam's concern about the number of equations for arbitrary $u_i$ for each particle can be answered with enough particles and particle configurations?
I considered that, but I can't think of how you would determine an unknown function of an unknown parameter regardless of the amount of data. I think if you know the function then you can fit the parameter with sufficient data, and if you know the parameter you can approximate the function as close as you like with sufficient data, but I don't see how you can do both even with an infinite amount of data.
Quote by DaleSpam I considered that, but I can't think of how you would determine an unknown function of an unknown parameter regardless of the amount of data. I think if you know the function then you can fit the parameter with sufficient data, and if you know the parameter you can approximate the function as close as you like with sufficient data, but I don't see how you can do both even with an infinite amount of data.
Yes, that doesn't seem possible for an arbitrary function since that would be (assuming analyticity) an infinite number of Taylor coefficients.
http://nrich.maths.org/7112
# Spring Frames
##### Stage: 5 Challenge Level
Light springs with the same spring constant are attached to light particles and joined to the corners of the following rigid frames, drawn on unit grids (assume that the pentagon and the small triangle are regular):
In each case, where will the particle be at equilibrium and how long will the springs be?
Can you devise any frames for which the equilibrium point would lie outside the frame? On the frame? Can you devise any frames with multiple equilibrium points?
Imagine pulling the particle slightly in each case. In which directions will the particles oscillate to and fro in a straight line? For what sorts of frames would this always be possible?
Extension: If one of the springs has double the spring constant of the other springs in each case, where would the equilibrium points lie?
http://quant.stackexchange.com/questions/3562/when-do-finite-element-method-provide-considerable-advantage-over-finite-differe
# When does the Finite Element method provide a considerable advantage over Finite Differences for option pricing?
I'm looking for concrete examples where a Finite Element method (FEM) provides considerable advantages (e.g. in convergence rate, accuracy, stability, etc.) over the Finite Difference method (FDM) in option pricing and/or calculating sensitivities.
I know that in general Finite Element is a more powerful numerical method for solving PDE's (e.g. it allows one to work on complex geometries), but I'm specifically interested in whether we can fully utilize it for option pricing. I've skimmed through the book Financial Engineering with Finite Elements, but it only states some general considerations about the benefits of FEM in the introductory section, without going into details.
Currently I'm aware of only one case where FEM outperforms FDM in convergence rate, namely in pricing under a Variance Gamma model. But this advantage can be offset by introducing certain enhancements in the FDM.
## 3 Answers
FDMs represent PDEs over a simple grid shape; the different implementations are just different recurrence relations to approximate the solutions to the PDE between boundary values (e.g., for options pricing, $T=[t_\mathrm{now},t_\mathrm{maturity}]$ and $S=[\mathrm{deep\_itm},\mathrm{deep\_otm}]$).
FEM is a general name for a lot of different implementations of "adaptable" grids, where approximations in different regions are not necessarily the same over the entire (e.g., $T,S$) space. One (of many) examples where such approaches are important is path-dependent option valuation (e.g., Asian options; see Zhang, 2001/3, which is FDM; imagine if we wanted to better model the smoothing effect of the averaging over time) or jump processes as described by Merton.
As far as PDEs (deterministic) are concerned we have the notion of a "strong solution" (directly solving the differential operator in the strong formulation of the problem) and the "weak solution" that deals with a weak formulation of the problem.
For the strong formulation, finite differences are the way to go since they are the natural discretization of the differential operator.
For the weak formulation, finite element methods are the way to go since they directly tackle the weak formulation of the problem by restricting it to a finite dimensional space (depending on which type of "element-functions" or basis-functions you choose for this space).
The finite difference method has problems with complex geometries and adaptive meshes - the geometry will not be a problem in option pricing since you always consider the rectangle $[0,T]\times[S_\text{min}, S_\text{max}]$. Local refinement can be a problem - but it depends on the equation and the initial/boundary condition. Furthermore, for some (nonlinear) differential equations, problems with discontinuities occur and you can end up with oscillation effects. There are schemes that circumvent problems like this but you have to invest in the algorithm.
The finite element method is a more general notion. Since SPDEs are defined by their (Ito-) integral formulation, an approach that approximates an integral formulation will feel more natural. That's because the payoff of a European call ($S$-variable) and the Brownian motion paths ($t$-variable) are both not differentiable. Using finite difference methods for SPDEs, the most natural discretizations of the "differential operators" do not give you the right scheme, as far as I know. That would be another hint that points into the finite element direction a little bit.
One problem with comparing the performance of the two is definitely that there are so many different schemes (implicit/explicit of different orders for finite differences, and choices of basis functions and local refinement techniques for finite elements) that it is impossible to say. Maybe for some choice of model and initial/boundary condition one method will outperform the other, but I think it's hard to generalize.
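To make this concrete, here is a minimal explicit finite-difference sketch on exactly that rectangle, for the Black-Scholes PDE and a European call (my own illustration; all parameter values are assumptions, and a production scheme would typically be implicit):

```python
import numpy as np

# Explicit FDM for  V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
# on [0,T] x [0,S_max], stepping backwards from the payoff at maturity.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 300, 20000          # M space steps, N time steps
S = np.linspace(0.0, S_max, M + 1)
dS, dt = S_max / M, T / N                # explicit stability needs roughly
                                         # dt <= dS^2 / (sigma^2 S_max^2)
V = np.maximum(S - K, 0.0)               # call payoff at maturity
for n in range(1, N + 1):
    tau = n * dt                         # time to maturity after this step
    V_S  = (V[2:] - V[:-2]) / (2 * dS)   # central differences
    V_SS = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2
    V[1:-1] += dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * V_SS
                     + r * S[1:-1] * V_S - r * V[1:-1])
    V[0]  = 0.0                              # boundary at S = 0
    V[-1] = S_max - K * np.exp(-r * tau)     # rough deep-ITM boundary
print(V[np.searchsorted(S, K)])  # ~10.45, matching the closed-form price
```

An FEM treatment of the same problem would instead restrict the weak formulation to a finite-dimensional space spanned by basis functions; on this simple rectangle that extra machinery buys relatively little, which is the point being made above.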
I have no personal experience comparing the two methods. However, I have heard that FEM might be preferable to FDM in connection with degenerate PDEs such as the ones that occur in the Hobson-Rogers model.
http://physics.stackexchange.com/questions/51498/dynamics-of-a-vertical-mass-spring-simple-harmonic-oscillator-with-gravity?answertab=votes
# Dynamics of a Vertical Mass-Spring Simple Harmonic Oscillator with Gravity
I am having some trouble obtaining the elastic potential energy and gravitational potential energy of a simple mass spring system.
In this experiment, masses attached to a spring were dropped from a position in which the spring had not been extended. For example, the 20g mass had an equilibrium position of $y=-34.00cm$ and it reached a maximum vertical displacement of $y=-35.83cm$.
Based on this, I found the amplitude of oscillation to be $17.92cm$
Then, using the spring constant and the mass, I determined the natural frequency: $\omega=21.21 rad{\cdot}s^{-1}$
I was able to create the following initial value problem: $$y(t)=c_{1}\cos({\omega}t) + c_{2}\sin({\omega}t)$$ $$y(0)=0$$ $$A=17.92\times 10^{-3}m$$
To begin the solution, I considered the case $t=0$: $$y(0)=c_{1}\cos(\omega\cdot 0) + c_{2}\sin(\omega\cdot 0)$$
$$y(0)=c_{1}$$
$$0=c_{1}$$
Now, I used the amplitude to determine that $c_{2}=17.91\times 10^{-3}$
Skipping a few simple steps, I created to the following function:
$$y(t)=17.92\times 10^{-2}\cos(21.21t)-17.92\times 10^{-2}$$
Now, onto the elastic potential energy,
$$E_{e}=\frac{k\times y(t)^2}{2}$$
$$E_{e}=\frac{(9)(17.92\times 10^{-2}\cos(21.21t)-17.92\times 10^{-3})^2}{2}$$
$$E_{e}=0.15-0.29 \cos(21.21 t)+0.14 \cos^2(21.21 t)$$
This function does not at all resemble what it should look like, a simple periodic function.
I have a feeling that my problem is due to my assigned coordinate system.
Any help at all would be immensely appreciated.
## 2 Answers
I am a little confused as to how you found your amplitude of oscillation. The amplitude is the maximum distance from the equilibrium position. If you say that the equilibrium position is $-34.00$cm, and the maximum vertical displacement is $-35.83$cm, then the amplitude is $1.83$cm.
It appears that you are confusing $y$, the height you measured from some arbitrary point (like the table, or the ground), with the distance from the equilibrium height, $x = y - y_{eq}$. Your working is also suspect. In your derivation you say $0 = c_1$, yet in the next line you say $c_1 = 1.791 \times 10^{-3}$. You should recheck and make sure everything you do makes sense.
Anyway, to answer your question about the potential energy not being a 'simple periodic function', it does not satisfy the same simple harmonic equation $d^2x/dt^2 = - \omega^2 x$ that the distance from the equilibrium point does. However, it is still periodic, but with half the period, or twice the frequency, $2\omega$.
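A quick numeric check of this last point (my own sketch; the constants are illustrative, not the data from the question): if $x(t)=A\cos(\omega t)$ is the displacement from equilibrium, the elastic energy $\frac{1}{2}kx(t)^2$ repeats after half a displacement period.

```python
import math

# Illustrative constants: spring constant, amplitude, angular frequency.
k, A, w = 9.0, 0.0183, 21.21

def x(t):
    return A * math.cos(w * t)

def E(t):
    return 0.5 * k * x(t) ** 2       # elastic potential energy

T = 2 * math.pi / w                  # period of the displacement itself
for t in (0.0, 0.1, 0.2, 0.3):
    assert abs(E(t) - E(t + T / 2)) < 1e-12   # E has period T/2
print("E(t) repeats with period T/2 =", T / 2)
```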
the coordinate system you use is arbitrary, and is valid as long as you keep clear what quantities you are measuring... – nervxxx Jan 17 at 22:19
no. when you say height, you are measuring it from some reference point. Meaningful quantities only come from differences of height. If you dropped the spring from rest 34cm from the equilibrium position (place where the mass comes to a rest/oscillates about), then the amplitude is 34cm. – nervxxx Jan 17 at 22:35
ah when I meant 'come to a rest' I meant after a long long time when damping has removed all its kinetic energy and it stops moving forever. that's the equilibrium point. I didn't mean the height at which it first comes to rest at the bottom of the oscillation, sorry about that. – nervxxx Jan 17 at 23:06
although... I'm a little confused. So you mean to say that you dropped it from y = 0, and the furthest distance it reached was -35.83cm (presumably on the first descent). Then after letting it bounce up and down for many times and when it finally settles down and stops moving completely, it is at y = -34cm? – nervxxx Jan 17 at 23:09
I suggest you redo the experiment, if you can, because the results you have seem inconclusive. if you're stuck with the derivation/math, I'll be happy to help you, but try to sort it out by yourself first. – nervxxx Jan 17 at 23:51
You should try to express the solution as (check that it is a solution of the harmonic oscillator equation)
$$y(t)=A \cos(\omega t+\phi)$$
where $A$ is the amplitude and $\phi$ is the initial phase. Let $t=0$ so
$$y(0)=A\cos(\phi) \rightarrow \cos(\phi)=\frac{y(0)}{A}$$ can you continue from here?
According to my coordinate system, the vertical displacement at time zero is equal to zero. So, should the phase $\phi$ not also equal zero? If so, my solution is equal to the one described by you. Or, is my coordinate system inherently flawed? – Richard P Jan 17 at 22:12
Remember that $\cos(\phi)=0$ means that $\phi=\frac{\pi}{2},\frac{3\pi}{2},...$ I'm going to check the numbers in your first post; be sure to check @nervxxx's answer – Nivalth Jan 17 at 22:14
http://gowers.wordpress.com/2013/02/28/whither-polymath/
# Gowers's Weblog
Mathematics related discussions
## Whither Polymath?
Over at the Polymath blog, Gil Kalai recently proposed a discussion about possible future Polymath projects. This post is partly to direct you to that discussion in case you haven’t noticed it and might have ideas to contribute, and partly to start a specific Polymathematical conversation. I don’t call it a Polymath project, but rather an idea I’d like to discuss that might or might not become the basis for a nice project. One thing that Gil and others have said is that it would be a good idea to experiment with various different levels of difficulty and importance of problem. Perhaps one way of getting a Polymath project to take off is to tackle a problem that isn’t necessarily all that hard or important, but is nevertheless sufficiently interesting to appeal to a critical mass of people. That is very much the spirit of this post.
Before I go any further, I should say that the topic in question is one about which I am not an expert, so it may well be that the answer to the question I’m about to ask is already known. I could I suppose try to find out on Mathoverflow, but I’m not sure I can formulate the question precisely enough to make a suitable Mathoverflow question, so instead I’m doing it here. This has the added advantage that if the question does seem suitable, then any discussion of it that there might be will take place where I would want any continuation of the discussion to take place.
### Fast parallel sorting.
I am in the middle of writing an article about Szemerédi’s mathematical work (for a book about Abel Prize winners), and one of his results that I am writing about is a famous parallel sorting network, discovered with Ajtai and Komlós, that sorts $n$ objects in $C\log n$ rounds. This is a gorgeous result, but unfortunately the constant $C$ is quite large, and remains so after subsequent improvements, which means that in practice their sorting network does not work as well as a simpler one that runs in time $C(\log n)^2$ for a much smaller $C$.
The thought I would like to explore is not a way of solving that problem — obtaining the AKS result with a reasonable constant — but it is closely related. I have a rough idea for a randomized method of sorting, and I’d like to know whether it can be made to work. If it gave a good constant that would be great, but I think even proving that it works with any constant would be nice, unless it has been done already — in which case congratulations to whoever did it and I very much like your result.
The rough idea is this. As with any fast parallel sorting method, the sorting should take place in rounds, where each round consists in partitioning all the objects into pairs (or a constant fraction of them, but there isn’t much to be lost by doing extra comparisons), comparing the two objects in each pair, and swapping them round if the one that should come before the other in fact comes after it.
An obvious thing to try is a random partition. So you start with $n$ objects arranged in an arbitrary order. Your target is to rearrange them so that they are arranged according to some linear ordering that you don’t know, and all you are allowed to do is pairwise comparisons. To visualize it, I like to imagine that the objects are $n$ rocks that all look fairly similar, that you want to lay them out in a line going from lightest to heaviest, and that all you have to help you is a set of $n/2$ balances that can fit at most one rock on each side.
With this picture, the random method would be to partition the rocks randomly into pairs and do the corresponding $n/2$ comparisons. When you have compared rocks $x$ and $y$, you put them back in the same two places in the line, but with the lighter one to the left of the heavier one (so they may have been switched round).
What happens if we do $C\log n$ random comparisons of this kind? The answer is that it doesn’t work very well at all. To see why not, suppose that the ordering is approximately correct, in the sense that for every $m$ the $m$th lightest rock is in roughly the $m$th place. (To be more concrete, we could say that it is in the $r$th place and that $|r-m|\leq\sqrt{n}$ or something like that.) Then when we do a random comparison, the probability that we move any given rock is extremely small (in the concrete case around $n^{-1/2}$ at most). Basically, we are wasting our time comparing rocks when we are pretty certain of the answer in advance.
This suggests an adjustment to the naive random strategy, which is to have a succession of random rounds, but to make them gradually more and more “localized”. (Something like this is quite similar to what Ajtai, Komlós and Szemerédi do, so this idea doesn’t come out of nowhere. But it is also very natural.) That is, at each stage, we would choose a distance scale $d$ and pair up the rocks randomly in a way that favours distances that are of order of magnitude $d$, and we would make $d$ shrink exponentially quickly as we proceed.
The thing that makes it a bit complicated (and again this has strong echoes in the AKS proof) is that if you just do a constant number of rounds at one distance scale, then a few points will get “left behind” — by sheer bad luck they just don’t get compared with anything useful. So the challenge is to prove that as the distance scale shrinks, the points that are far from where they should be get moved with such high probability that they “catch up” with the other points that they should be close to.
It seems to me at least possible that a purely random strategy with shrinking distance scales could sort $n$ objects in $C\log n$ rounds, and that if one devised the random partitions in a particularly nice way then the proof might even be rather nice — some kind of random walk with drift might be taking place for each rock.
So my questions are these.
(i) Has something like this been done already?
(ii) If not, is anyone interested in exploring the possibility?
A final remark is that something that contributed a great deal to the health of the discussion about the Erdős discrepancy problem was the possibility of doing computer experiments. That would apply to some extent here too: one could devise an algorithm along the above lines and just observe experimentally how it does, whether points catch up after getting left behind, etc.
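In that spirit, here is one possible toy experiment (my own sketch; the pairing rule, the constant $c$ and the size $n$ are arbitrary choices rather than a claim about the right scheme). Each round pairs places at a random gap of roughly the current distance scale, the scale shrinks exponentially, and at the end we record how far elements sit from their home positions:

```python
import math, random

def compare_exchange(a, i, j):
    if a[i] > a[j]:                  # put the smaller element earlier
        a[i], a[j] = a[j], a[i]

def random_local_round(a, scale):
    # one parallel round: pick a gap s <= scale, then compare disjoint
    # pairs (i, i+s) in two phases so every place gets a partner at gap s
    n = len(a)
    s = random.randint(1, scale)
    for offset in (0, s):
        for i in range(offset, n - s, 2 * s):
            compare_exchange(a, i, i + s)

def experiment(n=4096, c=0.15):
    a = list(range(n))
    random.shuffle(a)
    r = 0
    while n * math.exp(-c * r) >= 1:         # ~ (1/c) log n rounds
        random_local_round(a, int(n * math.exp(-c * r)))
        r += 1
    disp = max(abs(v - i) for i, v in enumerate(a))
    print(f"{r} rounds, max displacement {disp}")

for _ in range(5):
    experiment()
```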
### 38 Responses to “Whither Polymath?”
1. Uwe Stroinski Says:
February 28, 2013 at 5:54 pm | Reply
(i) I do not know. My knowledge about sorting does not go far beyond ‘quicksort’.
(ii) I would be interested.
2. g Says:
February 28, 2013 at 10:11 pm | Reply
This seems highly reminiscent of “Shell sort” — take a decreasing sequence $a_1, a_2, \dots$, ending in 1; for each $k$ and each residue $b$ mod $a_k$, insertion-sort the elements in positions $t \cdot a_k + b$; note that these “sub-sorts” can run in parallel — which has the following mildly discouraging features.
1. The (worst-case or average) execution time depends very strongly on the exact choice of gap sequence.
2. The way in which this dependence works is poorly understood.
3. No gap sequence produces a worst-case execution time that’s O(N log N). No gap sequence is known to produce an average execution time that’s O(N log N).
Now, maybe Shell sort is too rigid somehow, and the sort of randomized schedule you propose would somehow average things out in a helpful way. A bit of googling turns up a paper by Michael Goodrich describing what he calls “randomized Shellsort”, which allegedly runs in time O(N log N) and sorts correctly with very high probability. This looks really quite similar to what you propose here.
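For reference, here is a standard sketch of the Shell sort just described (in Python; the gap sequence below is Ciura's empirically derived one, and the choice of sequence is exactly the delicate point mentioned above):

```python
def shellsort(a, gaps):
    # for each gap g, insertion-sort every chain a[b], a[b+g], a[b+2g], ...;
    # the chains for a fixed g are independent, hence parallelizable
    for g in gaps:
        for i in range(g, len(a)):
            v, j = a[i], i
            while j >= g and a[j - g] > v:
                a[j] = a[j - g]
                j -= g
            a[j] = v
    return a

print(shellsort([5, 2, 9, 1, 7, 3, 8, 4, 6, 0],
                [701, 301, 132, 57, 23, 10, 4, 1]))
```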
3. g Says:
February 28, 2013 at 10:13 pm | Reply
Er, sorry, of course I should have given a link to Goodrich’s paper. It’s at https://www.siam.org/proceedings/soda/2010/SODA10_101_goodrichm.pdf .
4. gowers Says:
February 28, 2013 at 11:06 pm | Reply
Thanks for that interesting reference, which I didn’t know about. It does indeed look similar, though if I understand correctly the depth is greater than $C\log N$. Also, I was hoping for a sorting network that sorts every single permutation rather than just almost every permutation. But the complicated argument in this paper suggests that that may be hard to achieve.
Having said that, I don’t want to rule out the possibility that one might be able to do things cleverly somehow and make the argument simpler.
5. Gerhard Paseman Says:
March 1, 2013 at 6:25 am | Reply
I was motivated to ask http://mathoverflow.net/questions/31364/inversion-density-have-you-seen-this-concept by a problem similar to yours, but without the random element. The references listed in the question may be of use; Poonen’s analysis is general enough that it might apply. However,
you or a student of yours might try an experiment comparing, say, combsort with a randomized version which changes the gap length by +- 1 each time a comparison is done.
6. Gil Kalai Says:
March 1, 2013 at 2:50 pm | Reply
At the time I felt that the best chances for a definite success in the various suggestions before agreeing on EDP was the first case of polynomial HJT. http://gowers.wordpress.com/2009/11/14/the-first-unknown-case-of-polynomial-dhj/
(Of course some propose projects were more “open ended”)
• gowers Says:
March 2, 2013 at 3:51 pm
I still very much like that question and would be interested in exploring it at some point.
7. Alec Edgington Says:
March 2, 2013 at 8:54 am | Reply
Sounds like a nice question: I’d be interested in exploring it.
8. gowers Says:
March 2, 2013 at 3:26 pm | Reply
From the response so far, it’s not clear whether a Polymath project based on this question would get off the ground (I mean that both positively and negatively), so let me try to be more specific about the question. The weakness in the proposed proof so far is that I don’t have a very precise idea about why it might be reasonable to hope that the elements that get “left behind” should eventually catch up. However, I do have a vague idea, and it might be interesting to try to turn that into a more precise heuristic argument, by which I mean turn it from a qualitative argument into a quantitative one, perhaps making some plausible assumptions along the way.
The vague idea is this. Suppose that we have a linearly ordered set $x_1<\dots<x_n$ and an initial permutation of that set. Let us write $\pi_r(m)$ for the place occupied by $x_m$ after $r$ rounds of sorting. And suppose that $r$ is such that $|\pi_r(m)-m|\leq t$ for all but a small percentage of $m$. What we expect is that $t$ should be $ne^{-cr}$ for some positive constant $c$, and indeed I think that something like that should be easy to prove, at least for small $r$.
That means that by the $r$th stage, the algorithm should by and large be working at distance scale $ne^{-cr}$. Suppose now that the element $x_m$ is one of the few that is way out of position: for convenience, let us assume that $\pi_r(m)-m$ is much bigger than $ne^{-cr}$. Then at time $r$, $x_m$ will be surrounded (at a distance scale of $ne^{-cr}$) by elements that are much larger, so practically all comparisons we ever make will result in $x_m$ being moved closer to place $m$. (To be more precise, I would expect about half the comparisons to be with objects in later places, which would result in no movement at all, and half to be with objects in earlier places, which would result in significant movement towards $m$. Putting those two together should give significant movement towards $m$.)
So the thing that gives me hope about this is that the elements that are far from their correct positions are much more likely to move towards their correct positions.
To be slightly more precise still, let’s say that an element is unlucky at stage $r$ if it doesn’t get moved significantly nearer to its correct position. At the beginning of the process, we would normally expect an element to be unlucky with constant positive probability. If that remained true, then even after logarithmically many steps we would expect at least some elements to be unlucky. But by then the distance scale would have contracted to $n^\alpha$ for some $\alpha<1$ and it starts to look very difficult for that element to catch up. However, one of the assumptions I have just made is too pessimistic, since after a few steps the probability that an element is unlucky should no longer be a constant.
Having written that, I realize that it is not quite correct as I’ve stated it. Suppose that at each stage the probability that the element in place $a$ is compared with an element in place $b$ for some $b>a$ is $1/2$. Then if $\pi_1(m)$ is significantly greater than $m$, there is a constant probability at each stage that $x_m$ will be compared with an element in a place greater than $\pi_1(m)$, in which case $x_m$ is unlucky. So for my vague argument to work, the random comparisons need to be organized in a way that guarantees that roughly half the comparisons should be with an element to the left and roughly half with an element to the right.
One possible way of doing that would be to pick at each stage a random number $s$ at the right distance scale, partition into residue classes mod $s$, and do two rounds, one where the pairs are $(a,a+s)$, $(a+2s,a+3s)$, etc. and the other where the pairs are $(a+s,a+2s)$, $(a+3s,a+4s)$, etc.
9. gowers Says:
March 2, 2013 at 3:50 pm | Reply
Here’s the kind of heuristic calculation I’d like to see done. First, we should work out what we expect the distribution of $\pi_s(u)-u$ to look like. Next, suppose that an element $x_m$ starts in a position that is $\theta n$ places to the right of $m$: that is, $\pi_1(m)-m=\theta n$. Then we should work out the probability that $\pi_r(m)-m$ is large assuming that $\pi_s(u)-u$ has roughly the expected distribution at all intermediate stages.
What makes it a bit tricky is that the probability in question affects the distribution: it looks as though the distribution is some kind of fixed point, and therefore quite hard to guess. The distribution itself depends on the distribution of the random comparisons, but that is less problematic — one could choose the comparisons in as nice a way as possible in the hope of simplifying the calculations. This could be a case where some computer experimentation would be interesting: if we choose some natural seeming collection of comparisons, such as the ones suggested at the end of the previous comment, with $s$ normally distributed with standard deviation $e^{-cr}n$ for some smallish constant $c$, what is the distribution of $\pi_s(u)-u$ like?
I’ve just seen an additional complication, which is that it is not clear what I mean by “the distribution”. In particular, am I talking about a random initial arrangement of the $x_i$ or the worst-case initial arrangement? It is the latter that I unconsciously had in mind. So I suppose that what I’m looking for is an “upper bound” for the distribution: that is, for each $k$ an upper bound for the proportion of $m$ such that $|\pi_1(m)-m|\geq k$, the upper bound being valid whatever the initial sequence is.
10. gowers Says:
March 2, 2013 at 4:39 pm | Reply
I’ve just realized that there is a problem with my suggested random scheme for sorting if we want to sort all sequences rather than just almost all. Given any place $p$, the number of elements $x_i$ that can be at $p$ after $r$ rounds of comparison is at most $2^r$. So if $r$ is less than logarithmic, then we can fix it so that the element initially in place $p$ is much too large, and all the elements that could possibly end up at place $p$ are even larger.
This implies that after, say, $\log n/2$ rounds, we may have some elements that are out of place by $n/2$. But by that stage, the distance scale has dropped to $n^\alpha$, which means that at least $n^{1-\alpha}$ steps will need to be taken by those elements, which is impossible in only logarithmically many rounds.
This seems to suggest that if the difference scales decrease in such a way that after a short time it becomes very unlikely that comparisons between distant places are ever made again, then we are in trouble. So for an approach like this to work, what seems to be required is for a typical comparison to become more and more focused on small distance scales without losing sight of the large distance scales. Exactly how this should be done is not clear to me, though it might be possible to get some insight into the problem by looking at the AKS scheme and seeing how they cope with elements that are left behind. Maybe some randomized version of what they do would work, though if it was too close to their approach then it wouldn’t be very interesting.
• Gil Kalai Says:
March 2, 2013 at 5:40 pm
A few remarks: 1) There is a simplification of AKS due to M. S. Paterson: Improved sorting networks with O(log N) depth, Algorithmica 5 (1990), no. 1, pp. 75–92. Perhaps there are further works.
2) There is a beautiful area of random sorting networks. See Random Sorting Networks by Omer Angel, Alexander E. Holroyd, Dan Romik, Balint Virag http://arxiv.org/abs/math/0609538
3) If you take a random sorting network of depth C log n (or more generally depth m) can we have a heuristic estimation for the probability p that this network will sort every n numbers? (Or perhaps simpler an expectation and variance for the number of orderings it will sort?) Then we can see when p is larger than 1/(n!)^m.
• gowers Says:
March 2, 2013 at 5:58 pm
A quick further remark. The following is a slightly daring conjecture, but I don’t see an obvious argument for its being false. One could simply do what I was suggesting and then do it again. My remark above shows that the first time round it can’t sort everything, but the second time round the initial ordering will on average be very close to correct, and then it isn’t clear that a second pass won’t tidy everything up.
• Gil Kalai Says:
March 3, 2013 at 10:58 am
Maybe it will be useful to restate (copy and paste) or link to the explicit description of “what I was suggesting”, so we will be clear about your conjecture.
Let me remark that one thing we can do is that any time we compare 2 rocks we put them back in a random 1/2-1/2 position, and then the question is how we can guarantee, with such a small-depth network, that the distribution is very close to uniform.
11. Polymath proposal (Tim Gowers): Randomized Parallel Sorting Algorithm | The polymath blog Says:
March 2, 2013 at 4:41 pm | Reply
[...] sorting network for $n$ numbers of depth $O(\log N)$ rounds where in each runs $n/2$. Tim Gowers proposes to find collectively a randomized sorting with the same [...]
12. Gerhard Paseman Says:
March 2, 2013 at 11:55 pm | Reply
I want to see how your idea would work on a simple input, say [n,1,2,3,4,...,n-1]. My concern is that the sorting network will do a lot of unsorting (say [d,1,2,3,...,n,...]) and not make any provable progress in a logarithmic number of rounds.
• gowers Says:
March 3, 2013 at 12:25 am
That’s a nice example to think about. It’s not necessarily a major problem if d gets moved to the beginning, because it will have a strong tendency to move back. But of course, that will move other numbers out of place, and exactly how things pan out is not clear.
13. gowers Says:
March 3, 2013 at 10:32 am | Reply
I’ve thought about Gerhard’s example now and am in a position to explain why I don’t find it too threatening. What I’m going to do is consider a highly idealized and unrealistic model of what the randomized network would do, which to me makes it believable that an actual randomized network would have a chance of success.
To describe the idealized model, let’s assume that $n$ is a power of 2. For convenience I’ll call the numbers 1 to $2^k$ and the places 0 to $2^k-1$. The initial arrangement is $2^k,1,2,3,\dots,2^k-1$. The worry about this arrangement is that almost all the numbers are in order, so very few comparisons will actually change anything. Meanwhile, those few comparisons that do change anything seem to be messing things up.
However, these two worries in a sense cancel each other out: the messing up is necessary to get all of $1,2,\dots,2^k-1$ to shift to the left.
Let’s now do the following sequence of rounds. We will begin by comparing place $r$ with place $2^{k-1}+r$ for each $r=0,1,\dots,2^{k-1}-1$. Note that these are precisely those $r$ for which $\epsilon_{k-1}=0$ if we represent them as $\sum_{i=0}^{k-1}\epsilon_i2^i$ with each $\epsilon_i\in\{0,1\}$.
Next, we compare place $r$ with place $2^{k-2}+r$ for each $r$ such that $\epsilon_{k-2}=0$ (that is, the first and third quarters of all the $r$). And we continue in this way.
After the first round, everything is as it was before, except that now we have $2^k$ in place $2^{k-1}$ and $2^{k-1}$ in place 0. Now we have two copies of the original ordering, one in the left half and one in the right half. What’s more, in each half we do the sequence of rounds corresponding to $k-1$. Therefore, by induction after $k$ rounds everything is sorted.
What is happening here is that as we proceed, we replace one sequence that is messed up at a distance scale $d$ by a concatenation of two sequences that are messed up in precisely the same way at a distance scale $d/2$. In a sense, this concatenation is messier than the original sequence, but because we can operate on it in parallel, this does not matter.
I think it would be very interesting to try to analyse what a more randomized set of rounds would do with this initial ordering.
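For anyone who wants to experiment, here is a direct transcription of that idealized scheme (my own sketch; places are 0-indexed, and this particular deterministic network is of course only claimed to handle this particular input):

```python
def idealized_rounds(a):
    # at scale m = n/2, n/4, ..., 1, compare place r with place r + m
    # for every r whose binary digit at position log2(m) is 0
    n = len(a)
    m = n // 2
    while m >= 1:
        for r in range(n - m):
            if (r & m) == 0 and a[r] > a[r + m]:
                a[r], a[r + m] = a[r + m], a[r]
        m //= 2
    return a

k = 4
a = [2 ** k] + list(range(1, 2 ** k))   # the input 2^k, 1, 2, ..., 2^k - 1
print(idealized_rounds(a))               # sorted after k rounds, as claimed
```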
14. Alec Edgington Says:
March 3, 2013 at 3:04 pm | Reply
Even if the goal is to find (or prove the existence of) small-depth circuits that sort all initial inputs (and not merely ‘almost all’), I wonder if it might be useful to think about the more general case. A proof that depended upon following some properties of the input distribution through the rounds might be easier to make work in that case. On the other hand, it might be possible to control some properties (e.g. some measure of sortedness) possessed by all potential inputs to each round.
I also wonder if we are after a (randomized construction of a) circuit that works with high probability, or just with positive probability? Or are these somehow equivalent?
15. Gil Kalai Says:
March 3, 2013 at 4:36 pm | Reply
As I heard from Michael Ben-Or, there is a randomized construction by Tom Leighton of a sorting network with $7.2\log n$ rounds that sorts all orderings except an exponentially small fraction of them. (The problem with repeating is that it seems that a logarithmic number of repeats is needed.)
• Gil Kalai Says:
March 3, 2013 at 5:11 pm
The book by Leighton, “Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes,” devotes a chapter to this construction. See also this paper by Leighton and Plaxton: http://epubs.siam.org/doi/pdf/10.1137/S0097539794268406
• gowers Says:
March 3, 2013 at 9:51 pm
Do you know why a log number of repeats is needed? One might think that if after a first round the number of elements that are out of place was $n^{1-c}$ (with similar statements at all distance scales), then at the next repetition the probability that tidying up fails becomes very small after only a constant number of rounds.
Just to be clear, what I’m suggesting is that one does a randomized sorting procedure with the distance scales shrinking exponentially, and when one reaches a distance scale of 1, one repeats the entire process.
16. Gil Kalai Says:
March 4, 2013 at 1:33 am | Reply
I suppose roughly the reason is that for $n^{-(1-c)t}$ to be small you need $t$ in the order of $\log n$.
• gowers Says:
March 4, 2013 at 10:43 am
Hmm … it depends what we mean by small. I was thinking of “small” as meaning $n^{-C}$, so that for $n^{-ct}$ to be small we just need $t$ to be constant. But I can see that it isn’t obvious that that will work for all sequences rather than just almost all.
17. gowers Says:
March 4, 2013 at 10:48 am | Reply
It may not be feasible, but I quite like the idea of attempting something along the following lines. Given some kind of random comparator network, try to design a permutation, depending on the network, that will not be correctly sorted. With the help of this process, try to develop a sufficient condition for the permutation not to exist, and then try to show that that condition holds with positive probability.
To obtain the sufficient condition, what one might try to do is find a number of necessary conditions (saying things like that the network must “mix well” at every distance scale), and hope that eventually you have enough necessary conditions that taken together they are sufficient.
• Alec Edgington Says:
March 5, 2013 at 7:50 am
I wonder what lower bounds are known for the constant $C$. There’s a trivial lower bound of $2 / \log 2$ (from the fact that each round can have at most $n/2$ comparisons, and if $K$ is the total number of comparisons then we must have $2^K \geq n!$, and Stirling’s formula). Can anything more be said in this direction? What are the obstacles to creating such an efficient circuit?
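A quick numerical illustration of that bound (a sketch of mine, not from the comment): with at most $n/2$ comparisons per round and $2^K \geq n!$, the depth is at least $\log_2(n!)/(n/2)$, whose ratio to $\log n$ tends to $2/\log 2 \approx 2.885$.

```python
import math

# trivial depth lower bound: 2^K >= n! with at most n/2 comparisons per round
for n in (10**3, 10**6, 10**9, 10**12):
    depth = (math.lgamma(n + 1) / math.log(2)) / (n / 2)  # log2(n!) / (n/2)
    print(n, round(depth, 1), round(depth / math.log(n), 3))  # ratio -> 2/log 2
```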
18. gowers Says:
March 4, 2013 at 11:10 am | Reply
Earlier, Gil asked for a precise statement of “what I am suggesting”. I’m not sure I can provide that yet, because there are a number of precise statements that I would be happy with. However, let me give a precise statement that I would be delighted to prove, but that might easily be false. If it is false, then I would want to find a modification of it that is true.
The precise statement is as follows. Let me define an $m$-round of type 0 to be a set of comparisons of the following kind. First, partition $\mathbb{Z}$ into intervals of the form $[2m(r-1), 2mr-1]$, then partition each such interval into pairs $(x,x+m)$ (which can be done in exactly one way), and finally, do a comparison for all such pairs for which both $x$ and $x+m$ are between 1 and $n$. Define an $m$-round of type 1 to be the same except that the initial partition is into intervals of the form $[2m(r-1)+m,2mr-1+m]$. If we do an $m$-round of both types, then every place $x$ between 1 and $n$ will be compared with both place $x-m$ and place $x+m$, provided both those places are between 1 and $n$.
Now I propose the following comparator network. We choose a random sequence of integers $m_0,m_1,\dots,m_t$, where $m_s$ is uniformly distributed in the interval (of integers) $[1,ne^{-cs}]$ and $t$ is maximal such that $ne^{-ct}\geq 1$. Then at step $s$ we do an $m_s$ round of type 0 followed by an $m_s$ round of type 1.
When we have finished this, we repeat the process once.
My probably false suggestion is that for some fixed $c>0$ and with reasonable probability this defines a sorting network.
Incidentally, there is a detail in the above suggestion that reflects a point I have only recently realized, which is that if one is devising a random sorting network of roughly this kind — that is, with increasingly local comparisons — then it is important that there should be correlations between what happens in two different places at the same small distance scale. If not, then when we get to small distance scales, there are many opportunities for something bad to happen. That is, it might be that each little piece of network has a quite high probability of behaving well, but that if you choose many little pieces independently, then some of them will misbehave. The use of $m$-rounds above is an attempt to get round this problem by making the network do essentially the same thing everywhere at each distance scale. (It doesn’t do this exactly, because of the ends of the intervals of length $2m$, but I’m hoping that that minor variation is not too damaging.)
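Here is a rough sketch of how one might code up the proposal above and experiment with it; the function names, the choice of $c$, and the success check are all mine, and the sketch makes no claim about whether the suggestion is true.

```python
import math
import random

def m_round(a, m, offset):
    """One m-round: inside each block of length 2m (blocks starting at
    `offset`: 0 for type 0, m for type 1), compare-exchange pairs (x, x+m)."""
    n = len(a)
    for x in range(n - m):
        if (x - offset) % (2 * m) < m and a[x] > a[x + m]:
            a[x], a[x + m] = a[x + m], a[x]

def random_network(a, c=0.5, repeats=2, rng=random):
    """m_s uniform in [1, n*e^(-cs)]; one m_s-round of each type per step s;
    the whole shrinking schedule is then repeated."""
    n = len(a)
    for _ in range(repeats):
        s = 0
        while n * math.exp(-c * s) >= 1:
            m = rng.randint(1, max(1, int(n * math.exp(-c * s))))
            m_round(a, m, 0)    # type 0
            m_round(a, m, m)    # type 1
            s += 1

# crude experiment: how often does this sort a random permutation?
n, wins = 256, 0
for trial in range(100):
    a = list(range(n))
    random.shuffle(a)
    random_network(a)
    wins += (a == sorted(a))
print(wins, "of 100 random permutations sorted")
```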
19. gowers Says:
March 4, 2013 at 3:41 pm | Reply
I now think that the suggestion in the previous comment is unlikely to give a depth better than $\log n \log\log n$. Here is why. The sum of the GP $1, e^{-c}, e^{-2c},\dots$ is approximately $c^{-1}$. So if an element starts out at a distance $\theta n$ from where it should be and is not moved at any time during the first approximately $r=2c^{-1}\log(c^{-1})$ rounds, then at the end of all rounds it will still be at a distance proportional to $n$ from where it should be. If we start with a random permutation, then the chances of this happening look to me like at least $10^{-2^r}$, since we only care about the values in at most $2^r$ places.
So I would expect that after the first round, the proportion of points that are a long way from where they should be will be at least $10^{-2^r}$, which is an absolute constant (depending on the absolute constant $c$ that we choose at the beginning).
After the next round, something similar can be said, except that this time the probability that a point in one of the $2^r$ places is a bad point has gone down to more like $10^{-2^r}$. But that still gives us a probability of $(10^{-2^r})^{2^r}=10^{-2^{2r}}$ that the point will end up way out of place. And after $k$ repetitions, we get to $10^{-2^{kr}}$. For that to be smaller than $1/n$ we need $k\approx\log\log n$.
• gowers Says:
March 4, 2013 at 3:44 pm
To be clear, when I say that the suggestion in the above comment is unlikely to give a depth better than $\log n\log\log n$, what I actually mean is that the suggestion itself looks false, and to rescue it I think one would need to repeat at least $\log\log n$ times rather than just once.
20. Weekly links for March 4 | God plays dice Says:
March 5, 2013 at 2:46 am | Reply
[...] Gowers suggests a polymath project based on parallel sorting; Alexander Holroyd has a gallery of pictures of sorting [...]
21. mixedmath Says:
March 19, 2013 at 10:04 am | Reply
I’m a bit late to this post, but the question sounds interesting and I would support it as well. Unfortunately, I don’t know much about it either.
- David Lowry-Duda
22. mixedmath Says:
March 19, 2013 at 10:17 am | Reply
I’d just like to throw something out there, so I beg forgiveness if this is a bit nonsensical. But I was reading up on the AKS network sort, and I read that a key idea in the method is to construct an expander graph. As a student of modular forms, I’ve heard that there is a way to construct expander graphs that is intimately connected to modular forms (somehow). While I don’t know much about this yet, I wonder if there is something to be gained from examining that connection a bit more closely.
23. Gerhard Paseman Says:
March 24, 2013 at 7:35 pm | Reply
Here is a test comment on another post, as requested.
24. michelledelcourt Says:
March 26, 2013 at 1:52 pm | Reply
25. vznvzn Says:
March 31, 2013 at 6:07 pm | Reply
“A final remark is that something that contributed a great deal to the health of the discussion about the Erdős discrepancy problem was the possibility of doing computer experiments. That would apply to some extent here too: one could devise an algorithm along the above lines and just observe experimentally how it does, whether points catch up after getting left behind, etc.”
huge fan of empirical/experimental approaches here too & think they are much underutilized. an interesting strategy along these lines is something like what is called a "magnification lemma" seen in Jukna's new book on circuit theory. basically it is a way of translating or scaling problems of size N to size log(N), which might make the results more accessible to computer experiments.
have recently collected various refs on experimental math on my blog home page links & am planning to do a post on the subject. there's also an excellent article on the Simons web site profiling Zeilberger.
suggest that future polymath projects be launched with an eye toward supporting empirical attacks to support building technical intuition.
also, have been thinking that nobody has addressed this issue much, but basically polymath is an attempt to find “viral-like” projects which have high popularity. but there is currently no good way to sort out this popularity. so, suggest a voting system be built up so that projects can gain “critical mass” through measuring interest via votes. [similar to stackexchange mechanisms].
think that this difficulty in achieving “critical mass” is one of the key issues facing further polymath success. which by the way to my eye, speaking honestly/critically but hopefully also constructively, does not seem to be “clicking” so much since the original/initial gowers project.
26. vznvzn Says:
March 31, 2013 at 6:17 pm | Reply
one other point. it would be helpful if someone put together a survey of the status of polymath problems launched since the original gowers Hales-Jewett DHJ problem; it's hard to keep track of them all. analyzing in particular not so much the results but the crowd level of participation and enthusiasm for each, which imho is a key ulterior factor of project success, a social-cultural aspect which so far seems largely unaddressed by participants.
27. mixedmath Says:
April 17, 2013 at 3:59 pm | Reply
I’d like to say something else about expander graphs and that approach. I recently heard a talk by Dr. Cristina Ballantine at ICERM where she explained that she could construct expander bigraphs, i.e. low degree highly connected biregular bipartite graphs.
• mixedmath Says:
April 17, 2013 at 4:04 pm
I did not mean to end that post already – so I continue in a comment to myself.
From my (somewhat limited) experience with computer science algorithms, bigraphs made things much better than just highly connected graphs. Here, having a graph be biregularly bipartite (meaning that the vertices are of two families; in each, all the vertices have the same degree – but the two families can have different degrees from each other – and are directly connected only to the other family) seems like it might behave well with parallelizing the work, maybe.
Maybe this is something to consider?
http://math.stackexchange.com/questions/284600/trigonometric-substitution-arc-length/284610
# Trigonometric Substitution - Arc Length
Good day to everyone,
I'm not as good at maths as I wish, but I'm doing my best to improve. So please be patient, and thanks in advance for any detailed clue.
Description of the problem:
The arc is described by the parabola $\ x=y^2$, and we have the integral for the length of the arc in Leibniz notation. The term $\ dx/dy$ is $\ 2y$ (in this case $\ x=g(y)$).
In the integral, there is a step I cannot understand. They make the following substitution: $\ y=\tfrac{1}{2}\tan\theta$.
Question: Why $\ \tan\theta$?
I still cannot understand how to choose "the right trigonometric substitution".
## 2 Answers
Fabian's post seems fine, but to clear up your confusion about which trig sub to use, I'll mention three cases.
When you see integrals of the form $$\sqrt{a^2+x^2}$$
where $a \in \mathbb R$, the substitution $x = a \tan \theta$ usually cleans up the integral quite nicely and makes it easier to work with. Note that the new integral may require another common integration method (such as integration by parts.)
Similarly, if you see integrals of the form $$\sqrt{a^2-x^2}$$
try the substitution $x = a \sin \theta$. For integrals of the form $$\sqrt{x^2-a^2}$$ try the substitution $x = a \sec \theta$.
Addendum Based on Stewart's text, the problem is to find the length of the arc of the parabola $y^2 = x$ from $(0,0)$ to $(1,1)$.
Arc length is given by $$L = \int_a^b \sqrt{1 + \left(\frac{dx}{dy}\right)^2} \ dy$$
We are given $$x = y^2 \implies \frac{dx}{dy} = 2y$$
$$L = \int_a^b \sqrt{1 + \left(\frac{dx}{dy}\right)^2} \ dy = \int_0^1 \sqrt{1 + (2y)^2} \ dy = \int_0^1 \sqrt{1 + 4y^2} \ dy$$
$$\int_0^1 \sqrt{1 + 4y^2} \ dy = \int_0^1 \sqrt{4\left(\frac{1}{4} + y^2\right)}\ dy = \int_0^1 2\sqrt{\left(\frac{1}{4} + y^2\right)} \ dy$$
The key to this is to note that the part involving $x^2$ cannot have a constant out front - so we need to factor and be clever to get it in a form that matches the three templates we know.
So, in general, if you see $$\sqrt{a^2 + kx^2} \quad a,k \in \mathbb R$$
Factor out $k$ to be left with $\sqrt{k\left(\frac{a^2}{k} + x^2 \right)}.$ Most of the time in nice examples (I imagine most of those picked out by Stewart are), $k$ will be a perfect square, which means we can factor it out of the square root as I did in your example. This helps clean our integral up nicely!
Now since $\left(\frac{1}{2}\right)^2 = \frac{1}4$, it should make sense that this follows the $a^2 + u^2$ template above. Carry out the substitution of $y = \frac{ \tan \theta}{2} \implies dy = \frac{\sec^2 \theta}{2} \ d \theta$. You end up needing to deal with
$$\frac{1}{2}\int_{y = 0}^{y=1} \sec^3 \theta \ d\theta$$
This is a common integration by parts example done in a Calc 2 course, in which many proofs of it can be found online. I've answered a question involving this integral here.
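If it helps to see the two routes agree, here is a small sympy check (my own sketch, not from the original answer): the direct arc-length integral and the $\sec^3\theta$ integral obtained from $y = \tfrac12 \tan\theta$ give the same number.

```python
import sympy as sp

y, theta = sp.symbols('y theta')

# direct arc length of x = y^2 from (0,0) to (1,1)
L_direct = sp.integrate(sp.sqrt(1 + (2 * y) ** 2), (y, 0, 1))

# after y = (1/2) tan(theta), the integrand becomes (1/2) sec^3(theta)
L_sub = sp.integrate(sp.Rational(1, 2) * sp.sec(theta) ** 3,
                     (theta, 0, sp.atan(2)))

print(sp.N(L_direct), sp.N(L_sub))   # both ~ 1.4789 (= sqrt(5)/2 + asinh(2)/4)
```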
Thanks Joe!!! You cleared things up for me in another way, thanks for that. But in the book they use that substitution, literally $\ y=\tfrac12\tan\theta$, and I don't understand where it comes from or what was behind that substitution. Thanks so much for any clue. – David Alejandro Jan 22 at 23:40
What book are you using? This seems to be something that would be easier if you posted all of the details in the problem/proof, or gave me an idea of where to look in a book if I can find it online, so I can better help you. I assume the $\tan \theta$ substitution comes from the definition of the arc length involving an integral as Fabian pointed out. Are you sure the $\frac{1}{2}$ is not to be confused with the differential $dx$ in the integral? – Joe Jan 22 at 23:42
ok, Thanks so much for being so nice. The book is James Stewart Calculus 7e, if you want to find it to your personal use, you can find it by this web address gen.lib.rus.ec. The page of the book is: 564. Thanks so much in advance. – David Alejandro Jan 22 at 23:51
I actually have the book on hand - it's what I used for my calculus courses. I'll give it a look in one minute. – Joe Jan 22 at 23:55
I think the addendum should help clear up your confusion. Let me know if something is unclear. – Joe Jan 23 at 0:24
I guess you have misunderstood something. The arclength is given by $$s=\int_0^{x_0} \sqrt{1+ \left( \frac{dy}{dx}\right)^{2}} dx$$ and not by $\int (dy/dx) dx$ as I read from your post. For the parabola, this expression equals $$s=\int_0^{x_0} \sqrt{1+ 4 x^2} dx.$$ Substitution $x=\tfrac12 \tan\theta$ yields $$s= \int_0^{\arctan(2x_0)} \frac{\sqrt{1+\tan^2\theta}}{2\cos^2\theta} d\theta = \int_0^{\arctan(2x_0)} \frac{d\theta}{2\cos^3 \theta}.$$
An easier substitution is $x=\tfrac12\sinh\theta$ which yields $$s=\frac12\int_0^{\text{asinh}( 2 x_0)} \cosh^2 \theta \,d\theta.$$
Thanks for answering so fast, but I'm using the integral of arclength for the case $\ x=g(y)$. Maybe I didn't make myself clear. But I still cannot understand how to choose the "right trigonometric substitution". – David Alejandro Jan 22 at 23:27
http://mathoverflow.net/questions/119448/self-complementary-cartesian-products
## Self complementary cartesian products
Given two graphs $G$ and $H$ is there a nice way to check whether the cartesian product $G\Box H$ is self complementary without directly computing its complement and searching for isomorphism? For example, how can one show that $K_3\Box K_3$ is self complementary?
This looks very much like a homework question. One could show $K_3\Box K_3$ was self complementary by a couple of drawings. – Chris Godsil Jan 21 at 12:56
@Chris Godsil: Yes, that's why I said without computing the complement; maybe using some arguments on the degrees of vertices and using the fact that it is a cartesian product. And this is not homework. – pritam Jan 22 at 6:00
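Not an answer to the structural question, but for what it's worth the $K_3\Box K_3$ case is easy to confirm by the direct check the question hopes to avoid; a quick networkx sketch (my own):

```python
import networkx as nx

G = nx.complete_graph(3)
P = nx.cartesian_product(G, G)      # K3 box K3: the 3x3 rook's graph
C = nx.complement(P)

print(P.number_of_edges(), C.number_of_edges())   # 18 18 (both 4-regular)
print(nx.is_isomorphic(P, C))                     # True: self-complementary
```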
http://mathoverflow.net/revisions/12418/list
There is a simple argument by comparing to the unit ball of $\ell_1^n$.
Let $K$ be the unit ball of $\ell_1^n$, i.e. the set of points with sum of coordinates (in absolute value) bounded by $1$. Then $K$ is the disjoint union of $2^n$ simplices (one per octant), and each simplex has volume $1/n!$.
Now the Euclidean unit ball is contained in $\sqrt{n}K$, so its volume is at most $n^{n/2}2^n/n!$. This tends to $0$ and behaves like $(c/\sqrt{n})^n$ for some constant $c$.
The value is sharp up to the value of $c$, as shown by the dual argument : the unit ball contains the cube $[-1/\sqrt{n},1/\sqrt{n}]^n$.
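As a sanity check of the bound (a sketch of mine, not part of the original answer), one can compare $n^{n/2}2^n/n!$ with the exact Euclidean ball volume $\pi^{n/2}/\Gamma(n/2+1)$:

```python
import math

# bound n^(n/2) * 2^n / n!  versus exact ball volume pi^(n/2) / Gamma(n/2 + 1)
for n in (2, 5, 10, 20, 50):
    bound = n ** (n / 2) * 2 ** n / math.factorial(n)
    exact = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    print(n, exact, bound, exact <= bound)
```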
http://mathhelpforum.com/advanced-algebra/56386-isomorphisms.html
# Thread:
1. ## isomorphisms
How does one show that a group $G$ is isomorphic to the external direct product of $H$ and $K$ if $G=HK$ and $H$ and $K$ share only the identity element?
Also, how does one apply this to proving that U(15) is isomorphic to the external direct product of U(3) and U(5), and how would you show whether U(15) is cyclic or not?
2. Originally Posted by morganfor
How does one show that a group $G$ is isomorphic to the external direct product of $H$ and $K$ if $G=HK$ and $H$ and $K$ share only the identity element?
Also, how does one apply this to proving that U(15) is isomorphic to the external direct product of U(3) and U(5), and how would you show whether U(15) is cyclic or not?
Are you asking to prove if $H,K \triangleleft G$ with $G = HK$ and $H\cap K = \{ e \}$ then $G\simeq H\times K$?
3. yes, but H and K are not normal in G
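A quick computational sketch (mine, not part of the thread) for the U(15) part: the CRT map $a\mapsto(a\bmod 3,\,a\bmod 5)$ gives the isomorphism, and the maximal element order shows U(15) is not cyclic.

```python
from math import gcd
from itertools import product

def U(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a, n):
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

u15 = U(15)
print(len(u15), max(order(a, 15) for a in u15))  # 8 4: no element of order 8,
                                                 # so U(15) is not cyclic

# the CRT map a -> (a mod 3, a mod 5) is a bijection U(15) -> U(3) x U(5)
print({(a % 3, a % 5) for a in u15} == set(product(U(3), U(5))))  # True
```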
http://www.wikiwaves.org/Category:Symmetry_in_Two_Dimensions
# Introduction
If we consider the problem of an incident wave from the left on a body, or group of bodies, which is symmetric about $x=0$, then we can decompose the problem into a symmetric and an anti-symmetric problem. The first problem can be thought of as having Neumann boundary conditions at $x=0$ and the second as having Dirichlet boundary conditions at $x=0$. The solution to the original problem is then found by adding these two problems together thanks to the superposition principle.
# Original Problem
We consider here a body, or a group of bodies, which is geometrically symmetric about $x=0$. An incident wave coming from the left will be reflected on the left and transmitted on the right side of the domain. The body extends up to $|x|=L$ (where we will place a matching boundary), and the water has finite depth $h$.
# Symmetric solution
We consider here a solution where the potential and the corresponding velocity are symmetric about the plane $x=0$. The symmetric problem consists of two identical incident waves coming from each side of the symmetric body. Their horizontal spatial dependence is respectively $\exp\,(-k_{0}(x+L))$ and $\exp\,(k_{0}(x-L))$, and they both have unit amplitude and the same phase. Two waves are emitted at the edges of the body and propagate away from it on each side, again with the same amplitude and phase.
The symmetry of the velocity field implies that the normal velocity is zero on the symmetry plane at $x=0$. We need to solve the problem on the half-plane $x<0$ only and then we can build the solution on the other half-plane simply by symmetry.
The symmetric solution on the left half plane can now be obtained from the solution with a semi-infinite body. To that extent, we first translate horizontally the origin for the open-water region and we write the symmetric potential $\, \phi_s$ as
$\phi_s(x,z)=e^{-k_{0}(x+L)}\phi_{0}\left( z\right) + \sum_{m=0}^{\infty}a_{m}e^{k_{m}(x+L)}\phi_{m}(z), \;\;x<-L$
In the region under the body, for $-L<x<0$, the potential satisfies a Neumann condition at $x=0$. This will change the modes of the semi-infinite solution into some even functions of $x$.
The matching conditions lead to a set of equations very similar to the semi-infinite one.
# Anti-symmetric solution
We consider now a solution where the corresponding velocity is anti-symmetric about the plane $x=0$. Or equivalently the potential is an odd function of the horizontal direction. The anti-symmetric problem consists of two incident waves coming from each side of the symmetric body, with the same amplitude but out of phase. Their horizontal spatial dependence is respectively $\exp\,(-k_{0}(x+L))$ and $-\exp\,(k_{0}(x-L))$. Two waves are emitted at the edges of the body and propagate away from it on each side, again out of phase.
The fact that the potential is an odd function implies that it is equal to zero on the symmetry plane at $x=0$. We need to solve the problem on the half-plane $x<0$ only and then we can build the solution on the other half-plane simply by anti-symmetry.
The anti-symmetric solution on the left half plane can here also be obtained from the solution with a semi-infinite body. The potential in the open-water region is identical as the one in the symmetric solution.
In the region under the body, for $-L<x<0$, the anti-symmetric potential $\, \phi_a$ now satisfies a Dirichlet condition at $x=0$. This again changes the modes of the semi-infinite solution into odd functions of $x$.
The matching conditions lead to a set of equations very similar to the semi-infinite one.
# Solution to the original problem
The solution for the symmetric body with an incident wave coming from the left can be evaluated from the previous symmetric and anti-symmetric solutions by using the principle of superposition. The velocity potential in the left open-water region, for $x<0$, is simply
$\phi(x,z) = \frac{1}{2} \left( \phi_s(x,z) + \phi_a(x,z) \right)$
whereas in the right-open water region, for $x>0$
$\phi(x,z) = \frac{1}{2} \left( \phi_s(x,z) + \phi_a(x,z) \right) = \frac{1}{2} \left( \phi_s(-x,z) - \phi_a(-x,z) \right)$
In the last expression, the first equality comes from the superposition principle and the second one is obtained considering that the symmetric potential is an even function of $x$ whereas the anti-symmetric potential is an odd function.
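A toy numerical illustration of this superposition (my own sketch, with the $z$ dependence suppressed): any field splits into even and odd parts about $x=0$, the odd part vanishing there.

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 201)                 # grid symmetric about x = 0
phi = np.exp(-0.3 * x) * np.cos(2.0 * x + 0.7)  # an arbitrary test field

phi_s = phi + phi[::-1]   # even part: phi(x) + phi(-x)
phi_a = phi - phi[::-1]   # odd part:  phi(x) - phi(-x)

print(np.allclose(0.5 * (phi_s + phi_a), phi))  # True: superposition recovers phi
print(abs(phi_a[100]) < 1e-12)                  # odd part is zero at x = 0
```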
# Application
This theory of symmetry is used for the case of a finite dock (see Eigenfunction Matching for a Finite Dock) and for two identical docks (see Two Identical Docks using Symmetry).
http://mathoverflow.net/questions/37849/closedness-of-finite-dimensional-subspaces
## Closedness of finite-dimensional subspaces
Is the (algebraic) span of a finite set of vectors in a Hausdorff topological vector space over a complete field always closed?
I suspect yes, but I can't come up with a proof, and it seems like locally convex might be needed to get this.
Yes, this is true. I assigned it as a HW problem in a course last spring and a student solved it. I asked him to type it up, but apparently I don't have it. :( Anyway, sure, the proof that works over $\mathbb{R}$ (found e.g. in Rudin's Functional Analysis) goes over verbatim. – Pete L. Clark Sep 6 2010 at 8:48
## 2 Answers
This holds indeed for complete fields: see Theorem 2, Section I.2.3, of Bourbaki's "Espaces Vectoriels Topologiques".
Here is the argument.
Let $K$ be a (not necessarily commutative) field equipped with a complete nontrivial absolute value $x\mapsto|x|$, let $n$ be a positive integer, let $\tau$ be a Hausdorff vector space topology on $K^n$, and let $\pi$ be the product topology on $K^n$.
THEOREM $\tau=\pi$.
REMINDER A topological group $G$ is Hausdorff iff {1} is closed. [Proof: {1} is closed $\Rightarrow$ the diagonal of $G\times G$ is closed (because it's the inverse image of {1} under $(x,y)\mapsto xy^{-1}$) $\Rightarrow$ $G$ is Hausdorff.]
LEMMA The Theorem holds for $n=1$.
The Lemma implies the Theorem. We argue by induction on $n$. The continuity of the identity from $K^n_\pi$ to $K^n_\tau$ (obvious notation) is clear (and doesn't use the Lemma). To prove the continuity of the identity from $K^n_\tau$ to $K^n_\pi$, it suffices to prove the continuity of an arbitrary nonzero linear form $f$ from $K^n_\tau$ to $K_\pi$. By induction hypothesis, the kernel of $f$ is closed, and the Theorem follows from the Reminder and the Lemma.
Proof of the Lemma. We'll use several times the fact that $K^\times$ contains elements of arbitrary large and arbitrary small absolute value. As already observed, we have $\tau\subset\pi$. If $x$ is in $K^\times$, write $B_x$ for the open ball of radius $|x|$ and center 0 in $K$. Let $a$ be in $K^\times$, and let $\tau_0$ be the set of those $U$ such that $0\in U\in\tau$.
It suffices to check that $B_a$ contains some $U$ in $\tau_0$.
We can find a $b$ in $K^\times$ and a $V$ in $\tau_0$ such that $a$ is not in $B_bV$, and then a $c$ in $K$ with $|c|>1$ and a $W$ in $\tau_0$ such that $a$ is not in $B_cW$. Then $U:=c^{-1}W$ does the job.
Umm, I wouldn't have known how to prove this result, but I don't see how it addresses my question, either. – Ricky Demer Sep 8 2010 at 7:11
You asked if a finite dimensional space F in a Hausdorff topological vector space over a complete field is always closed. The result I prove (following Bourbaki) shows that F (equipped with the induced topology) is complete, and thus closed. – Pierre-Yves Gaillard Sep 8 2010 at 7:32
How do you show that every complete field has an absolute value that induces its topology? – Ricky Demer Sep 8 2010 at 8:30
I'm afraid I misunderstood your question. I took it for granted that you considered only fields complete with respect to a nontrivial absolute value. Sorry. [I know nothing about other kinds of complete fields.] – Pierre-Yves Gaillard Sep 8 2010 at 9:01
Your answer is still more general (and so better) than Robin's. [If you don't know what another kind of complete field would be, see Definition, Completeness, and Example 3 at en.wikipedia.org/wiki/Uniform_space ] – Ricky Demer Sep 8 2010 at 9:46
For real/complex vector spaces, this is Theorem 1.21 in Rudin's Functional Analysis (2nd ed.). I believe the proof works for any complete field, but haven't checked in detail.
This holds indeed for complete fields: see Theorem 2, Section I.2.3, of Bourbaki's "Espaces Vectoriels Topologiques". – Pierre-Yves Gaillard Sep 6 2010 at 8:47
Very interesting. Clearly Bourbaki was the right place! – Pietro Majer Sep 6 2010 at 8:51
http://mathhelpforum.com/advanced-statistics/165776-poisson-processes.html
# Thread:
1. ## Poisson processes
A radioactive source emits particles according to a Poisson process with rate $\lambda=2$ particles per minute. What is the expected time of emission of the second particle given that three particles have been emitted in the first four minutes?
2. Originally Posted by adnaps1
A radioactive source emits particles according to a Poisson process with rate $\lambda=2$ particles per minute. What is the expected time of emission of the second particle given that three particles have been emitted in the first four minutes?
Use Bayes' theorem to find the conditional density $P(t_2 \mid N(4)=3)$; then the expected time of emission of the second particle is:
$\displaystyle \int_0^4 t_2\, P(t_2 \mid N(4)=3)\ dt_2$
CB
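For what it's worth, a standard route (my addition, not CB's): given $N(4)=3$, the arrival times are distributed as the order statistics of three iid Uniform(0,4) variables, so $E[t_2 \mid N(4)=3] = 4\cdot\frac{2}{3+1} = 2$ minutes, independently of $\lambda$. A quick Monte Carlo check:

```python
import random

random.seed(0)
trials = 200_000
total = 0.0
for _ in range(trials):
    arrivals = sorted(random.uniform(0, 4) for _ in range(3))
    total += arrivals[1]          # the second arrival time
print(total / trials)             # ~ 2.0 minutes
```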
http://cms.math.ca/Events/winter12/abs/egs
2012 CMS Winter Meeting
Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012
Enumerative Geometry and String Theory
Org: Keshav Dasgupta and Johannes Walcher (McGill)
[PDF]
PER BERGLUND, University of New Hampshire
Global Embeddings for Branes at Toric Singularities and Moduli Stabilization [PDF]
We discuss recent work on the realization of gauge theories from D3-branes at toric singularities in compact Calabi-Yau manifolds. In particular, we show how a new type of Euclidean D3-brane instanton supported on a non-Spin four-cycle in type IIB orientifolds can give rise to a non-perturbative contribution to the superpotential. This allows the Kähler moduli to be stabilized consistent with the assumption that the four-cycle vanishes for the toric singularity, while the overall volume of the Calabi-Yau manifold is exponentially large.
VINCENT BOUCHARD, University of Alberta
Topological recursion and double Hurwitz numbers [PDF]
Double Hurwitz numbers count covers of the Riemann sphere by genus $g$ Riemann surfaces with arbitrary ramification over $0$ and $\infty$, and simple ramification elsewhere. We show that generating functions for certain classes of double Hurwitz numbers satisfy the Eynard-Orantin topological recursion, which completely determines them recursively through complex analysis on particularly simple spectral curves. We also argue that double Hurwitz numbers can be obtained in the "infinite framing" limit of Gromov-Witten invariants on certain orbifolds, in parallel to a similar limit relating simple Hurwitz numbers and Gromov-Witten invariants of $\mathbb{C}^3$. This is joint work with Dani Hernandez Serrano and Motohico Mulase.
GUILLAUME LAPORTE, Mcgill University
Monodromy of an Inhomogeneous Picard-Fuchs Equation [PDF]
The global behaviour of the normal function associated with van Geemen's family of lines on the mirror quintic is presented, as well as how the limiting value of the normal function at large complex structure is an irrational number expressible in terms of the di-logarithm.
ARNAUD LEPAGE-JUTIER, McGill
Smoothed Transitions in Higher Spin AdS Gravity [PDF]
We consider CFTs conjectured to be dual to higher spin theories of gravity in AdS${}_3$ and AdS${}_4$. Two dimensional CFTs with $W_N$ symmetry are considered in the $\lambda=0$ ($k \to \infty$) limit where they are described by continuous orbifolds. The torus partition function is computed, making reasonable assumptions, and equals that of a free field theory. We find no phase transition at temperatures of order one; the usual Hawking-Page phase transition is removed by the highly degenerate light states associated with conical defect states in the bulk. Three dimensional Chern-Simons Matter CFTs with vector-like matter are considered on $T^3$, where the dynamics is described by an effective theory for the eigenvalues of the holonomies. Likewise, we find no evidence for a Hawking-Page phase transition at large level $k$.
SHUNJI MATSUURA, McGill University
Classification of gapless topological phases [PDF]
TBA
RUXANDRA MORARU, University of Waterloo
Stable bundles on complex nilmanifolds [PDF]
Let $G$ be a connected, simply connected nilpotent Lie group, and let $\Gamma \subset G$ be a discrete, co-compact subgroup. The quotient manifold $\Gamma\backslash G$ is called a *nilmanifold*. If $N = \Gamma \backslash G$ is equipped with a complex structure $I$ induced by a left-invariant complex structure on $G$, then $(N,I)$ is called a *complex nilmanifold*. Other than complex tori, examples of complex nilmanifolds are given by Kodaira surfaces and Iwasawa manifolds. Moreover, although all complex nilmanifolds have holomorphically trivial canonical bundles, only complex tori admit Kaehler metrics. Nonetheless, many non-Kaehler complex nilmanifolds admit balanced metrics. In this talk, I will describe some of the interesting geometric properties that moduli spaces of stable bundles on non-Kaehler complex nilmanifolds possess.
JIHYE SEO, Physics Department, McGill University and CRM MathPhysics lab
Exactly stable non-BPS spinors in heterotic string theory on tori [PDF]
Considering SO(32) heterotic string theory compactified on tori, the stability of non-supersymmetric states is studied. A non-supersymmetric state with robust stability is constructed, and its exact stability is proven in a large region of moduli space against all the possible decay mechanisms allowed by charge conservation. Using various T-dualities, we translate various selection rules about conserved charges into simpler problems resembling partition and parity of integers.
ALISHA WISSANJI, University of Alberta, Department of Mathematical and Statistical Sciences
IIA Perspective On Cascading Gauge Theory [PDF]
We study the N=1 supersymmetric cascading gauge theory found in type IIB string theory on p regular and M fractional D3-branes at the tip of the conifold, using the T-dual type IIA description. We reproduce the supersymmetric vacuum structure of this theory, and show that the IIA analog of the non-supersymmetric state found by Kachru, Pearson and Verlinde in the IIB description is metastable in string theory, but the barrier for tunneling to the supersymmetric vacuum goes to infinity in the field theory limit. We also comment on the N=2 supersymmetric gauge theory corresponding to regular and fractional D3-branes on a near-singular K3, and clarify the origin of the cascade in this theory.
http://math.stackexchange.com/questions/94822/question-about-notation
Question about Notation
What does $\mathbb{R}∗\mathbb{R}$ mean? I'm sure this has been asked before, but I do not know how to search for notations in past questions.
I have never seen that before. Possibly the same as $\mathbb{R}\times\mathbb{R}$ as $*$ is sometimes used to denote multiplication in other settings, but that's just a guess. – Alex Becker Dec 28 '11 at 23:28
Could you tell us more about the context where you saw this notation? – Álvaro Lozano-Robledo Dec 28 '11 at 23:29
May be a free product – Norbert Dec 28 '11 at 23:30
I was reading the this section of StackExchange, and saw something that said $f: \mathbb{Z} * \mathbb{Z} \rightarrow \mathbb{Z} \times \mathbb{Z}$. – UserUsingTheInterwebs Dec 28 '11 at 23:31
Thinking back, I think Norbert is right. I'm pretty sure I've seen this used for either free product or abelian free product. – Alex Becker Dec 28 '11 at 23:33
3 Answers
In group theory, $G ∗ H$ is defined as the free product of $G$ and $H$. It is an operation that constructs a new group which contains both $G$ and $H$ as subgroups and is generated by the elements of these groups. For a definition, the following link may be of help: http://mathworld.wolfram.com/FreeProduct.html.
Hatcher uses $X*Y$ to denote the join of two spaces, that is, the quotient of $X \times Y \times I$ by the identifications $(x, y_1, 0)$ with $(x, y_2, 0)$ and $(x_1, y, 1)$ with $(x_2, y, 1)$. This is the space of all line segments joining points of $X$ with points of $Y$. See page 9 of Hatcher's book on algebraic topology for more information.
If $X$ and $Y$ are closed intervals, the cube $X \times Y \times I$ gets collapsed to a tetrahedron. In the case of $X=Y=\mathbb{R}$, I guess we get an "infinite tetrahedron".
This is not an answer to the question. Since it is not easy to search for mathematical notation, perhaps this can help to confirm where the OP saw the notation. I am making this CW - feel free to add other occurrences from math.SE which may be relevant.
Searching for "\mathbb Z \ast \mathbb Z" site:math.stackexchange.com lead me to Georges Elencwajg's comment
Dear Akhil, I don't know about the existence of a purely algebraic proof. What I know is that until a few years ago there was no purely algebraic computation of the algebraic fundamental group of the projective complex line minus three points (namely $\pi_1^{alg}(\mathbb P^1_{\mathbb C} \setminus {0,1, \infty})=\widehat{\mathbb Z\ast \mathbb Z}$).
Searching for "\mathbb Z * \mathbb Z \to \mathbb Z \times \mathbb Z" site:math.stackexchange.com gives this question:
The first is, Is $i_*: \pi_1(S^1 \vee S^1) \to \pi_1(S^1 \times S^1)$ injective? My intuition is that no, this is not injective because $\pi_1(S^1 \vee S^1) = \mathbb{Z} * \mathbb{Z}$, the free group on two generators and $\pi_1 (S^1 \times S^1) = \mathbb{Z}\times \mathbb{Z}$. However, I am not sure if this is in fact true and I am trying to figure out the best way to go about showing it.
And from an answer to the same question:
Consider a homomorphism $f: \mathbb{Z} \ast \mathbb{Z} \to \mathbb{Z} \times \mathbb{Z}$. If $a$ and $b$ are the generators of $\mathbb{Z} \ast \mathbb{Z}$ then consider what $ab$ and $ba$ map to:
$f(ab) = f(a)f(b) = f(b) f(a) = f(ba)$
In both cases it seems to denote free product.
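For a hands-on look (my own sketch), sympy can build the free group on two generators, where, unlike in $\mathbb{Z}\times\mathbb{Z}$, the generators do not commute:

```python
from sympy.combinatorics.free_groups import free_group

F, a, b = free_group("a, b")   # F is Z * Z, the free group on two generators

print(a * b == b * a)     # False: ab and ba are distinct words in Z * Z
print((a * b) ** 2)       # a*b*a*b -- no reduction to a**2*b**2
```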
http://www.physicsforums.com/showthread.php?p=4261202
## A possible Coasting Universe model vs ΛCDM
Two coasting-universe models, and a comparison of the ΛCDM as well as Rh = ct models against Type Ia supernova, GRB and CMB data. Both theories lead to the same results and fundamentals.
[Journal references below]
First model:
Rh = ct universe
"The backbone of standard cosmology is the Friedmann-Robertson-Walker solution to Einstein’s equations of general relativity (GR). In recent years, observations have largely confirmed many of the properties of this model, which is based on a partitioning of the universe’s energy density into three primary constituents: matter, radiation, and a hypothesized dark energy which, in ΛCDM, is assumed to be a cosmological constant Λ. Yet with this progress, several unpalatable coincidences (perhaps even inconsistencies) have emerged along with the successful confirmation of expected features. One of these is the observed equality of our gravitational horizon Rh(t0) with the distance ct0 light has traveled since the big bang, in terms of the current age t0 of the universe. This equality is very peculiar because it need not have occurred at all and, if it did, should only have happened once (right now) in the context of ΛCDM. In this paper, we propose an explantion for why this equality may actually be required by GR, through the application of Birkhoff’s theorem and the Weyl postulate, at least in the case of a flat spacetime. If this proposal is correct, Rh(t) should be equal to ct for all cosmic time t, not just its present value t0. Therefore models such as ΛCDM would be incomplete because they ascribe the cosmic expansion to variable conditions not consistent with this relativistic constraint. We show that this may be the reason why the observed galaxy correlation function is not consistent with the predictions of the standard model. We suggest that an Rh = ct universe is easily distinguishable from all other models at large redshift (i.e., in the early universe), where the latter all predict a rapid deceleration."
The Rh = ct Universe
http://arxiv.org/pdf/1109.5189.pdf
Fitting the Union2.1 SN Sample with the Rh = ct Universe
http://arxiv.org/pdf/1206.6289.pdf
The Rh = ct Universe Without Inflation
http://arxiv.org/pdf/1206.6527.pdf
Angular Correlation of the CMB in the Rh = ct Universe
http://arxiv.org/pdf/1207.0015.pdf
High-Z Quasars in the Rh = ct Universe
http://arxiv.org/pdf/1301.0017.pdf
The Gamma-Ray Burst Hubble Diagram and Its Cosmological Implications
http://arxiv.org/pdf/1301.0894.pdf
[Journal refs.: Monthly Notices of the Royal Astronomical Society (MNRAS) & Astronomical Journal - IOP Science]
Second model:
The model of a flat (Euclidean) expansive homogeneous and isotropic relativistic universe in the light of the general relativity, quantum mechanics, and observations
"Assuming that the relativistic universe is homogeneous and isotropic, we can unambiguously determine its model and physical properties, which correspond with the Einstein general theory of relativity (and with its two special partial solutions: Einstein special theory of relativity and Newton gravitation theory), quantum mechanics, and observations, too."
http://arxiv.org/pdf/1301.0894.pdf
[Journal ref.: Astrophysics and Space Science]
The fundamentals of the two theories:
Nice articles, going to take me a bit to go through them.
I have argued for the benefits of the coasting cosmological model in the past. Viz:
1. No need for Inflation; the horizon, density and smoothness problems that Inflation was developed to explain would not be there in the first place.
2. A natural explanation for the Age of the Universe = Hubble Time coincidence.
3. It alleviates any Age problem in the early universe.
4. The longer BBN epoch yields a higher baryon density that may explain Dark Matter as baryonic in nature (it must still be dark, in the form of IMBHs perhaps). It would leave a Deuterium problem but resolve the Lithium Problem.
5. It would explain a low power deficiency in the CMB power spectrum when compared with the LCDM prediction.
First proposed by Kolb in 1989 as a way of avoiding Inflation (A coasting cosmology), it was then taken up by an Indian team who found it a concordant alternative to LCDM (A Concordant “Freely Coasting” Cosmology); recently it has been advocated by Melia et al. in the papers cited in the OP. There are problems, but no more than the standard model that has had to invoke Inflation, DM and DE, all unconfirmed in the laboratory. Just a thought, Garth
First proposed by Kolb in 1989 as a way of avoiding Inflation (A coasting cosmology), it was then taken up by an Indian team who found it a concordant alternative to LCDM (A Concordant “Freely Coasting” Cosmology); recently it has been advocated by Melia et al. in the papers cited in the OP.
And this:
http://www.akamaiuniversity.us/PJST12_1_214.pdf
Quote by Garth There are problems, but no more than the standard model that has had to invoke Inflation, DM and DE, all unconfirmed in the laboratory.
I would say the problems are vastly, vastly greater, as the coasting model requires either extremely exotic matter or an entirely new law of gravity.
Quote by Chalnoth I would say the problems are vastly, vastly greater, as the coasting model requires either extremely exotic matter or an entirely new law of gravity.
And the $\Lambda$CDM model does not? DM?? DE??
Linear expansion requires DE with an equation of state $\omega = - \frac{1}{3}$.
Kolb suggested K matter with $p_k = - \frac{1}{3} \rho_k$
Such as might be provided by cosmic string networks.
As for an entirely new law of gravity, Self Creation Cosmology maybe??
(published in 'Horizons in World Physics, Volume 247: New Developments in Quantum Cosmology Research', Nova Science Publishers, Inc. New York)
Just a thought...
Garth
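To illustrate why $\omega = -\tfrac{1}{3}$ gives coasting (a sketch of mine, with units chosen so $\tfrac{4\pi G}{3}=1$ and $\rho(a{=}1)=1$): the acceleration equation $\ddot a/a = -(\rho + 3p)$ with $p=\omega\rho$ has vanishing right-hand side at $\omega=-\tfrac13$, so $a \propto t$.

```python
from scipy.integrate import solve_ivp

def rhs(t, y, w):
    a, adot = y
    rho = a ** (-3.0 * (1.0 + w))          # rho scales as a^(-3(1+w))
    return [adot, -a * (1.0 + 3.0 * w) * rho]

for w in (0.0, -1.0 / 3.0):                # matter vs. coasting eos
    sol = solve_ivp(rhs, (1.0, 10.0), [1.0, 1.0], args=(w,),
                    rtol=1e-9, t_eval=[10.0])
    print(w, sol.y[0][0])   # w = -1/3 gives a(10) = 10.0: linear expansion
```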
Quote by Garth And the $\Lambda$CDM model does not? DM?? DE??
Dark matter isn't exotic at all by these measures. And $\Lambda$ has been a part of GR from the beginning (though it was usually assumed to be zero).
Quote by Garth Linear expansion requires DE with an equation of state $\omega = - \frac{1}{3}$. Kolb suggested K matter with $p_k = - \frac{1}{3} \rho_k$ Such as might be provided by cosmic string networks.
Yes. But detailed observations have now disproven the possibility that cosmic strings are a large fraction of the matter density of the universe.
I don't think you can fit the coasting universe with the CMB, at all.
Dear Chalnoth and Garth! In the coasting universe, dark energy does not exist (Λ=0)! There is only dark matter, for which the only possible candidates are the weakly interacting axion, or the virtual particle pairs that are continuously generated in increasing quantities, appearing and disappearing.
Quote by petergreen Dear Chalnoth and Garth! In the coasting universe, dark energy does not exist!
Well that depends on how you obtain a coasting, i.e. strictly linear, expansion.
The empty universe, the Milne FRW model, contains no source of gravitation and therefore there is nothing to decelerate it, so it expands linearly.
If matter is present (as indeed there is) then that produces a gravitational field that would decelerate the universe. To obtain linear expansion something must counteract this effect.
In the Dirac-Milne universe equal amounts of matter and anti-matter exist and it is assumed that they repel each other. The universe instantly separates out into matter and anti-matter zones and this mutual repulsion cancels out the gravitational attraction within each zone.
Otherwise you need a medium of high negative pressure, otherwise known as dark energy, and there are several hypothetical candidates that might be its source, the cosmological constant being one of them.
I hope this helps.
Garth
Garth! Here the negative energy is the gravitational energy! The zero-energy universe hypothesis states that the total amount of energy in the universe is exactly zero. The positive energy of the matter is exactly balanced by the negative energy of the gravitational field. If in an expansive Universe relativistic and quantum-mechanical properties are complementary, permanent constant maximum possible increase of negative energy of gravitational field must arise and simultaneously, permanent constant maximum possible increase of positive energy of the matter must occur, which compensate each other; hence, total energy of the Universe is equal to zero. Therefore, the expanding Universe must be non-decelerative and non-accelerative, i.e., during the whole expansive evolution phase, it must expand by constant maximum possible velocity v = c. So the universe expands linearly.
Quote by petergreen Garth! Here the negative energy is the gravitational energy! The zero-energy universe hypothesis states that the total amount of energy in the universe is exactly zero. The positive energy of the matter is exactly balanced by the negative energy of the gravitational field. If in an expansive Universe relativistic and quantum-mechanical properties are complementary, permanent constant maximum possible increase of negative energy of gravitational field must arise and simultaneously, permanent constant maximum possible increase of positive energy of the matter must occur, which compensate each other; hence, total energy of the Universe is equal to zero. Therefore, the expanding Universe must be non-decelerative and non-accelerative, i.e., during the whole expansive evolution phase, it must expand by constant maximum possible velocity v = c. So the universe expands linearly.
The zero-energy universe (depending on how you measure energy) is the flat universe.
The expansion rate is determined by the FRW equation, in which curvature effects and expansion rates are convoluted. You have to solve the equations carefully and use the equation of state to separate out the two effects. It is possible to balance the eos using a certain amount of cosmological constant (which I take to be a form of DE), but you then have to show that that model is observationally concordant. It is here that Chalnoth and I disagree.
Garth
Quote by Garth The zero-energy universe (depending on how you measure energy) is the flat universe.
To be pedantic, it's a closed universe.
Quote by Garth The expansion rate is determined by the FRW equation, in which curvature effects and expansion rates are convoluted. You have to solve the equations carefully and use the equation of state to separate out the two effects. It is possible to balance the eos using a certain amount of cosmological constant (which I take to be a form of DE), but you then have to show that that model is observationally concordant. It is here that Chalnoth and I disagree.
Disagree how? Because it sounds to me like you're saying I don't think the correct model should be chosen by the data. What I do say is that I sincerely doubt the coasting model can explain the cosmological perturbations we see in the CMB. Crucially, early on, cosmic strings were thought to be a potential source of cosmological perturbations. Cosmic strings have an equation of state of $w=-1/3$, and would produce a coasting universe. Except the pattern of perturbations predicted by cosmic string models was completely and utterly different from the pattern predicted in inflationary models, and observations demonstrated that it was the inflationary models that fit the data.
There still may be some cosmic strings, but they don't make up a significant fraction of the energy density of the universe.
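To see concretely why $w=-1/3$ produces coasting, here is a minimal numerical sketch (Python with scipy; the units and normalization are arbitrary and purely illustrative):

```python
# Integrate the flat FRW equation a' = H0 * a^(-(1+3w)/2), which follows from
# a'/a = H0 * sqrt(rho/rho0) with rho ~ a^(-3(1+w)). For w = -1/3, a' is
# constant, so the scale factor grows linearly: a coasting universe.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 1.0  # Hubble rate today, arbitrary units

def dadt(t, a, w):
    return H0 * a ** (-(1.0 + 3.0 * w) / 2.0)

for w, label in [(-1.0 / 3.0, "w = -1/3 (coasting):"), (0.0, "w = 0 (matter):")]:
    sol = solve_ivp(dadt, (0.0, 5.0), [1.0], args=(w,), dense_output=True)
    print(label, np.round(sol.sol(np.linspace(1.0, 5.0, 5))[0], 2))
# The w = -1/3 run prints equal steps (2, 3, 4, 5, 6): linear expansion.
# The matter run grows like t^(2/3) at late times, i.e. it decelerates.
```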
Quote by Chalnoth What I do say is that I sincerely doubt the coasting model can explain the cosmological perturbations we see in the CMB.
http://arxiv.org/pdf/1207.0015.pdf
Quote by petergreen http://arxiv.org/pdf/1207.0015.pdf
It's telling, I think, that they use the first-year WMAP release to make that claim, despite this paper by the WMAP team being available at the time of writing:
http://lambda.gsfc.nasa.gov/product/..._anomalies.pdf
And the main discrepancy between the coasting universe and ΛCDM is likely to be found in the power spectrum, which is not shown in the above paper.
Quote by Chalnoth And the main discrepancy between the coasting universe and ΛCDM is likely to be found in the power spectrum, which is not shown in the above paper.
But there are others who might disagree.
Observational Constraints of a Matter-Antimatter Symmetric Milne Universe
A Concordant “Freely Coasting” Cosmology
The main point we make in this article is that in spite of a significantly different evolution, the recombination history of a linearly coasting cosmology can be expected to give the location of the primary acoustic peaks in the same range of angles as that given in Standard Cosmology.
Garth
Quote by Garth But there are others who might disagree. Observational Constraints of a Matter-Antimatter Symmetric Milne Universe A Concordant “Freely Coasting” Cosmology
If by disagree you mean ignoring the issue, then sure, they disagree. It doesn't count if they don't fully account for the production and propagation of the primordial perturbations.
http://www.physicsforums.com/showthread.php?t=204013
## Fresnel coeff
1. The problem statement, all variables and given/known data
The question involves the fresnel equations which I have derived. However, I seem to be missing something in the simplification. I arrive at these:
and I am trying to simplify to:
3. The attempt at a solution
No matter how I use Snell's law I can't seem to get them to simplify properly. Is there a trig identity that I'm missing? Currently I'm only interested in the coefficients when the field is polarized parallel to the plane of incidence.
Thanks for any input with the mathematics.
It's probably easier to start from the simplified forms and turn them into the unsimplified forms. Then you'll be able to see how to go the other way. The only trig identities you should need are: $$\sin(\theta_1 - \theta_2) = \sin\theta_1\cos\theta_2 - \cos\theta_1\sin\theta_2$$ $$\cos(\theta_1 - \theta_2) = \cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2$$ (and the similar identities for addition) $$\sin^2\theta + \cos^2\theta = 1$$ $$\tan\theta = \sin\theta/\cos\theta$$ and Snell's law, of course. Good luck!
Thanks for the help! I'm still having some difficulty. Not sure what I'm missing. For example, I keep ending up with: $$r_\parallel = \frac{\sin 2\theta_1 - \sin 2\theta_2}{\sin 2\theta_1 + \sin 2\theta_2}$$ I'm assuming $n_{\text{air}} = 1$.
But you're very close to the answer. Don't use the identity $$\sin(2\theta)=2\sin\theta\cos\theta$$; go back a step and write those terms out. Then, look at the equation you're trying to turn it into. It has a form like:
$$\frac{(\text{something})\cos\theta_i - (\text{something else})\cos\theta_t}{(\text{something})\cos\theta_i + (\text{something else})\cos\theta_t}$$
and your equation has this same form. Maybe you can find a way, by multiplying the numerator and denominator by the same thing and using Snell's law, to make them match?
Thanks for the help! I'm still not seeing something with this one: $$\frac{(\sin\theta_i+\sin\theta_i)\cos\theta_i - (\sin\theta_t+\sin\theta_t)\cos\theta_t}{(\sin\theta_i+\sin\theta_i)\cos\theta_i + (\sin\theta_t+\sin\theta_t)\cos\theta_t}$$ This is driving me nuts! I really appreciate your help.
Scratch that, I worked it out! Muchas gracias!
Thanks, both of you! I was just working on the same exact problem and having the same trouble. This helped a lot.
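For the record, the intermediate form above, $r_\parallel = \frac{\sin 2\theta_1 - \sin 2\theta_2}{\sin 2\theta_1 + \sin 2\theta_2}$, is already equivalent to the standard simplified Fresnel coefficient $\tan(\theta_1-\theta_2)/\tan(\theta_1+\theta_2)$; here is a quick symbolic check (a sketch, assuming sympy is available):

```python
# Check that (sin 2a - sin 2b)/(sin 2a + sin 2b) equals tan(a-b)/tan(a+b).
import sympy as sp

a, b = sp.symbols('theta_1 theta_2')
lhs = (sp.sin(2*a) - sp.sin(2*b)) / (sp.sin(2*a) + sp.sin(2*b))
rhs = sp.tan(a - b) / sp.tan(a + b)

print(sp.simplify(lhs - rhs))              # should reduce to 0
print((lhs - rhs).subs({a: 0.7, b: 0.3}))  # numeric spot check, ~0
```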
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&oldid=6668
# Divergence Theorem
### From Math Images
Fountain Flux (Field: Calculus; created by Brendan John)
The water flowing out of a fountain demonstrates an important theorem for vector fields, the Divergence Theorem.
# Basic Description
Consider a fountain like the one pictured, particularly its top layer. The rate at which water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer. Because water, unlike air, is not easily compressed, if more water is pumped out of the spout, then more water will have to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume.
# A More Mathematical Explanation
Note: understanding this explanation requires some multivariable calculus.
The Divergence Theorem in its pure form applies to vector fields. Flowing water can be considered a vector field because at each point the water has a position and a velocity vector. Faster-moving water is represented by a larger vector in our field. The divergence of a vector field is a measurement of the expansion or contraction of the field; if more water is being introduced, then the divergence is positive. Analytically, the divergence of a field $F$ is
$\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z}$,
where $F_i$ is the component of $F$ in the $i$ direction. Intuitively, if $F$ has a large positive rate of change in the $x$ direction, the partial derivative with respect to $x$ in this direction will be large, increasing total divergence. The divergence theorem requires that we sum divergence over an entire volume. If this sum is positive, then the field must indicate some movement out of the volume through its boundary, while if this sum is negative, the field must indicate some movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which itself is a surface.
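As a quick illustration (a small sketch using sympy; the field is the one used in the verification example below), the formula can be evaluated symbolically:

```python
# Evaluate div F = dFx/dx + dFy/dy + dFz/dz for F = (x^2, 0, 0).
import sympy as sp

x, y, z = sp.symbols('x y z')
F = (x**2, sp.Integer(0), sp.Integer(0))

divergence = sum(sp.diff(comp, var) for comp, var in zip(F, (x, y, z)))
print(divergence)  # 2*x
```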
The divergence theorem is formally stated as:
$\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)dV=\iint\limits_{\partial V} \mathbf{F}\cdot\mathbf{n}\,dS .$
The left side of this equation is the sum of the divergence over the entire volume, and the right side of this equation is the sum of the field perpendicular to the volume's boundary at the boundary, which is the flux through the boundary.
### Example of Divergence Theorem Verification
The following example verifies that, for a given volume and vector field, the Divergence Theorem is valid.
Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$.
For a volume, we will use a cube of edge length two, and vertices at (0,0,0), (2,0,0), (0,2,0), (0,0,2), (2,2,0), (2,0,2), (0,2,2), (2,2,2). This cube has a corner at the origin and all the points it contains are in positive regions.
• We begin by calculating the left side of the Divergence Theorem.
Step 1: Calculate the divergence of the field:
$\nabla\cdot F = 2x$
Step 2: Integrate the divergence of the field over the entire volume.
$\iiint_V\nabla\cdot F\,dV =\int_0^2\int_0^2\int_0^2 2x \, dx\,dy\,dz$
$=\int_0^2\int_0^2 4\, dy\,dz$
$=16$
• We now turn to the right side of the equation, the integral of flux.
Step 3: We first parametrize the parts of the surface which have non-zero flux.
Notice that the given vector field has vectors which only extend in the x-direction, since each vector has zero y and z components. Therefore, only two sides of our cube can have vectors normal to them, those sides which are perpendicular to the x-axis. Furthermore, the side of the cube perpendicular to the x axis with all points satisfying x = 0 cannot have any flux, since all vectors on this surface are zero vectors.
We are thus only concerned with one side of the cube since only one side has non-zero flux. This side is parametrized using
$X=\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} 2 \\ u\\ v\\ \end{bmatrix}\, , u \in (0,2)\, ,v \in (0,2)$
Step 4: With this parametrization, we find a general normal vector to our surface.
To find this normal vector, we find two vectors which are always tangent to (or contained in) the surface, and are not collinear. The cross product of two such vectors gives a vector normal to the surface.
The first vector is the partial derivative of our surface with respect to $u$: $\frac{\partial X}{\partial u} = \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix}$
The second vector is the partial derivative of our surface with respect to $v$: $\frac{\partial X}{\partial v} = \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix}$
The normal vector is finally the cross product of these two vectors, which is simply $N = \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix}.$
Step 5: Integrate the dot product of this normal vector with the given vector field.
The amount of the field normal to our surface is the flux through it, and is exactly what this integral gives us.
$\iint\limits_{\partial V} \mathbf{F}\cdot\mathbf{n}\,dS$
$= \int_0^2 \int_0^2 F \cdot N \,du\,dv$
$= \int_0^2 \int_0^2 \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0\\ 0\\ \end{bmatrix} \,du\,dv = \int_0^2 \int_0^2 x^2\,du\,dv = \int_0^2 \int_0^2 4 \,du\,dv$
$=16$
• Both sides of the equation give 16, so the Divergence Theorem is indeed valid here. ■
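The same bookkeeping can be verified numerically; the following sketch (using scipy's quadrature routines, with limits mirroring the example above) evaluates both sides:

```python
# Evaluate both sides of the theorem for this cube and field with scipy.
from scipy.integrate import dblquad, tplquad

# Left side: triple integral of div F = 2x over the cube [0,2]^3.
volume_side, _ = tplquad(lambda z, y, x: 2 * x, 0, 2, 0, 2, 0, 2)

# Right side: on the face x = 2 (the only face with nonzero flux),
# F . n = x^2 = 4, integrated over a 2 x 2 square.
flux_side, _ = dblquad(lambda v, u: 4.0, 0, 2, 0, 2)

print(volume_side, flux_side)  # both print 16.0
```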
http://stats.stackexchange.com/questions/17285/pca-to-decorrelate-variables
# PCA to decorrelate variables
I have 2 variables that I want to decorrelate. I was told I can use PCA to do so. I did PCA on the data and got all the parameters. Now how do I get the new set of transformed data that no longer correlates with the second variable? I want to use this set for further analysis. Thanks.
## 3 Answers
With just two variables $X$ and $Y$, there are two sample variances $s^2$ and $t^2$, respectively, and the sample correlation coefficient, $r$. If you standardize the variables in the usual way to have unit variances, so that $\xi = X/s$ and $\eta = Y/t$, then the two principal components are
$$PC_1 = \xi+\eta = X/s + Y/t, \quad PC_2 = \xi-\eta = X/s - Y/t.$$
As a check, note that $\mathrm{Cov}(PC_1, PC_2) = \mathrm{Var}(X/s) - \mathrm{Var}(Y/t) = 1-1=0$, proving the components are orthogonal (uncorrelated).
Visually: when you plot a scatterplot of $X$ and $Y$ in which the coordinate axes are expressed in standard units and have an aspect ratio of 1:1, then the axes of the point cloud fall along diagonal lines parallel to $X=Y$ and $X=-Y$.
In this example the variances are $s^2 = 0.98$, $t^2 = 7.90$ and the correlation is $r=-0.67$. Because $X$ and $Y$ are plotted on standardized scales with unit aspect ratio, the major axis of the cloud is diagonal (downward, due to negative correlation). This is the first principal component, $X/s-Y/t$. The minor axis of the cloud is also diagonal (upward) and forms the second principal component, $X/s+Y/t$.
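Here is a small numerical sketch of this construction (the data are synthetic, chosen only for illustration): standardizing the two variables and taking their sum and difference produces two uncorrelated components.

```python
# Standardize two correlated variables and form their sum and difference;
# the resulting components are uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = -0.67 * x + rng.normal(scale=0.7, size=500)  # build in negative correlation

xi, eta = x / x.std(), y / y.std()               # unit-variance versions
pc1, pc2 = xi + eta, xi - eta                    # the two principal components

print(round(np.corrcoef(x, y)[0, 1], 3))         # clearly nonzero
print(round(np.corrcoef(pc1, pc2)[0, 1], 12))    # 0 up to floating-point noise
```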
Thanks. I know the 2 components are not correlated, but I want one transformed variable to be uncorrelated with the second untransformed variable. – Abeer Oct 21 '11 at 11:19
@Abeer And that's exactly what I have shown for $X/s-Y/t$ and $X/s+Y/t$. – whuber♦ Oct 21 '11 at 14:45
It varies with your software, but you should have something like a component score matrix. Multiply that with your original variables to get the new set of transformed data.
With R you do get a score matrix, in my case with 2 columns/components. I tried that and I get some weird scatter plot. I plot variable1 vs variable2*score.component1 – Abeer Oct 21 '11 at 11:47
You have loadings for each component ($P_1, P_2, \ldots, P_i$).
$$P_1=l_{11}x_1+l_{12}x_2+\cdots+l_{1j}x_j$$ $$P_2=l_{21}x_1+l_{22}x_2+\cdots+l_{2j}x_j$$ $$\vdots$$ $$P_i=l_{i1}x_1+l_{i2}x_2+\cdots+l_{ij}x_j$$
where $x$ is the original data and $P_i$ is the $i$-th rotated component. What matters is the loadings ($l_{i1}, l_{i2}, \ldots, l_{ij}$). If you multiply them with the original data then you will get the rotated principal components.
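A sketch of that recipe in code (the variable names here are illustrative, not from the post): compute the loadings from the covariance matrix, multiply the centered data by them, and check that the resulting components are uncorrelated.

```python
# Compute loadings via the covariance matrix's eigenvectors, then multiply
# the centered data by them to get the rotated principal components.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))
data[:, 1] += 0.8 * data[:, 0]                  # make the two columns correlated

centered = data - data.mean(axis=0)
_, loadings = np.linalg.eigh(np.cov(centered, rowvar=False))

scores = centered @ loadings                    # the rotated components P_i
print(np.round(np.corrcoef(scores, rowvar=False), 6))  # off-diagonal ~0
```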
http://skullsinthestars.com/2008/08/05/the-discovery-rediscovery-and-re-rediscovery-of-computed-tomography/
The intersection of physics, optics, history and pulp fiction
## The discovery, rediscovery, and re-rediscovery of computed tomography
Posted on August 5, 2008
Note: This post is my contribution to The Giant’s Shoulders #2, to be held at The Lay Scientist. I thought I’d cover something a little more recent than my previous entries to the classic paper carnival; in truth, I need a break from translating 30-page papers written in antiquated German/French!
One of the fascinating things about scientific progress is what you might call its inevitability. Looking at the history of a crucial breakthrough, one often finds that a number of researchers were pushing towards the same discovery from different angles. For example, Isaac Newton and Gottfried Leibniz developed the theory of calculus independently and nearly simultaneously. Another example is the prehistory of quantum mechanics: numerous experimental researchers independently began to discover ‘anomalies’ in the behavior of systems on the microscopic level.
I would say that the development of certain techniques and theories become ‘inevitable’ when the discovery becomes necessary for further progress and a number of crucial discoveries pave the way to understanding (in fact, one might say that this is the whole point of The Giant’s Shoulders). Occasionally it turns out that others had made a similar discovery earlier, but had failed to grasp the broader significance of their result or were missing a crucial application or piece of evidence to make the result stand out.
A good example of this is the technique known as computed tomography, or by various other aliases (computed axial tomography, computer assisted tomography, or just CAT or CT). The pioneering work was developed independently by G.N. Hounsfield and A.M. Cormack in the 1960s, and they shared the well-deserved 1979 Nobel Prize in Medicine “for the development of computer assisted tomography.” Before Hounsfield and Cormack, however, a number of researchers had independently developed the same essential mathematical technique, for a variety of applications. In this post I’ll discuss the foundations of CT, the work of Hounsfield and Cormack, and note the various predecessors to the groundbreaking research.
What is computed tomography, and how does it work? CT is a technique for non-destructive imaging of the interior of an object. Its best-known application is in medical imaging, in which a portion of the patient’s body is imaged using x-rays, often to check for cancer. A picture of a modern CT scanner (from Wikipedia) is shown below:
The word tomography is derived from the Greek tomos (slice) and graphein (to write); it refers to the manner of image reconstruction: images of the body are put together in two-dimensional slices, which can then be assimilated into a full three-dimensional structure, if desired. An example of a CT image of the lungs is shown below (image from RadiologyInfo):
It is important to note that a CT image is far superior to a normal x-ray image such as one might get at the doctor’s office after breaking a bone (also from RadiologyInfo):
A standard x-ray, as shown above, is a single picture taken of a human ankle. The object to be imaged is placed between the x-ray source and a photographic plate, as schematically (and crudely) illustrated below:
Different materials in the human body absorb x-rays to a greater or lesser extent; bone absorbs the most. The image recorded on the photographic plate is in essence the x-ray shadow of the human body. This technique, though extremely useful for medical diagnosis, has a number of severe limitations. First and foremost, there is no depth information recorded about the object: the photographic plate records a ‘shadowgram’ of everything that lies between itself and the source. Small tumors could in principle be hidden (overshadowed) by a large piece of bone which lies directly above or below them. Second, a standard x-ray is extremely insensitive to soft tissue. As one can see in the ankle image above, the bone is clearly visible but the muscle and skin leaves hardly a trace on the plate. The technique will not detect a tumor in its earliest stages, which is the best time for successful treatment.
How does computed tomography differ? A computed tomogram is derived from a large number of x-ray shadowgrams, each taken at different angles of ‘attack’:
Each of these shadowgrams gives a different view of the interior of the patient’s body. By a nontrivial mathematical process requiring the use of a computer (hence the ‘computed’ in computed tomography), the information from this collection of shadowgrams can be combined to give a picture of a particular ‘slice’ of the body. Unlike a single shadowgram, this computed picture gives an exact cross-sectional picture of the body, and gives quantitative values for the absorption of different tissues of the body. Also unlike a single shadowgram, a CT scan can distinguish clearly between different types of soft tissue, as can be seen in the sample image above. A CT scan can find tumors at an earlier stage than a standard shadowgram.
We’ll discuss how CT actually works at the end of the post; for now, let’s look at the development of the process. The first work was done by Allan M. Cormack in the 1950s; in his own words (from his Nobel lecture),
In 1955 I was a lecturer in Physics at the University of Cape Town when the Hospital Physicist at the Groote Schuur Hospital resigned. South African law required that a properly qualified physicist supervise the use of any radioactive isotopes, and since I was the only nuclear physicist in Cape Town, I was asked to spend 1 1/2 days a week at the hospital attending to the use of isotopes, and I did so for the first half of 1956. I was placed in the Radiology Department under Dr. J. Muir Grieve, and in the course of my work I observed the planning of radiotherapy treatments. A girl would superpose isodose charts and come up with isodose contours which the physician would then examine and adjust, and the process would be repeated until a satisfactory dose-distribution was found. The isodose charts were for homogeneous materials, and it occurred to me that since the human body is quite inhomogeneous these results would be quite distorted by the inhomogeneities – a fact that physicians were, of course, well aware of. It occurred to me that in order to improve treatment planning one had to know the distribution of the attenuation coefficient of tissues in the body, and that this distribution had to be found by measurements made external to the body. It soon occurred to me that this information would be useful for diagnostic purposes and would constitute a tomogram or series of tomograms, though I did not learn the word “tomogram” for many years.
In simpler terms: radiation therapy requires being able to send a precise dosage of radiation to a particular location in a patient. The amount of radiation reaching a location in the body depends significantly on the internal structure, and no method existed at that time for measuring the absorption properties of an individual’s body; Cormack therefore decided to look for one. An initial literature search produced no prior results, so Cormack developed a mathematical technique for determining internal structure from a series of x-ray shadowgrams. The technique is now known as the Radon transform, and we will come back to it shortly.
Over the next few years, Cormack tested his new technique with systems, and targets, of increasing complexity. In 1957 in Cape Town he measured a circularly symmetric sample consisting of a cylinder of aluminum surrounded by an annulus of wood. As Cormack noted,
Even this simple result proved to have some predictive value for it will be seen that the three points nearest the origin [of his data set] lie on a line of a slightly different slope from the other points in the aluminum. Subsequent inquiry in the machine shop revealed that the aluminum cylinder contained an inner peg of slightly lower absorption coefficient than the rest of the cylinder.
By 1963, Cormack was prepared to do work on a “phantom” (simulated patient made of aluminum and lucite) without circular symmetry, using the device pictured below (taken from the Nobel lecture, again):
Quite a far-cry still from the CT machines of today, the cylinders are collimators containing the source, Co60 gamma rays, and the detector. Quite good measurements of the properties of the phantom were found, and the results were published in a pair of papers in 1963 and 1964*. From Cormack’s Nobel lecture again,
Publication took place in 1963 and 1964. There was virtually no response. The most interesting request for a reprint came from a Swiss Centre for Avalanche Research. The method would work for deposits of snow on mountains if one could get either the detector or the source into the mountain under the snow!
Cormack did little else on this subject for a number of years. Meanwhile, about the time that Cormack was moving on to other things, Godfrey N. Hounsfield, working at EMI Central Research Laboratories in Hayes, UK, started thinking about the problem along similar lines. Now quoting from his Nobel lecture,
Some time ago I imagined the possibility that a computer might be able to reconstruct a picture from sets of very accurate X-ray measurements taken through the body at a multitude of different angles. Many hundreds of thousands of measurements would have to be taken, and reconstructing a picture from them seemed to be a mammoth task as it appeared at the time that it would require an equal number of many hundreds of thousands of simultaneous equations to be solved.
When I investigated the advantages over conventional X-ray techniques however, it became apparent that the conventional methods were not making full use of all the information the X-rays could give.
Hounsfield put together his own crude initial apparatus to test his ideas:
The equipment was very much improvised. A lathe bed provided the lateral scanning movement of the gamma-ray source, and sensitive detectors were placed on either side of the object to be viewed, which was rotated 1° at the end of each sweep. The 28,000 measurements from the detector were digitized and automatically recorded on paper tape. After the scan had been completed this was fed into the computer and processed.
Many tests were made on this machine, and the pictures were encouraging despite the fact that the machine worked extremely slowly, taking 9 days to scan the object because of the low intensity gamma source. The pictures took 2 1/2 hours to be processed on a large computer… Clearly, nine days for a picture was too time-consuming, and the gamma source was replaced by a more powerful X-ray tube source, which reduced the scanning time to nine hours. From then on, much better pictures were obtained; these were usually blocks of perspex. A preserved specimen of a human brain was eventually provided by a local hospital museum and we produced the first picture of a brain to show grey and white matter.
Disappointingly, further analyses revealed that the formalin used to preserve the specimen had enhanced the readings, and had produced exaggerated results. Fresh bullock’s brains were therefore used to cross-check the experiments, and although the variations in tissue density were less pronounced, it was confirmed that a large amount of anatomic detail could be seen… Although the speed had been increased to one picture per day, we had a little trouble with the specimen decaying while the picture was being taken, so producing gas bubbles, which increased in size as the scanning proceeded.
The picture of Hounsfield’s prototype CT machine is shown below, taken from his Nobel lecture.
The use of fresh brains led to some entertaining moments, as he notes in his Nobel autobiography,
As might be expected, the programme involved many frustrations, occasional awareness of achievement when particular technical hurdles were overcome, and some amusing incidents, not least the experiences of travelling across London by public transport carrying bullock’s brains for use in evaluation of an experimental scanner rig in the Laboratories.
The initial tests demonstrated the principle, but a faster, more sophisticated machine needed to be built in order to make it a worthwhile clinical tool. By 1972 a machine similar in appearance to contemporary CT scanners was installed at Atkinson Morley’s Hospital, London, and was used on a woman with a suspected brain lesion. Hounsfield published a paper describing his new system, and naming it “Computerized transverse axial scanning tomography,” in 1973**.
The paper includes many technical details and a photograph of the various components, such as the basic machine, shown below:
This early machine had many differences with the machines of today. The early system took some five minutes to image a slice, but modern systems can finish such a scan in seconds. In the early system, a patient’s head was enclosed in a water-filled rubber cap, which reduced the range of x-ray intensities arriving at the detectors. The early system also, not surprisingly, had a relatively low resolution: pictures were 80 x 80 ‘picture points’, derived from 28,800 readings.
This work seems to have been immediately recognized as groundbreaking and of fundamental importance (at the very least this can be seen by how quickly the Nobel prize was awarded for the achievement). Most hospitals now have a radiology department with some sort of CT scanner. CT is also commonly used for nondestructive testing of materials in industrial applications. Applications involving other types of waves have also been applied: CT-like algorithms have been used in geological exploration and oil prospecting, as well as in magnetic resonance imaging (MRI), another important medical diagnostic technique. The same tomographic methods are now also used for reconstructing quantum wavefunctions (which is worth a post in itself at a later date). Techniques which are inspired by or generalize CT, such as diffraction tomography, diffusion tomography, and optical coherence tomography, are also in use or under investigation.
Perhaps more broadly, CT was the first widely successful application of the theory of inverse problems. An inverse problem is a solution of a physical problem in the opposite direction from the usual ’cause-effect’ sequence of events. Each inverse problem is associated with a more familiar ‘forward problem’ in physics which represents the ’cause-effect’ process. For instance, the problem of determining the absorption of x-rays given the structure which they are incident upon is a forward problem; the problem of determining the structure of an object based on how x-rays are absorbed by the object in an inverse problem. Computed tomography effectively created its own ‘inverse problems’ subfield of mathematical physics.
Interestingly, though, the mathematical problem solved by Cormack turns out to have been solved numerous times previously by other researchers for other applications; as noted at the beginning of this post, those works were either too far ahead of their time or too limited in application to gain much attention. All of these were acknowledged by Cormack in his Nobel lecture.
The first to develop the mathematics behind computed tomography was Johann Radon*** in 1917, in his paper, “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten.” (“On the determination of functions by their integral values along certain manifolds”.)
In probability theory, the problem was developed by H. Cramér and H. Wold in 1936****. In 1968, D.J. De Rosier and A. Klug***** used a similar technique in electron microscopy to reconstruct the three-dimensional structure of the tail of a bacteriophage, work which eventually led Klug to win the 1982 Nobel Prize in Chemistry “for his development of crystallographic electron microscopy and his structural elucidation of biologically important nucleic acid-protein complexes.”
Perhaps the most unusual precursor to computed tomography is the 1956 work by R.N. Bracewell******, who derived exactly the same mathematical technique for the reconstruction of celestial radio sources.
Enough history, for now: how does computed tomography (and its predecessors) work? The exact mathematics of CT, i.e., Radon's elegant formula for reconstructing an object, is rather involved, and will be left for a future post. Instead we give two simple illustrations of the principles behind the process, one completely conceptual and one which involves a little bit of math.
First, a conceptual illustration: suppose we have a ‘black box’ which contains a perfectly absorbing object inside of it. We wish to roughly determine the shape of that object by measuring the absorption of x-rays which pass through the box. Let’s pretend that the object within the box is a square; how many measurements do we need to prove this? Suppose we first shine x-rays horizontally through the box; the result is as follows:
The intensity of the x-rays as a function of position is plotted on the right. This clearly doesn’t tell us the shape of the object, because the following objects would result in the same shadow:
We can eliminate the possibility of the rectangular shape by shining our x-rays from above:
This second measurement still, however, cannot distinguish between a square object or a round object. We can then take a diagonal measurement:
This measurement suggests that the object is wider along the diagonal, but we are still left with the following possibility for an object:
By making even more measurements, we can narrow down the shape of the object. In a realistic CT measurement, there are no ‘perfectly absorbing’ objects, but the same principle applies: by measuring the absorption properties of the patient/target from many directions, one can develop the interior absorption profile of the object.
Obviously, determining the actual absorption profile from the massive amount of x-ray data is not a trivial process. Radon (and Cormack’s) theoretical formulation of the problem describes how the x-ray data is related to the object absorption. With the help of a computer, one can substitute the data into Radon’s formula to get an exact description of the object’s properties.
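Modern libraries package this machinery directly; as a sketch (assuming scikit-image is installed), one can simulate the shadowgrams of a test phantom and invert them with filtered back-projection, a standard computational implementation of Radon-style reconstruction:

```python
# Simulate shadowgrams of a standard test phantom at many angles, then
# invert them with filtered back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                    # a synthetic "patient" slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=angles)            # one 1-D shadowgram per angle
reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection

print(np.abs(reconstruction - image).mean())     # small mean reconstruction error
```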
Let’s try and show how tomography works in another, more algebraic, way: suppose we reduce our ‘patient’ to a block of 9 different regions, each with its own absorption coefficient $x_i$:
We wish to determine each of these numbers, but the only way we can measure them is by passing an x-ray through the box and measuring the total of any row, column, or diagonal; for example:
According to basic linear algebra, in order to uniquely solve a system of equations for 9 unknowns (the $x_i$), we need 9 independent equations relating those unknowns. We make the following 'measurements' through the box, with the following results:
This gives the following set of 12 equations:
$x_1+x_2+x_3=6$, $x_4+x_5+x_6=9$, $x_7+x_8+x_9=8$,
$x_1+x_4+x_7=7$, $x_2+x_5+x_8=11$, $x_3+x_6+x_9=5$,
$x_2+x_6=3$, $x_1+x_5+x_9=9$, $x_4+x_8=7$,
$x_2+x_4=7$, $x_3+x_5+x_7=9$, $x_6+x_8=3$.
It turns out that we need 11 of these equations to solve uniquely for the values within the squares, as the equations are not independent of one another. This can be seen by noting that the sum of the first three equations gives
$x_1+\cdots+x_9=23$,
which is exactly the same equation as the sum of the second three equations,
$x_1+\cdots+x_9=23$.
Using a computer to solve the equations, the numbers within the boxes are:
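A sketch of that computer step (numpy's least-squares routine here, though any linear solver works) recovers the nine values from the twelve beam measurements:

```python
# Solve the twelve measurement equations above for the nine absorption
# values x1..x9 (indices 0..8, reading the 3x3 grid left-to-right, top-to-bottom).
import numpy as np

beams = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # horizontal beams (rows)
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # vertical beams (columns)
         (1, 5), (0, 4, 8), (3, 7),         # one diagonal direction
         (1, 3), (2, 4, 6), (5, 7)]         # the other diagonal direction
totals = np.array([6, 9, 8, 7, 11, 5, 3, 9, 7, 7, 9, 3], dtype=float)

A = np.zeros((len(beams), 9))
for row, idx in enumerate(beams):
    A[row, list(idx)] = 1.0

x, *_ = np.linalg.lstsq(A, totals, rcond=None)
print(np.round(x.reshape(3, 3), 6))
# Recovers the grid [[1, 3, 2], [4, 5, 0], [2, 3, 3]], consistent with
# every one of the twelve measurements.
```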
If we have a larger system of boxes, we will need even more measurements to make a unique solution. In the idealized limit of an infinite (continuous) set of boxes, we would need to measure the transmission through the square from every direction.
This ‘brute force’ method of doing tomography was in essence the solution used by Hounsfield in his original paper, as he was unaware of the work of Radon and Cormack; he notes:
If the body is divided into a series of small cubes each having a calculable value of absorption, then the sum of the absorption values of the cubes which are contained within the X-ray beam will equal the total absorption of the beam path. Each beam path, therefore, forms one of a series of 28,800 simultaneous equations, in which there are 6,400 variables and, providing that there are more equations than variables, then the values of each cube in the slice can be solved. In short there must be more X-ray readings than picture points.
We can see that this method is inefficient even from our simple example, as we ended up having more measurements than we needed. The use of Radon’s née Cormack’s formula gives a better understanding of how to sort, arrange, and efficiently process the acquired data.
The history of Cormack and Hounsfield’s discovery makes a good argument by example for the usefulness of ‘cross-pollination’ between fields and subfields of science. Radon’s original calculation had to be ‘rediscovered’ numerous times by different authors working in different fields, a process which is not uncommon in the physical sciences.
On the flip side, authors often make major breakthroughs or simplify their work greatly by searching for results which have been forgotten or restricted to a little-explored subfield. The now hot topic of metamaterials began in essence with the rediscovery of a 1967 paper by Russian scientist V. Veselago. The field of quantum optics was advanced rapidly by applying results that had already been developed in the field of nuclear magnetic resonance.
None of this should be taken as a slight on the achievements of Cormack and Hounsfield; they saw possibilities in the development of tomographic methods that numerous others who came before them did not. Their researches came at a time when there was a genuine need for new diagnostic techniques, coupled with the computational ability to carry them out.
**************************************
* A.M. Cormack, “Representation of a function by its line integrals, with some radiological applications,” J. Appl. Phys. 34 (1963), 2722-2727. A.M. Cormack, “Representation of a function by its line integrals, with some radiological applications. II,” J. Appl. Phys. 35 (1964), 2908-2913.
** G.N. Hounsfield, “Computerized transverse axial scanning (tomography): Part I. Description of system,” Brit. J. Radiol. 46 (1973), 1016-1022.
*** J. Radon, “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten,” Ber. König Säch. Aka. Wiss. (Leipzig), Math. Phys. Klasse 69 (1917), 262-267.
**** H. Cramér and H. Wold, “Some theorems on distribution functions,” J. London Math. Soc. 11 (1936), 290-294.
***** D.J. De Rosier and A. Klug, “Reconstruction of three dimensional structures from electron micrographs,” Nature 217 (1968), 130-134.
****** R.N. Bracewell, “Strip integration in radio astronomy,” Austr. J. Phys. 9 (1956), 198-217.
### 13 Responses to The discovery, rediscovery, and re-rediscovery of computed tomography
1. Thony C. says:
The correct translation of “gewisser Mannigfaltigkeiten”, in the title of Radon’s paper, is ‘certain manifolds’. Manifold is a catchall for sets, collections, spaces and similarly defined mathematical objects.
2. Thony C. says:
I’m sorry I shouldn’t correct translations before breakfast! I only noticed the second ‘error’ after having posted my first correction. This time I will give a full translation of Radon’s German title.
“Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten.”
“On the determination of functions by their integral values along certain manifolds”.
3. skullsinthestars says:
Thony C.: Thanks for the translation! I’ve updated the post accordingly. I was in a hurry when I first wrote it and simply used babelfish, though I could tell it wasn’t quite right. Somewhere I’ve got a translation of Radon’s paper, but I was too lazy to hunt for it at the time.
4. Wade Walker says:
Excellent post! This is exactly where science blogging shines most — taking a difficult subject that laymen are nonetheless interested in, and giving a clear explanation in everyday terms.
Your posts do a great job of helping people understand that science isn’t a magical thing done only by geniuses. It’s a pragmatic activity done by real people for real, commonsense reasons.
5. Thony C. says:
“Thanks for the translation!”
You’re welcome, I’ll send the bill at the end of the month!
I’ll just re-iterate what Mr Walker said and say thanks for some excellent post on scientific subjects. Living in a town that is one of the major centres in the world for CT production and being a historian of science myself I was fascinated by an obviously well researched and well-written piece on their origins.
6. skullsinthestars says:
Wade and Thony C.: A belated thanks for the comments, and compliments!
7. trey says:
Nice introduction to CT. Can you comment on how realistic this system of linear equations is? These equations assume forward scattering and no diffraction (which would complicate the equations).
I ask because the (nonlinear) inverse problems I have come across end up being formulated as an optimization (minimization) problem.
8. skullsinthestars says:
Trey: To the best of my knowledge, for medical x-ray CT the linear system of equations works just fine. Because of the high energy/short wavelength nature of the x-rays, and the relatively small refractive index contrasts of the human body at those wavelengths, the x-rays follow essentially straight-line paths through the body. One can show, in fact, that linearized scattering reduces to the CT form in the limit of short wavelengths; see G. Gbur and E. Wolf, “Relation between computed tomography and diffraction tomography”, J. Opt. Soc. Am. A 18 (2001), 2132, for instance.
There are a couple of caveats:
If a person has metal implants, the implants strongly scatter x-rays and ordinary CT won’t work. There are researchers actively seeking ways to account for this.
Ordinary CT neglects phase changes of the x-rays on propagation through tissue, but this influence is still there and leads to propagation changes far enough downstream. In recent years such ‘phase contrast tomography’ has become an important research topic.
You’re right that most inverse scattering problems are nonlinear and require techniques which are quite involved. CT is somewhat special because it uses high-energy photons (x-rays), which barrel through the body with little deviation. Most inverse scattering theory since then seems to have involved trying to achieve similar success with photons at lower energies, where multiple scattering and diffraction effects are significant.
9. Pingback: Advances in the History of Psychology » Blog Archive » More Classic(?) Science from “The Giants’ Shoulders”
10. Tercel says:
Great post! Ever since I discovered the inverse radon transform, I assumed that this was how a CT scan must work.
I’d also like to mention that, as an engineer who does a lot of image processing, I suspect that real CT scan computations use some sort of least squares solution to the system of equations. I find this to be the most practical way of solving complex image transforms in the presence of noise and imperfections, where a real solution is often inconsistent.
2-D phase unwrapping, for example, doesn’t really work when you have missing data points, but a weighted least squares solution is indistinguishable from perfect in most cases.
11. skullsinthestars says:
Tercel: Thanks for the comment! Least square solutions certainly play a big role in the theory of inverse problems in general, though it seems that the early CT work was done by, as I noted, brute force methods.
12. Kieran G. Larkin says:
There are good reasons to claim that Paul Funk pre-empted Radon with a 1916 publication:
Funk, P. (1916). "Über eine geometrische Anwendung der Abelschen Integralgleichung." Math. Ann. 77, 129-135.
Funk’s work is limited to integrals on great circles of the sphere. Is this more evidence for Arnold’s law: Discoveries are rarely attributed to the correct person?
• skullsinthestars says:
Kieran: Interesting! I’ve not seen Funk’s paper, though it wouldn’t surprise me that others had done similar things to Radon in the same era. My experience, in looking through the history of science, is that many discoveries are inevitable, in the sense that multiple researchers start working towards the same goal independently.
http://physics.stackexchange.com/questions/38299/what-is-the-variation-of-gauss-bonnet-term-a-total-derivative-of
# What is the variation of the Gauss-Bonnet term a total derivative of?
What is the variation of the Gauss-Bonnet term a total derivative of?
i.e., variation of the Gauss-Bonnet combination $= \nabla_{\mu} C^{\mu}$.
What is $C^{\mu}$ in 4 dimensions?
## 2 Answers
If you just want to know why the Gauss-Bonnet term is topological, you should take a look at the generalized Gauss-Bonnet theorem.
The integral over the Gauss-Bonnet term is proportional to the Euler characteristic, which is a topological invariant, so it can't contribute to the dynamics.
According to this website, for a four dimensional manifold, $$G = \nabla_{\alpha}J^{\alpha},$$ where $$G = R^2 -4 R_{\alpha \beta} R^{\alpha \beta} + R_{\alpha \beta \gamma \delta}R^{\alpha \beta \gamma \delta},$$ and $$J^{\alpha} = \epsilon^{\alpha \beta \gamma \delta} \epsilon_{\rho \sigma}^{\;\;\; \mu \nu} \Gamma^{\rho}_{\;\; \mu \beta} \left[ \frac{1}{2} R^{\sigma}_{\;\; \nu \gamma \delta} + \frac{1}{3} \Gamma^{\sigma}_{\;\; \lambda \gamma} \Gamma^{\lambda}_{\;\; \nu \sigma} \right].$$ So $G$ becomes a topological term in the action, which does not contribute to the dynamics. However, I have yet to check it myself...
http://skepticalsports.com/?p=1854
bucking the unconventional wisdom
# Bayes’ Theorem, Small Samples, and WTF is Up With NBA Finals Markets?
Seriously, I am dying to post about something non-NBA related, and I should have my Open-era tennis ELO ratings by surface out in the next day or so. But last night I finally got around to checking the betting markets to see how the NBA Finals—and thus my chances of winning the Smackdown—were shaping up, and I was shocked by what I found. Anyway, I tossed a few numbers around, and thought you all might find them interesting. Plus, there’s a nice little object-lesson about the usefulness of small sample size information for making Bayesian inferences. This is actually one area where I think the normal stat geek vs. public dichotomy gets turned on its head: Most statistically-oriented people reflexively dismiss any empirical evidence without a giant data-set. But in certain cases—particularly those with a wide range of coherent possibilities—I think the general public may even be a little too conservative about the implications of seemingly minor statistical anomalies.
# Freaky Finals Odds:
First, I found that most books seem to see the series as a tossup at this point. Here’s an example from a European sports-betting market:
Intuitively, this seemed off to me. Dallas needs to win 1 out of the 2 remaining games in Miami. Assuming the odds for both games are identical (admittedly, this could be a dubious assumption), here’s a plot of Dallas’s chances of winning the series relative to Miami’s expected winrate per home game:
So for the series to be a tossup, Miami needs to be about a 71% favorite per game. Even at home in the playoffs, this is extremely high. Depending on what dataset you use, the home team wins around 60-65% of the time in the NBA regular season and about 65%-70% of the time in the postseason. But that latter number is a bit deceptive, since the playoffs are structured so that more games are played in the homes of the better teams: aside from the 2-3-2 Finals, any series that ends in an odd number of games gives the higher-seeded team (who is often much better) an extra game at home. In fact, while I haven’t looked into the issue, that extra 5% could theoretically be less than the typical skill-disparity between home and away teams in the playoffs, which would actually make home court less advantageous than in the regular season.
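The computation behind that figure is simple: Dallas wins the series unless Miami takes both remaining home games, so the series probability is $1 - p^2$. A small sketch (Python, purely illustrative):

```python
# Dallas needs at least one of the two remaining games in Miami, so
# P(Dallas wins series) = 1 - p**2, where p is Miami's per-game probability.
import numpy as np

for p in np.arange(0.50, 0.81, 0.05):
    print(f"Miami per-game: {p:.2f}  ->  Dallas series: {1 - p**2:.3f}")

print("tossup at p =", round(float(np.sqrt(0.5)), 3))  # ~0.707, the 71% above
```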
Now, Miami has won only 73% of their home games this season, and it was against below-average competition (overall, they had one of the weakest schedules in the league). Counting the playoffs, at this point Dallas actually has a better record than Miami (by one game), and they played an above-average schedule. More importantly, the Mavs won 68% of their games on the road (compare to the league average of 35-40%). Not to mention, Dallas is 5-2 against the Heat overall, and 2-1 against them at home (more on that later).
So how does the market tilt so heavily to this side? Honestly, I have no idea. Many people are much more willing to dismiss seemingly incongruent market outcomes than I am. While I obviously think the market can be beaten, when my analytical results diverge wildly from what the money says, my first inclination is to wonder what I’m doing wrong, as the odds of a massive market failure are probably lower than the odds that I made a mistake. But, in this case, with comparatively few variables, I don’t really get it.
It is a well-known phenomenon in sports-betting that huge games often have the juiciest (i.e., least efficient) lines. This is because the smart money that normally keeps the market somewhat efficient can literally start to run out. But why on earth would there be a massive, irrational rush to bet on the Heat? I thought everyone hated them!
# Fun With Meta-Analysis:
So, for amusement’s sake, let’s imagine a few different lines of reasoning (I’ll call them “scenarios”) that might lead us to a range of different conclusions about the present state of the series:
1. Miami won at home ~73% of the time, while Dallas won on the road a (fairly stunning) 68% of the time. If these values are taken at face value, a generic Miami home team would be roughly 5% better than a generic Dallas road team, making Miami a 52.5% favorite in each game.
2. The average home team in the NBA wins about 63% of the time. Miami and Dallas seem pretty evenly matched, so Miami should win each game ~63% of the time as well.
3. Let’s go with the very generous end of broader statistical models (discounting early-season performance, giving Miami credit for championship experience, best player, and other factors), and assume that Miami is about 5-10% better than Dallas on a neutral site. The exact math on this is complicated (since winning is a logistic function), but, ballpark, this would translate into about a 65.5% chance at home.
4. Markets rule! The approximate market price for a Miami series win is ~50%, translating into the 71% per-game chance mentioned above.
Here’s a scatter-plot of the chances of Dallas winning the series based on those per-game estimates:
Ignore the red dots for now—we'll get back to those. The blue dots are the probability of Dallas winning at least one of the next two games (using the same binomial formula as the function above). Now, hypothetically, if you thought each of these analyses was equally plausible, your overall probability for Dallas winning the title would simply be the average of the four scenarios' results, or right around 60%. Note: I am NOT endorsing any of these lines of reasoning or any actual conclusions about this series here—it's just a thought experiment.
# A Little Bayesian Inference:
As I mentioned above, the Mavericks are 5-2 against the Heat this season, including 2-1 against them in Miami. Let’s focus on the second stat: Sticking with the assumption that you found each of these 4 lines of reasoning equally plausible prior to knowing Dallas’s record in Miami, how should your newly-acquired knowledge that they were 2-1 affect your assessment?
Well, wow! 3 games is such a minuscule sample, it can't possibly be relevant, right? I think most people—stat geek and layperson alike—would find this statistical event pretty unremarkable. In the abstract, they're right: certainly you wouldn't let such a thing invalidate a method or process built on an entire season's worth of data. Yet, sometimes these little details can be more important than they seem. Which brings us to perhaps the most ubiquitously useful tool discovered by man since the wheel: Bayes' Theorem.
Bayes’ Theorem, at it’s heart, is a fairly simple conceptual tool that allows you to do probability backwards: Garden-variety probability involves taking a number of probabilistic variables and using them to calculate the likelihood of a particular result. But sometimes you have the result, and would like to know how it affects the probabilities of your conditions: Bayesian analysis makes this possible.
So, in this case, instead of looking at the games or series directly, we’re going to look at the odds of Dallas pulling off their 2-1 record in Miami under each of our scenarios above, and then use that information to adjust the probabilities of each. I’ll go into the detail in a moment, but the relevant Bayesian concept is that, given a result, the new probability of each precondition will be adjusted proportionally to its prior probability of producing that result. Looking at the red dots above (which are technically the cumulative binomial probability of Miami winning 0 or 1 out of 3 games), you should see that Dallas is far more likely to go 2-1 or better on Miami’s turf if they are an even match than if Miami is a huge favorite—over twice as likely, in fact. Thus, we should expect that scenarios suggesting the former will become much more likely, and scenarios suggesting the latter will become much less so.
In its simplest form, Bayes’ Theorem states that the probability of A given B is equal to the probability of B given A times the prior probability of A (probability before our new information), divided by the prior probability of B:
$P(A|B)= \frac{P(B|A)*P(A)} {P(B)}$
Though our case looks a little different from this, it is actually a very simple example. First, I’ll treat the belief that the four analyses are equally likely to be correct as a “discrete uniform distribution” of a single variable. That sounds complicated, but it simply means that there are 4 separate options, one of which is actually correct, and each of which is equally likely. Thus, the odds of any given scenario are expressed exactly as above (B is the 2-1 outcome):
$P(S_x)= \frac{P(B|S_x)*P(S_x)} {P(B)}$
The prior probability for Sx is .25. The prior probability of our result (the denominator) is simply the sum of the probabilities of each scenario producing that result, weighted by each scenario’s original probability. But since these are our only options and they are all equal, that element will factor out, as follows:
$P(B)= P(S_x)*(P(B|S_1)+P(B|S_2)+P(B|S_3)+P(B|S_4))$
Since P(Sx) appears in both the numerator and the denominator, it cancels out, leaving our probability for each scenario as follows:
$P(S_x)= \frac{P(B|S_x)} {P(B|S_1)+P(B|S_2)+P(B|S_3)+P(B|S_4)}$
The calculations of P(B|Sx) are the binomial probability of Dallas winning exactly 2 out of 3 games in each case (note this is slightly different from above, so that Dallas is sufficiently punished for not winning all 3), and Excel’s binom.dist() function makes this easy. Plugging those calculations in with everything else, we get the following adjusted probabilities for each scenario:
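If you'd rather reproduce the table in code than in Excel, here's a minimal sketch of the same calculation (the scenario-1 per-game probability is a placeholder, since that scenario is described earlier in the post; the others are the 63%, 65.5%, and 71% figures above):

```python
import math

# Hypothetical per-game Miami home-win probabilities for the four
# scenarios. The first value is a placeholder (scenario 1 is described
# earlier in the post); the others are the 63%, 65.5%, and 71% above.
p_miami = [0.50, 0.63, 0.655, 0.71]

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# B = Dallas went exactly 2-1 in Miami, i.e. Miami won 1 of 3 at home.
likelihood = [binom_pmf(1, 3, p) for p in p_miami]

# Uniform prior over the scenarios, so it cancels out: the posterior is
# each likelihood divided by their sum, exactly as in the formula above.
total = sum(likelihood)
for i, (p, like) in enumerate(zip(p_miami, likelihood), start=1):
    print(f"Scenario {i}: P(Miami home win) = {p:.3f}, "
          f"posterior = {like / total:.3f}")
```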
Note that the most dramatic changes are in our most extreme scenarios, which should make sense both mathematically and intuitively: going 2-1 is much more meaningful if you’re a big dog.
Our new weighted average is about 62%, meaning the 2-1 record improves our estimate of Dallas’s chances by 2%, widening the gap between the two teams by 4 points: 62-38 (a 24-point difference) instead of 60-40. That may not sound like much, but a few percentage points of edge aren’t that easy to come by. For example, to a gambler, that 4% could be pretty huge: you normally need a 5% edge to beat the house (i.e., you have to win 52.5% of the time), so imagine you were the only person in the world who knew of Dallas’s miniature triumph—in this case, that info alone could get you 80% of the way to profit-land.
# Making Use:
I should note that, yes, this analysis makes some massively oversimplifying assumptions—in reality, there can be gradients of truths between the various scenarios, with a variety of interactions and hidden variables, etc.—but you’d probably be surprised by how similar the results are whether you do it the more complicated way or not. One of the things that makes Bayesian inference so powerful is that it often reveals trends and effects that are relatively insulated from incidental design decisions. I.e., the results of extremely simplified models are fairly good approximations of those produced by arbitrarily more robust calculations. Consequently, once you get used to it, you will find that you can make quick, accurate, and incredibly useful inferences and estimates in a broad range of practical contexts. The only downside is that, once you get started on this path, it’s a bit like getting Tetrisized: you start seeing Bayesian implications everywhere you look, and you can’t turn it off.
Of course, you also have to be careful: despite the flexibility Bayesian analysis provides, using it in abstract situations—like a meta-analysis of nebulous hypotheses based on very little new information—is very tricky business, requiring good logical instincts, a fair capacity for introspection, and much practice. And I can’t stress enough that this is a very different beast from the typical talking head that uses small samples to invalidate massive amounts of data in support of some bold, eye-catching and usually preposterous pronouncement.
Finally, while I’m not explicitly endorsing any of the actual results of the hypo I presented above, I definitely think there are real-life equivalents where even stronger conclusions can be drawn from similarly thin data. E.g., one situation that I’ve tested both analytically and empirically is when one team pulls off a freakishly unlikely upset in the playoffs: it can significantly improve the chances that they are better than even our most accurate models (all of which have significant error margins) would indicate.
### 3 Responses to “Bayes’ Theorem, Small Samples, and WTF is Up With NBA Finals Markets?”
1. jon k says:
I think your assumption that game 6 and game 7 will have equal win probabilities is wrong. I think Miami will be a significantly bigger favourite in game 7 than in game 6. I expect the market to bear this out as well — if there is a game 7, we can calculate the product of the two moneylines to see if the series price is inefficient or not. In any case, series lines offer much smaller betting limits and could easily be moved into “inefficient” territory by a single bettor, so I wouldn’t put much stock in them. Moneyline numbers are going to be vastly more useful.
It would be interesting to see some analysis on how often the home team wins game 7 vs historical moneyline data.
• benjaminmorris says:
I actually agree that that assumption is fairly dubious, but I’m not sure it has much of an effect on the bottom line. I highlighted the series win probability because it dovetails better with my small sample/Bayes Theorem hypo, but the moneyline for game 6 seems pretty crazy in its own right. Again, I haven’t really examined the situation carefully enough to say anything conclusive (besides, aside from this ESPN competition, I’m not really in the predictions game), but it doesn’t match my intuitions or first-order analysis, and I’m legitimately curious as to why.
2. DaMavs02 says:
The Mavs in both of their Finals runs have outperformed against the spread.
I feel like for a while the national perception of the Mavs, and Dirk in particular, didn’t match the performance on the court. This seemed to be reflected in betting lines consistently.
# Path integral formulation of quantum mechanics
I'm a mathematics student with not much background in physics. I'm interested in learning about the path integral formulation of quantum mechanics. Can anyone suggest me some books on this topic with minimum prerequisite in physics?
## 4 Answers
### Sources for the path integral
You can read any standard source, so long as you supplement it with the text below. Here are a few which are good:
• Feynman and Hibbs
• Kleinert (although this is a bit long winded)
• An appendix to Polchinski's string theory vol I
• Mandelstam and Yourgrau
There are major flaws with other presentations; these are pretty much the only good ones. I explain the major omission below.
### Completing standard presentations
In order for the discussion of the path integral to be complete, one must explain how non-commutativity arises. This is not trivial, because the integration variables in the path integral for bosonic fields or particle paths are ordinary real-valued variables, and these quantities cannot be non-commutative themselves.
### Non-commutative quantities
The resolution of this non-paradox is that the path integral integrand is built from matrix elements of operators, and the integral itself is reproducing the matrix multiplication. So it is only when you integrate over all values at intermediate times that you get a noncommutative order-dependent answer. Importantly, when noncommuting operators appear in the action or in insertions, the order of these operators is dependent on exactly how you discretize them--- whether you put the derivative parts as forward differences or backward differences or centered differences. These ambiguities are all important, and they are discussed in only a handful of places (Negele/Orland, Yourgrau/Mandelstam, Feynman/Hibbs, Polchinski, and Wikipedia) and nowhere else.
I will give the classical examples of this, which are sufficient to resolve the general case, assuming you are familiar with simple path integrals like the free particle. Consider the free particle Euclidean action
$$S= -\int {1\over 2} \dot{x}^2$$
and consider the evaluation of the noncommuting product $x\dot{x}$. This can be discretized as
$$x(t) {x(t+\epsilon) - x(t)\over \epsilon}$$
or as
$$x(t+\epsilon) {x(t+\epsilon) - x(t)\over \epsilon}$$
The first represents $x(t)p(t)$ in this operator order, the second represents $p(t)x(t)$ in the other operator order, since the operator order is the time order. The difference of the second minus the first is
$${(x(t+\epsilon) - x(t))^2\over \epsilon}$$
This quantity, for the fluctuating paths of the random-walk path integral, has a limit which averages to 1 over any finite length interval as $\epsilon$ goes to zero. This is the Euclidean canonical commutation relation: the difference of the two operator orders gives 1. For Brownian motion, this relation is called "Ito's lemma": not $dX$, but the square of $dX$, is proportional to $dt$. While $dX$ is fluctuating over positive and negative values with no correlation and with a magnitude at any time of approximately $\sqrt{dt}$, $dX^2$ is fluctuating over positive values only, with an average size of $dt$ and no correlations. This means that the typical Brownian path is continuous but not differentiable (to prove continuity requires knowing that large $dX$ fluctuations are exponentially suppressed--- continuity fails for Lévy flights, although $dX$ does scale to 0 with $dt$).
Although discretization defines the order, not all properties of the discretization matter--- only which way the time derivative goes. You can understand the dependence intuitively as follows: the value of the future position of a random walk is (ever so slightly) correlated with the current (infinite) instantaneous velocity, because if the instantaneous velocity is up, the future value is going to be bigger, if down, smaller. Because the velocity is infinite however, this teensy correlation between the future value and the current velocity gives a finite correlator which turns out to be constant in the continuum limit. Unlike the future value, the past value is completely uncorrelated with the current (forward) velocity, if you generate the random walk in the natural way going forward in time step by step, by a Markov chain.
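A minimal simulation (a sketch; the step size, seed, and interval are arbitrary choices) makes the Ito relation concrete: per step, $(dX)^2/\epsilon$ averages to 1, which is exactly the difference between the two discretizations above:

```python
import random

random.seed(0)
eps = 1e-4            # time step
n = int(1 / eps)      # steps covering the interval [0, 1]

# Sample one Brownian path: increments dX ~ N(0, sqrt(eps)).
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, eps ** 0.5))

# Per step, (dX)^2 / eps fluctuates but averages to 1 -- the difference
# between the forward and backward discretizations of x * xdot.
print(sum((x[i + 1] - x[i]) ** 2 / eps for i in range(n)) / n)

# Equivalently, the sum of (dX)^2 over [0, 1] comes out near 1,
# the elapsed time, even though the sum of |dX| diverges.
print(sum((x[i + 1] - x[i]) ** 2 for i in range(n)))
```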
The time order of the operators is equal to their operator order in the path integral, from the way you slice the time to make the path integral. Forward differences are derivatives displaced infinitesimally toward the future, past differences are displaced slightly toward the past. This is important in the Lagrangian, when the Lagrangian involves non-commuting quantities. For example, consider a particle in a magnetic field (in the correct Euclidean continuation):
$$S = - \int {1\over 2} \dot{x}^2 + i e A(x) \cdot \dot{x}$$
The vector potential is a function of x, and it does not commute with the velocity $\dot{x}$. For this reason, Feynman and Hibbs and Negele and Orland carefully discretize this,
$$S = - \int {1\over 2} \dot{x}^2 + i e A(x) \cdot \dot{x}_c$$
Where the subscript c indicates infinitesimal centered difference (the average of the forward and backward difference). In this case, the two orders differ by the commutator, [A,p], which is $\nabla\cdot A$, so that there is an order difference outside of certain gauges. The correct order is given by requiring gauge invariance, so that adding a gradient $\nabla \alpha$ to A does nothing but a local phase rotation by $\alpha(x)$.
$$ie \int \nabla\alpha \cdot \dot{x}_c = ie \int {d\over dt} \alpha(x(t))$$
Where the centered difference is picked out because only the centered difference obeys the chain rule. That this is true is familiar from the Heisenberg equation of motion:
$${d\over dt} F(x) = i[H,F] = {i\over 2} [p^2,F] = {i\over 2}(p[p,F] + [p,F]p) = {1\over 2}\dot{x} F'(x) + {1\over2} F'(x) \dot{x}$$
Where the derivative is a sum of both orders. This holds for quadratic Hamiltonians, the ones for which the path integral is most straightforward. The centered difference is the sum of both orders.
The fact that the chain rule only works for the centered difference means that people who do not understand the ordering ambiguities 100% (almost everybody) have a center fetishism, which leads them to use centered differences all the time.
The centered difference is not appropriate for certain things, like for the Dirac equation discretization, where it leads to "Fermion doubling". The "Wilson Fermions" are a modification of the discretized Dirac action which basically amounts to saying "Don't use centered derivatives, dummy!"
Anyway, the order is important. Any presentation of the path integral which gives the Lagrangian for a particle in a magnetic field without specifying whether the time derivative is a forward difference or a past difference is no good at all. That's most discussions.
A good formalism for path integrals always thinks of things on a fine lattice, and takes the limit of small lattice spacing at the end. Feynman always secretly thought this way (and often not at all secretly, as in the case above of a particle in a magnetic field), as does everyone else who works with this stuff comfortably. Mathematicians don't like to think this way, because they don't like the idea that the continuum still has got new surprises in the limit. Mathematicians are snobby and wrong.
The other thing that is hardly ever explained properly (except for Negele/Orland, David John Candlin's Nuovo Cimento original article of 1956, and Berezin) is the Fermionic field path integral. This is a separate discussion, so I will refer to these sources for the time being.
Downvoted, this doesn't answer his question. – Benjamin Horowitz Jan 13 '12 at 21:57
"Mathematicians don't like to think this way"? Obtaining the continuum path integral measure from a limit of lattice measures is utterly standard. See Glimm & Jaffe. – user1504 Jan 13 '12 at 23:28
@user1504: Jaffe is not a mathematician. The approach of Jaffe and co. in the late 1960s was resumming perturbation theory, and reworked after Symanzik and Wilson. Their writing is obfuscatory and pedantic, and shows no comprehension of the first thing about path integrals. The only valuable part of the mistitled "Quantum Mechanics and Path Integrals" are the correlation inequalities, which are also stated confusingly, but are important. It is my firm opinion that no one who recommends this book understands its contents, or else they would be recommending other stuff. – Ron Maimon Jan 14 '12 at 1:34
He was the chairman of Harvard's math department, and past President of the AMS. This is something you should have googled before spouting off about. Ron, I really enjoy your posts. I think you're doing something that desperately needs doing: attempting to write well and accessibly about path integrals. (Hell, I'd be pleased to read something longer from you.) But let's not pretend that the ideas are completely unknown to the math community. ps. I'd be happy to see measure theory be first against the wall when the revolution comes. – user1504 Jan 14 '12 at 13:56
I came to physics.SE today planning to ask the question "In the path integral formulation of QM, where do we choose a quantization?" So this is exactly what I was looking for. Thanks! – David Speyer Oct 18 '12 at 12:29
"Quantum Mechanics and Path Integrals" by Feynman and Hibbs
To get you started here are some lecture notes I like on the path integral: http://bohr.physics.berkeley.edu/classes/221/1011/notes/pathint.pdf (from the page http://bohr.physics.berkeley.edu/classes/221/1011/221.html )
These lecture notes are missing the crucial derivation of the Ito-Lemma/Canonical commutation relation. Without this, no discussion of path integrals is complete. I have seen even great physicists confused regarding the commutation relation in the path integral, and the dependence on discretization is crucial for applications. This derivation is in Feynman/Hibbs Yourgrau/Mandelstam Negele/Orland Polchinski and Wikipedia. It is found almost nowhere else. It is the sign of a good presentation: no commutation relations, no comprehension. – Ron Maimon Jan 13 '12 at 16:23
The book 'Principles of Quantum Mechanics' by R. Shankar has a really good introduction to the path integral formalism (and quantum mechanics in general), with two dedicated chapters about it. Also, the book begins with a nice presentation of linear algebra in bra-ket notation.
## Relation between Gerstenhaber bracket and Connes differential
Let $C$ be an arbitrary algebra (more generally, a linear 1-category). The following structures are well-known:
A degree-0 product on the Hochschild cohomology $HH^*(C)$
$$HH^*(C) \otimes HH^*(C) \to HH^*(C), \qquad a \otimes b \mapsto ab$$
A degree-0 action of Hochschild cohomology on the Hochschild homology $HH_*(C)$
$$HH^*(C) \otimes HH_*(C) \to HH_*(C), \qquad a \otimes \gamma \mapsto a\cdot \gamma$$
A degree-1 unary operation on Hochschild homology (Connes differential)
$$HH_*(C) \to HH_*(C), \qquad \gamma \mapsto B(\gamma)$$
A degree-1 binary operation on Hochschild cohomology (Gerstenhaber bracket)
$$HH^*(C) \otimes HH^*(C) \to HH^*(C), \qquad a \otimes b \mapsto a * b$$
The above operations satisfy some well-known relations. (Note that I am not attempting to get the signs right.)
• graded commutativity $ab = \pm ba$
• more graded commutativity $a * b = \pm b * a$
• Poisson identity $a * (bc) = (a * b)c + b(a * c)$
• Jacobi identity $a * (b * c) + b * (c * a) + c * (a * b) = 0$
• $B$ is a differential $B(B(\gamma)) = 0$
• various associativities $(ab)c = a(bc)$; $(a * b) * c = a * (b * c)$; $(ab)\cdot\gamma = a\cdot(b\cdot\gamma)$
The following relation, expressing the action of a Gerstenhaber bracket on Hochschild homology in terms of the Connes differential, seems to be less well-known. At least I haven't been able to find it in the literature.
$$(a*b)\cdot\gamma = ab\cdot B(\gamma) - a\cdot B(b\cdot \gamma) - b\cdot B(a\cdot\gamma) + B(ba\cdot\gamma)$$
(Again, I haven't tried to get the signs right.)
Question: Is there a reference for the above relation?
Note: The above relation follows from the fact that the first homology of a certain operad space is 4-dimensional, so there must be some relation between the five degree-1 maps $HH^*(C)\otimes HH^*(C)\otimes HH_*(C) \to HH_*(C)$ which figure in the relation.
Another note: In cases where $HH^*(C) \cong HH_*(C)$ and there is a BV algebra structure, I think the relation follows from the usual definition of the Gerstenhaber bracket in terms of the BV structure. See the "Antibracket" section of this Wikipedia article.
-
## 2 Answers
Hi,
Your formula is due (without the signs!) to Ginzburg, Calabi-Yau algebras (9.3.2), as explained in Lemma 15 of my paper, Batalin-Vilkovisky algebra structures on Hochschild Cohomology, Bull. Soc. Math. France 137 (2009), no. 2, 277-295 (sorry for quoting myself!).
Here is Lemma 15:
Lemma 15 [17, formula (9.3.2)]. Let $A$ be a differential graded algebra. For any $\eta, \xi \in HH^*(A,A)$ and $c \in HH_*(A,A)$,
$$\{\xi, \eta\}\cdot c = (-1)^{|\xi|} B[(\xi \cup \eta)\cdot c] - \xi\cdot B(\eta\cdot c) + (-1)^{(|\eta|+1)(|\xi|+1)}\, \eta\cdot B(\xi\cdot c) + (-1)^{|\eta|}\, (\xi \cup \eta)\cdot B(c).$$
In a condensed form, this formula is
(34) $i_{\{a,b\}}=(-1)^{\vert a\vert+1}[[B,i_{a}],i_b]=[[i_{a},B],i_b].$
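Ignoring all Koszul signs (as elsewhere in this thread) and writing $i_a\gamma = a\cdot\gamma$, the nested commutator unpacks into exactly the four terms of the relation in the question:
$$[[B,i_a],i_b]\,\gamma = \bigl(B i_a i_b - i_a B i_b - i_b B i_a + i_b i_a B\bigr)\gamma = B(ab\cdot\gamma) - a\cdot B(b\cdot\gamma) - b\cdot B(a\cdot\gamma) + ba\cdot B(\gamma).$$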
See formula (34) of my second paper Van Den Bergh isomorphisms in String Topology, J. Noncommut. Geom. 5 (2011), no. 1, 69-105. (sorry for quoting myself again!)
In this paper, I thought I gave a new definition of BV-algebras. But this definition appears more or less in the section "Compact formulation in terms of nested commutators" of the Wikipedia article you quote! However, I was unable to find this definition in the bibliography quoted in the Wikipedia article.
Concerning signs, in my first paper, I made a mistake, corrected in my second paper. So (34) is correct and Lemma 15 has some signs problems.
ps: David Ben-Zvi is absolutely right. This formula is a consequence of the Tamarkin–Tsygan calculus!
Thanks. I'm switching the green checkmark to this answer, since it more directly addresses my question. But I would accept both answers if I could. – Kevin Walker Jul 18 2011 at 12:25
I'm not sure if your precise formulation appears there but I believe it should be part of the "homotopy calculus" structure studied by Tsygan and Tamarkin in various papers - see e.g. p.6 of Noncommutative differential calculus, homotopy BV algebras and formality conjectures, in which a similar relation is stated - namely that Hochschild chains with the Connes differential form a homotopy BV module over the canonical BV deformation of the homotopy Gerstenhaber algebra of Hochschild cochains.
Thanks, that does seem relevant. In particular, the middle part of equation (0.1) of that paper (which they state for polyvector fields and differential forms) is the same as the relation I'm asking about. Presumably the generalization to arbitrary Hochschild (co)homology follows from one of their results about G-infinity structures. – Kevin Walker Jul 8 2011 at 15:20
Great! I haven't tried but this must follow formally from TFT, at least for smooth proper algebras (which is presumably how you arrived at it?) - i.e., we can act with one 2-framed circle (HH^*) on another (HH_*), and ask how the product interacts with rotation of the circle.. – David Ben-Zvi Jul 8 2011 at 15:32
Yes, something like that. In my set-up, I consider circles divided into incoming and outgoing regions (instead of framings). $HH^*$ corresponds to a circle composed of one incoming interval and one outgoing interval (i.e. a bigon). $HH_*$ corresponds to the entire circle being outgoing (a 0-gon). In general one could have a circle with n incoming regions alternating with n outgoing regions (a 2n-gon). The 2-dimensional part of the TFT-ish structure is a colored operad, sort of like the little disks operad except that the circles are "colored" by 2n-gons and the surfaces could be... – Kevin Walker Jul 8 2011 at 15:58
...higher genus. The surfaces also have a foliation by oriented intervals, corresponding to the direction of "time". Homology classes in the topological space of such surfaces give operations (e.g. the four operations mentioned in the question), and one can deduce all the relations I mention above from easy homology calculations for these spaces of surfaces. I don't think I need to put any restrictions on the algebra C for all this to work, but perhaps I've overlooked something? – Kevin Walker Jul 8 2011 at 16:04
That sounds very reasonable.. I'm thinking within the full cobordism hypothesis framework, which requires the algebra smooth proper to get a framed 2d TFT (the smoothness and properness follows from allowing pictures corresponding to the disc and the saddle). I don't have a clear picture what TFT structures are allowed without full dualizability (eg what hypotheses you need on an algebra to define a noncompact framed theory). – David Ben-Zvi Jul 8 2011 at 16:25
# Thread:
1. ## Vectors and planes
Please help. A ray of light coming from the point (-1, 3, 2) is travelling in the direction of the vector 4i + j - 2k and meets the plane x + 3y + 2z - 24 = 0. Find the angle that the ray of light makes with the plane.
Tks!
2. Hello, pantera!
A ray of light coming from the point (-1, 3, 2) is travelling in direction $\langle 4, 1,-2\rangle$
and meets the plane $x + 3y + 2z - 24\:=\: 0$
Find the angle that the ray of light makes with the plane.
The angle between two vectors $\vec{u}$ and $\vec{v}$ is given by: $\cos\theta = \frac{|\vec{u}\cdot \vec{v}|}{|\vec{u}||\vec{v}|}$
The ray has direction $\vec{u} = \langle4,1,-2\rangle$.
The plane has normal direction $\vec{n} = \langle 1, 3, 2\rangle$.
The angle $\theta$ between the ray and the normal is:
$$\cos\theta = \frac{\langle 4,1,-2\rangle\cdot\langle1,3,2\rangle}{\sqrt{4^2+1^2+(-2)^2}\,\sqrt{1^2+3^2+2^2}} = \frac{3}{\sqrt{294}} = 0.174963553$$
Hence, $\theta = 79.92346289^\circ \approx 80^\circ$.
Therefore, the angle between the ray and the plane is about $90^\circ - 80^\circ = \boxed{10^\circ}$.
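A quick numeric check of the computation above (a sketch):

```python
import math

u = (4, 1, -2)   # direction of the ray
n = (1, 3, 2)    # normal of the plane

dot = sum(a * b for a, b in zip(u, n))            # = 3
norm = lambda v: math.sqrt(sum(c * c for c in v))

# Angle between the ray and the normal; its complement is the
# angle between the ray and the plane.
theta = math.degrees(math.acos(abs(dot) / (norm(u) * norm(n))))
print(theta, 90 - theta)   # about 79.92 and 10.08 degrees
```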
# Justification of algebraic manipulation of infinitesimals
As an engineering student, I regularly see people making arguments like this:
Consider a rectangle of dimensions $x\times 4x$. If we make $x$ bigger by a small quantity $dx$ then this will make $4x$ bigger by $4\cdot dx$ so the area of that $x \times 4x$ rectangle will change from $4x^2$ to $$(x+dx)(4x+4dx)=4(x^2+2x\cdot dx+(dx)^2)\approx4x^2+8x\cdot dx$$ with the final step justified because $dx$ is a 'small' quantity so $(dx)^2$ will be so small as to be ignorable in some mathematically rigorous way. Thus the change in area $dA$ would be $8x\cdot dx$.
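To make the size of the discarded term concrete, here is a quick numerical check (a sketch; the values of $x$ and $dx$ are arbitrary):

```python
x = 3.0   # arbitrary side length
for dx in (1e-1, 1e-2, 1e-3, 1e-4):
    exact = (x + dx) * (4 * x + 4 * dx) - 4 * x * x  # true change in area
    approx = 8 * x * dx                              # with (dx)^2 dropped
    print(dx, exact - approx)  # the discarded term: exactly 4*(dx)^2
```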
Arguments like this are very common. Another random example would be in Wikipedia's proof of the brachistochrone problem which starts with the statement
$$ds^2=dx^2+dy^2$$
and proceeds to manipulate these infinitesimals as if they were ordinary constants or variables.
I'm wondering if there's a simple, analytically rigorous justification for all of this manipulation. While I feel perfectly comfortable with the idea of the derivative of a function (considered as a limit), I've never seen a similar, rigorous justification for the algebraic manipulation of infinitesimals and the cancellation of 'small' terms (like $(dx)^2$). Any thoughts or help would be appreciated.
Thankyou
## 4 Answers
I will just throw a few buzzwords at you :-)
The mathematically precise concept of an "infinitesimal" is called a "differential form". If we fix the Euclidean plane $\mathbb{R}^2$ and think of it as a "differentiable manifold", then every point in the plane has a tangent space that is isomorphic to (another copy of) $\mathbb{R}^2$. If we further fix a cartesian coordinate system with coordinates x and y, then a differential form is a gadget that, at every point p with coordinates $(x_p, y_p)$, assigns a real number to each tangent vector at that point.
If you think of attaching a vector pointing upwards with length one, (0, 1), at every point in the plane, then in our example dx would spit out 0 at every point and dy would spit out 1 at every point.
This is the starting point for modern abstract "coordinate free" differential geometry.
From my experience, these concepts are usually not easy to understand for beginners, so don't worry if you don't understand everything on a first reading.
First note that you don't need the concept of a "differential form" to understand your first example:
Take a rectangle with side lengths a and b; then we have a function that gives the area, $$f: \mathbb{R}^2 \to \mathbb{R}$$ $$f: (a, b) \mapsto ab$$ If you increase both of the coordinates by h, then this is just a directional derivative, and since $f$ is differentiable we know that $$f(a + h, b + h) - f(a, b) = df\,(h, h) + o(h)$$ holds. Here "dx" is just shorthand notation for both "h" and "we know that f is differentiable and therefore that the remainder on the right hand side is $o(h)$, that is, for smaller and smaller h the linear approximation gets better and better". The linear approximation is by definition given by applying the differential $df$ of $f$ to the vector $(h, h)$.
The second example is a little bit more complicated, here we'll really need the concept of "differential forms". To be mathematically precise, we'd have to write $$ds^2 := dx \otimes dx + dy \otimes dy$$ That is, the left hand side is defined by the right hand side, and the right hand side consists for a fixed point of elements of the tensor product of the cotangential space $T^*_pM$ with itself.
This means this gadget eats two tangential vectors on any tangent space and spits out a real number, and this operation is bilinear (linear in both input variables). So, if you fix a point on the plane, you get an element of the space $$T^*_pM \otimes T^*_pM$$ which has an algebraic structure that can be used.
There are various ways of answering your question. Tim van Beek has given one in terms of differential forms. Another nice way of looking at it is using nonstandard analysis (NSA). The nice thing about NSA, to my taste, is that it allows you say things the way that Leibniz and Gauss and Euler found it so useful to say them (i.e., with dx's and dy's), and it also allows you to be certain that what you're doing is logically rigorous. NSA doesn't have to be hard and scary. Jerome Keisler wrote a very nice freshman calculus book using NSA, now available online for free: http://www.math.wisc.edu/~keisler/calc.html My own book, using a similar approach, is here: http://www.lightandmatter.com/calc/ This is also a very nice treatment: http://www.math.uiowa.edu/~stroyan/InfsmlCalculus/InfsmlCalc.htm
The basic idea of NSA is that just as we expanded the integers to the rationals, and the rationals to the reals, we go one more step and expand the reals to the hyperreals. The hyperreals are a number system that includes the reals, but that also includes infinitesimals. The way you know whether you're doing something logically correct is that if you write down all of the elementary axioms of the real number system (x+y=y+x, etc.), then all of those are true in the hyperreals. "Elementary" means axioms that only say things like "for every number...," not ones that say "for every set of numbers..."
Thanks, I had wondered if NSA would pop up here, though I've only heard of them fleetingly. I'll finish Spivak, maybe do some conventional analysis, then get back to this :) – tom Jan 9 '12 at 11:35
You will hear a lot about differentials, exterior algebra and maybe even about nonstandard analysis. But the sad fact is: In over 300 years of calculus we have not come up with an easy answer to your question. All one can say is: If in a particular case "all of this manipulation" leads to a correct result, then there is also an "analytically rigorous justification" for it.
If done with professional expertise, dealing with "differentials" of all sorts in a light-handed way is definitely a successful heuristic technique, especially in an environment where one doesn't care so much about $\epsilon$'s and $\delta$'s. But some care is needed: When you argue about the area under a curve $\gamma\!: y=f(x)$ you don't have to bother about the increase of $f$ in an interval of length $dx$, but if you want to derive a formula for the length of $\gamma$, then this little increase of $y$ plays a decisive rôle.
'All one can say is...' No, this is not all one can say. '[...] then there is also an "analytically rigorous justification" for it.' You seem to be using "rigorous" in contradistinction to "nonstandard analysis." There is nothing nonrigorous about nonstandard analysis. – Ben Crowell Jan 9 '12 at 3:11
@Ben Crowell: Of course not. What I'm saying is that the various frameworks invented to salvage our everyday handling of "infinitesimals" are no easy answers to the question posed by the OP. – Christian Blatter Jan 9 '12 at 9:09
This stems from simple theorem regarding the change of variables, also called substitution, used for integration: $\int {f\left( {g\left( t \right)} \right)g'\left( t \right)dt} = \int {f\left( y \right)dy}$. If we put $g\left( t \right) = y$ in $\int {f\left( {g\left( t \right)} \right)g'\left( t \right)dt}$, we have $\int {f\left( y \right)\frac{{dy}} {{dt}}dt} = \int {f\left( y \right)dy}$ which we know is true from the theorem. So, when we integrate, it's allowed for us to think that, when $y = g\left( t \right)$, then $dy = g'\left( t \right)dt$.
In your example, you have $y\left( x \right) = 4{x^2}$, and want to calculate $y'\left( x \right)$. We have $y'\left( x \right) = \frac{{dy}}{{dx}} = 8x$ (you forgot to multiply by $x$ in your example). Now, we use abuse of notation to write $dy = 8xdx$.
Similar arguments can be made with integration of multivariate functions. Such consideration gives rise to differential forms.
thanks, I fixed it. So are you saying that all of the manipulation of infinitesimals I demonstrated can be justified using the substitution rule? – tom Jan 9 '12 at 1:40
That does make some sense though - when we go from $\frac{dy}{dx} = f(y)g(x)$ to $\frac{dy}{f(y)}=g(x)\cdot dx$ we are really just applying the substitution rule in disguise – tom Jan 9 '12 at 2:36
# Why are the “common” resistors and capacitors rated the way they are?
Very newbie question here...one to which I've been unable to find an answer in any of my books or online resources.
Whenever I see a capacitor/resistor assortment that features the most "common" ones (which, I understand, can be quite subjective), they tend to come rated at 1, 2.2 and 4.7 times powers of ten. What's magical about those numbers?
Is that some mathematical relationship in Ohm's law of which I'm not aware? Is it some logarithmic progression for some OTHER electrical engineering formula that I've not yet encountered? Why is there such a huge gap between 4.7 and 10.0, and why isn't there a 7.1 or some such? As someone who has an uncanny knack for solving "find the pattern in this series of numbers" puzzles, it's killing me that I can't figure this one out. :)
I don't have time for a proper answer, but take a look at this list of values and at this article. – AndrejaKo Aug 12 '12 at 13:09
## 2 Answers
The values of the resistors are related, they are in a geometric progression. This means each is a fixed multiple of the previous one. For example,
• 1 3.3 10 - multiple is ~3.3
• 1 2.2 4.7 10 - multiple is ~2.2
• 10 12 15 18 22 27 33 39 47 56 68 82 100 - multiple is ~1.2
Yes, there is a large gap between 4.7 and 10. However, when plotted on a logarithmic scale they are evenly spaced.
A-ha...that's the one thing that I didn't take into account. The fact that results could be `~` rather than `=`. I was looking for 1, 2.2, 4.8, 10.6...not 1, 2.2, 4.7, 11. Of course, that still doesn't answer why there aren't 4.8 and 11 ohm/uf resistors in the standard batch. :) – dwwilson66 Aug 12 '12 at 13:23
Actually after 470 there's 680 which goes before 1000 in E6. – AndrejaKo Aug 12 '12 at 13:27
Because it should include 1, 10, 100, 1K etc. for probably the most basic reason - nice round numbers that are easy to calculate. The reason it is 3.3 and not 3.33 is that more resistor color bands would be required and anyway, that 0.03 extra is not worth worrying about when the tolerance is more than that. :) – geometrikal Aug 12 '12 at 13:29
@geometrikal - I think it's just the latter: no use giving more significant digits than the precision. If you look at what's needed to make a PTH transistor, and then see the price it won't matter much if that painter would need an extra pot of paint :-). – stevenvh Aug 13 '12 at 18:07
For the E12 series the step size is the 12th root of 10, making each value a factor of about 1.2 larger than the previous one, so 12 steps take you from 10 to 100. That goes with a 10 % tolerance: you can always find an E12 value within 10 % of the desired value. That's because $\sqrt[12]{10} \approx 1.2115$, and $1.1^2 = 1.21$.
For example: 18 Ω + 10 % = 19.8 Ω. The next E12 value is 22 Ω. Then 22 Ω - 10 % = 19.8 Ω. (It doesn't always fit that neatly. The blue line shows a small gap between 12 Ω and 15 Ω, but most often there's an overlap.) Nowadays 10 % isn't used much anymore for resistors, 5 % is much more common, and 1 % is not that much more expensive.
At 1 % tolerance, though, your desired value may no longer fall within the tolerance of any E12 value. For example, if you want a 20 Ω resistor the closest E12 values are 18 Ω and 22 Ω. With a 1 % tolerance they don't come closer than 18.18 Ω and 21.78 Ω, resp. That's why 1 % resistors are offered in a much larger range, typically the E96 range, which includes 20 Ω.
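For the curious, a few lines of Python (a sketch; the rounding rule is the nominal one) generate the E12 decade from $\sqrt[12]{10}$ and show where the standardized values deviate from pure rounding, as discussed in the comments below:

```python
# Nominal E12 values are 10**(i/12) per decade, rounded to two
# significant figures; the standardized series deviates in a few spots
# (26 vs 27, 32 vs 33, 38 vs 39, 46 vs 47, 83 vs 82).
standard = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]
for i, s in enumerate(standard):
    v = 10 ** (1 + i / 12)    # exact geometric value in [10, 100)
    print(f"{v:6.2f}  rounded: {round(v):3d}  standard: {s}")
```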
Further reading
EIA resistor values series
I wonder why the E-series use 33 instead of 32, which would be closer to 31.62, the geometric mean of 10 and 100. – starblue Aug 12 '12 at 16:20
@starblue - the EIA works in mysterious ways :-). I guess it's a cumulation of rounding errors. If you multiply 27 by the 12th root of 10 you get 32.7, which is closer to 33. But I guess you're right, they probably should have calculated each value independently. – stevenvh Aug 12 '12 at 16:30
Yes, it seems to be the result of rounding up several times in a row. Though it's not done consistently, 33 multiplied by the 12th root of 10 gives almost exactly 40. – starblue Aug 12 '12 at 20:30
# What is the meaning of the range and the precision?
Using the scientific notation:
$$3.14 = 0.314 \times 10^1$$
From Tanenbaum's Structured Computer Organization, section B.1:
The range is effectively determined by the number of digits in the exponent and the precision is determined by the number of digits in the fraction.
I know how this notation works but I am asking about the meaning of the two words.
Why the book is calling them the range and precision? What do they exactly mean?
## 1 Answer
The range is determined by the biggest and smallest (positive) numbers you can represent. Clearly, with two digits in the exponent you can write numbers from approximately $10^0$ (or $10^{-99}$ if you allow signs) to $10^{99}$ (even more by clever choice of mantissa). With one-digit exponents the range is much smaller: from $10^{-9}$ to $10^{9}$.
The precision is determined by the smallest (relative) difference between two representable numbers. The difference between $3.14$ and $3.15$ is about $0.3\%$ of the values (and the difference between $6.02\cdot 10^{23}$ and $6.03\cdot 10^{23}$, or between $1.60\cdot 10^{-19}$ and $1.61\cdot 10^{-19}$, is of the same relative magnitude). On the other hand $3.1415926535897932384626433$, $6.02214129\cdot10^{26}$ and $1.602176565\cdot10^{-19}$ carry a lot more precision. The latter two are numerical values for the Avogadro constant and the elementary charge. Obtaining these values required very precise measurements. The precision does not depend on the exponent: that is, if the value of $e$ in other unit systems is $1.602176565\cdot10^{6}$, the same careful precision in the measurements is required to obtain so many digits.
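A tiny sketch (the digit counts are hypothetical, and decimal digits are assumed) that makes both notions concrete:

```python
m, e = 3, 2   # hypothetical format: m mantissa digits, e exponent digits

# Range: set by the exponent field.
print("largest  ~ 10 **", 10 ** e - 1)        # about 1e99
print("smallest ~ 10 **", -(10 ** e - 1))     # about 1e-99, signed exponent

# Precision: set by the mantissa field. The worst-case relative gap
# between consecutive representable numbers is about one unit in the
# last mantissa place, independent of the exponent.
print("relative step <=", 10.0 ** -(m - 1))   # about 1% for m = 3
```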
## Trigonometric identities
In the rant I wrote at
http://ncatlab.org/nlab/show/trigonometric+identities+and+the+irrationality+of+pi
I asked: Are these four identities the first four terms in a sequence that continues?
This referred to the identities in the last bullet point above that question.
While we're at it, is there any intuitive geometric interpretation of the identity involving $f_2$?
OK, here are the functions involved:
$$
\begin{align}
f_0(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{even }n \ge 0} (-1)^{n/2} \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_1(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{odd }n \ge 1} (-1)^{(n-1)/2} \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_2(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{even }n \ge 2} (-1)^{(n-2)/2} n \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_3(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{odd }n \ge 3} (-1)^{(n-3)/2} (n-1) \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_4(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{even }n \ge 4} (-1)^{(n-4)/2} n(n-2) \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_5(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{odd }n \ge 5} (-1)^{(n-5)/2} (n-1)(n-3) \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_6(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{even }n \ge 6} (-1)^{(n-6)/2} n(n-2)(n-4) \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
f_7(\theta_1,\theta_2,\theta_3,\dots) & = \sum_{\text{odd }n \ge 7} (-1)^{(n-7)/2} (n-1)(n-3)(n-5) \sum_{|A| = n} \prod_{i\in A} \sin\theta_i\prod_{i\not\in A}\cos\theta_i \\
& \vdots
\end{align}
$$
In each function the coefficient kills off the terms involving values of $n$ smaller than the index, so that for example we could have said "$\text{odd }n \ge 1$" instead of "$\text{odd }n \ge 7$" and it would be the same thing.
Now some facts:
• Each $f_k$ is a symmetric function of $\theta_1,\theta_2,\theta_3,\dots$.
• 0 is an identity element for each of these functions, in the sense that $$f_k(0,\theta_2,\theta_3,\dots) = f_k(\theta_2,\theta_3,\dots).$$
• $f_k(\theta_1,\theta_2,\theta_3,\dots) - f_k(\theta_1+\theta_2,\theta_3,\dots) = \left. \begin{cases} k & \text{if }k \ge 2\text{ is even} \\ k-1 & \text{if }k \ge 2\text{ is odd} \end{cases} \right\}\cdot \sin\theta_1\sin\theta_2\, f_{k-2}(\theta_3,\theta_4,\dots)$ and $=0$ if $k = 0\text{ or }1$.
Now the sequence of identities:
$$
\begin{align}
f_0 & = \cos(\theta_1 + \theta_2 + \theta_3 + \cdots) \\
f_1 & = \sin(\theta_1 + \theta_2 + \theta_3 + \cdots) \\
\text{If } \sum_{i=1}^\infty \theta_i = \pi,\text{ then }
f_2 & = \sum_{i=1}^\infty \sin^2\theta_i \\
\text{If } \sum_{i=1}^\infty \theta_i = \pi,\text{ then }
f_3 & = \frac{1}{2} \sum_{i=1}^\infty \sin(2\theta_i)
\end{align}
$$
The QUESTION is whether these are the first four identities in a sequence that continues beyond this point.
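For what it's worth, the $f_2$ and $f_3$ identities check out numerically; here is a sketch that implements the defining sums literally for finitely many angles chosen at random to sum to $\pi$:

```python
import itertools, math, random

def f(k, thetas):
    """f_2 or f_3 from the question, implemented literally as a sum
    over subsets A of the right parity, with the stated coefficients."""
    N, total = len(thetas), 0.0
    for n in range(k, N + 1):
        if n % 2 != k % 2:
            continue
        sign = (-1) ** ((n - k) // 2)
        coeff = n if k == 2 else (n - 1)   # handles k = 2 and k = 3 only
        for A in itertools.combinations(range(N), n):
            prod = 1.0
            for i in range(N):
                prod *= math.sin(thetas[i]) if i in A else math.cos(thetas[i])
            total += sign * coeff * prod
    return total

random.seed(1)
parts = [random.random() for _ in range(5)]
thetas = [p * math.pi / sum(parts) for p in parts]   # angles summing to pi

print(f(2, thetas), sum(math.sin(t) ** 2 for t in thetas))
print(f(3, thetas), 0.5 * sum(math.sin(2 * t) for t in thetas))
```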
Please try to make your question self-contained by not including a link that the reader must click on in order to parse it. This will greatly increase the readership of your question and thus the likelihood of getting a good answer. – Pete L. Clark Jun 14 2010 at 14:46
Looking at the question and the answer below, I was intrigued how all these "trigonometric identities" could imply "the irrationality of pi". I am disappointed: there is nothing about irrationality nor about pi there... :-( – Wadim Zudilin Jun 14 2010 at 15:17
Actually, if you're looking only at the question and not at the external link, you won't see any of that, and the external link has its own external link to a Wikipedia article that contains Mary Cartwright's proof of the irrationality of $\pi$, so I am somewhat guilty of non-self-containment as suggested above. – Michael Hardy Jun 14 2010 at 16:11
The nLab page is really hard to read. – Qiaochu Yuan Jun 14 2010 at 20:30
Michael, you still have a chance to edit your question by adding the necessary contents and, of course, the trig identity itself. – Wadim Zudilin Jun 14 2010 at 23:27
## 1 Answer
By the binomial theorem, the products on the right have closed form $$\sum_{|A|=n} \prod_{i \in A} \sin \theta_i \prod_{i \notin A} \cos \theta_i = \prod_{i=1}^n ( \sin \theta_i + \cos \theta_i )^n = \prod_{i=1}^n \sqrt{2} \sin (\theta_i + \pi/4)^n$$ So we'll let $x = \sin \theta + \cos \theta$ so the first sum looks like $$\sum_{\text{even } n \geq 0} (-1)^{n/2} \prod_{i=1}^n x_i = 1 - x_1x_2 + x_1x_2x_3x_4 - \dots$$ This is not symmetric in the x's.
John, I don't think this is right. What is true is that $$f_0+if_1=\prod_j(\cos\theta_j+i\sin\theta_j)$$ from which the $f_0$ and $f_1$ identities drop out immediately. – Robin Chapman Jun 14 2010 at 15:04
The first two identities are of course universally known. The third one might be entirely novel for all I know. The simplest non-degenerate special case of the fourth one can at least be facetiously referred to as "well known", and possibly is actually well known within certain communities---I don't really know. (For the sake of at least a little bit of "self-containment", the simplest non-degenerate special case of the fourth one says that if $\alpha + \beta + \gamma = \pi$ then $4\sin\alpha\sin\beta\sin\gamma = \sin(2\alpha) + \sin(2\beta) + \sin(2\gamma)$.) – Michael Hardy Jun 14 2010 at 16:19
Yeah I misread the A as running through subsets of {1, 2, \dots, n}. In fact A runs through all n-element subsets of the natural numbers. – John Mangual Jun 14 2010 at 17:46
Picture Story
Stage: 4
The picture illustrates the formula for the sum of the first six cube numbers: $1^3 + 2^3 + 3^3 + ... + 6^3 = ( 1+ 2 + 3 + ... + 6)^2$
Can you see which parts of the picture represent each part of the formula?
Could you draw a similar picture to represent the sum of the first seven cube numbers?
What about other sums of cubes?
Suggest a formula for the sum of the first $n$ cube numbers. Can you prove that your formula works, using diagrams and explanations? Send us your thoughts!
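As a quick numerical exploration (a check, not a proof), one can test the natural conjecture $1^3 + 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ for small $n$:

```python
# Compare the sum of the first n cubes with the conjectured formula.
for n in range(1, 11):
    lhs = sum(k ** 3 for k in range(1, n + 1))
    rhs = (n * (n + 1) // 2) ** 2
    print(n, lhs, rhs, lhs == rhs)
```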
# The Unapologetic Mathematician
## Riemann’s Condition
If we want our Riemann-Stieltjes sums to converge to some value, we’d better have our upper and lower sums converge to that value in particular. On the other hand, since the upper and lower sums sandwich in all the others, their convergence is enough for the rest. And their convergence is entirely captured by their lower and upper bounds, respectively — the upper and lower Stieltjes integrals. So we want to know when $\underline{I}_{\alpha,\left[a,b\right]}(f)=\overline{I}_{\alpha,\left[a,b\right]}(f)$.
We’ll prove this equality in general by showing that the difference has to be arbitrarily small. That is, for any partition $x$ of $\left[a,b\right]$ we have the inequalities
$\overline{I}_{\alpha,\left[a,b\right]}(f)\leq U_{\alpha,x}(f)$
$\underline{I}_{\alpha,\left[a,b\right]}(f)\geq L_{\alpha,x}(f)$
by definition. Subtracting the one from the other we find
$\overline{I}_{\alpha,\left[a,b\right]}(f)-\underline{I}_{\alpha,\left[a,b\right]}(f)\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)$
So if given an $\epsilon>0$ we can find a partition $x$ for which the upper and lower sums differ by less than $\epsilon$ then the difference between the upper and lower integrals must be even less. If we can do this for any $\epsilon>0$, we say that the function $f$ satisfies Riemann’s condition with respect to $\alpha$ on $\left[a,b\right]$.
The lead-up to the definition of Riemann’s condition shows us that if $f$ satisfies this condition then the lower and upper integrals are equal. Then just like we saw happen with Darboux sums we can squeeze any Riemann-Stieltjes sum between an upper and a lower sum. So if the upper and lower integrals are both equal to some value, then the limit of the Riemann-Stieltjes sums over tagged partitions must exist and equal that value, and thus $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $\left[a,b\right]$.
Now what if $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $\left[a,b\right]$? We would hope that $f$ then satisfies Riemann’s condition with respect to $\alpha$ on $\left[a,b\right]$, and so these three conditions are equivalent. So given $\epsilon>0$ we need to find an actual partition $x$ of $\left[a,b\right]$ so that $0\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon$.
Since we’re assuming that $f$ is Riemann-Stieltjes integrable, we’ll call the value of the integral $A$. Then we can find a tagged partition $x_\epsilon$ so that for any finer tagged partitions $x=((x_0,...,x_n),(t_1,...,t_n))$ and $x'=((x_0,...,x_n),(t_1',...,t_n'))$ we have
$\displaystyle\left|A-\sum\limits_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{\epsilon}{3}$
$\displaystyle\left|A-\sum\limits_{i=1}^nf(t_i')\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{\epsilon}{3}$
Combining these we find that
$\displaystyle\left|\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{2}{3}\epsilon$
Now as we pick different $t$ and $t'$ we can make the difference in values of $f$ get as large as that between $M_i=\max\limits_{x_{i-1}\leq t\leq x_i}f(t)$ and $m_i=\min\limits_{x_{i-1}\leq t\leq x_i}f(t)$. So for any $h>0$ we can choose tags so that $f(t_i)-f(t_i')>M_i-m_i-h$. In particular, we can consider $h=\frac{\epsilon}{3(\alpha(b)-\alpha(a))}$, which is positive because $\alpha$ is increasing.
The difference between the upper and lower sums is
$\displaystyle\sum\limits_{i=1}^n\left(M_i-m_i\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)$
which is then less than
$\displaystyle\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+h\sum\limits_{i=1}^n\left(\alpha(x_i)-\alpha(x_{i-1})\right)$
which is then less than $\epsilon$.
Thus we establish the equivalence of Riemann’s condition and Riemann-Stieltjes integrability, as long as the integrator $\alpha$ is increasing.
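A minimal numerical sketch (my addition, not part of the original post): for the increasing integrator $\alpha(x)=x^3$ and $f(x)=x^2$ on $\left[0,1\right]$, the upper and lower sums squeeze together as the partition is refined, closing in on $\int_0^1x^2\,d(x^3)=\int_0^13x^4\,dx=3/5$.

```python
# Upper and lower Riemann-Stieltjes sums over uniform partitions (my sketch).
import numpy as np

def upper_lower(f, alpha, n, a=0.0, b=1.0):
    # f is increasing on [a, b] here, so M_i = f(x_i) and m_i = f(x_{i-1}) exactly.
    x = np.linspace(a, b, n + 1)
    dalpha = np.diff(alpha(x))          # alpha(x_i) - alpha(x_{i-1}) >= 0
    U = np.sum(f(x[1:]) * dalpha)       # upper sum U_{alpha,x}(f)
    L = np.sum(f(x[:-1]) * dalpha)      # lower sum L_{alpha,x}(f)
    return U, L

f = lambda t: t**2
alpha = lambda t: t**3

for n in (10, 100, 1000):
    U, L = upper_lower(f, alpha, n)
    print(n, U, L, U - L)               # the gap U - L shrinks toward 0; both -> 3/5
```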
Posted by John Armstrong | Analysis, Calculus
http://mathoverflow.net/questions/46247/is-there-a-good-definition-of-the-universal-cover-for-non-connected-lie-groups/46260
## Is there a good definition of the universal cover for non-connected Lie groups?
It is well-known that the universal cover $\tilde G$ of a connected Lie group $G$ has a Lie group structure such that the covering projection $\tilde G\to G$ is a Lie group morphism. Of course $\tilde G$ might not be linear even though $G$ is, but this is not the point here.
My question is: assume that $G$ is a not necessarily connected Lie group. Does there exist a Lie group $\tilde G$ and an onto Lie group morphism $\tilde G\to G$ whose restriction to the identity component of $\tilde G$ is the universal cover of the identity component of $G$?
I assume that the answer is "no" in general, but I could not find any counter-example.
@Jim: Of course, the terminology "universal cover" would have been inappropriate even if such a cover had existed (which, as you and André pointed out, is not the case).
I came to this question from some other direction. Namely, the $PSL_2(R)$ action on $RP^1$ lifts to a $\widetilde{PSL_2(R)}$ action on the universal cover of $RP^1$, and this action extends to an action of a two-sheeted cover of $PGL_2(R)$. It is tempting to denote this cover by $\widetilde{PGL_2(R)}$, and I wondered whether such a construction was standard.
The Pin groups of Atiyah-Bott-Shapiro (covering orthogonal groups) are the most natural examples of what you are looking for, but I'm not at all sure about the existence of such a construction for arbitrary non-connected Lie groups. (Calling it a "universal cover" as in your header might be overkill, since that is such a standard term: usually a universal cover is itself simply connected, in particular connected.) – Jim Humphreys Nov 16 2010 at 17:31
## 2 Answers
The group $Pin_-(2)$ is an example of what you're looking for.
It can be described explicitly as a subgroup of the group of unit quaternions:
$Pin_-(2)=\{a+bi \mid a^2+b^2=1\}\cup\{cj+dk \mid c^2+d^2=1\}\subset \mathbb H^\times$.
Its main interesting properties are:
- The conjugation action of $\pi_0$ on its Lie algebra is non-trivial.
- All the elements of the non-identity component have a non-trivial square.
There is no Lie group that is diffeomorphic to $\mathbb Z/2\times \mathbb R$ and that shares those properties.
Thank you André! This is the group I was originally testing, but for some reason I thought that it was not a counterexample. I have to check your last assertion in order to credit your answer. – Andrei Moroianu Nov 16 2010 at 17:28
I first learned about $Pin_-(2)$ from my Berkeley advisor Allen Knutson. He had asked me the following question: "how many group structures are there on the disjoint union of two circles?". The answer is three. – André Henriques Nov 17 2010 at 19:54
Expanding on André's answer, there is an obstruction class in $H^3(\pi_0(G),\pi_1(G,e))$ (due to R.L. Taylor, Covering groups of non-connected topological groups, Proc. Amer. Math. Soc. 5, pp. 753-768, 1954) to the existence of a universal covering space. There is a University of Wales thesis by Mucuk, with the main results contained in this paper by Brown and Mucuk, which details when you can get a universal covering space that acts as you want.
http://unapologetic.wordpress.com/2010/03/17/products-of-algebras-of-sets/?like=1&source=post_flair&_wpnonce=671450efa2
# The Unapologetic Mathematician
## Products of Algebras of Sets
As we deal with algebras of sets, we’ll be wanting to take products of these structures. But it’s not as simple as it might seem at first. We won’t focus, yet, on the categorical perspective, and will return to that somewhat later.
Okay, so what’s the problem? Well, say we have sets $X_1$ and $X_2$, and algebras of subsets $\mathcal{E}_1\subseteq P(X_1)$ and $\mathcal{E}_2\subseteq P(X_2)$. We want to take the product set $X=X_1\times X_2$ and come up with an algebra of sets $\mathcal{E}\subseteq P(X)$. It’s sensible to expect that if we have $E_1\in\mathcal{E}_1$ and $E_2\in\mathcal{E}_2$, we should have $E_1\times E_2\in\mathcal{E}$. Unfortunately, the collection of such products is not, itself, an algebra of sets!
So here’s where our method of generating an algebra of sets comes in. In fact, let’s generalize the setup a bit. Let’s say we’ve got $\mathcal{R}_1\subseteq P(X_1)$ which generates $\mathcal{E}_1$ as the collection of finite disjoint unions of sets in $\mathcal{R}_1$, and let $\mathcal{R}_2\subseteq P(X_2)$ be a similar collection. Of course, since the algebras $\mathcal{E}_1$ and $\mathcal{E}_2$ are themselves closed under finite disjoint unions, we could just take $\mathcal{R}_1=\mathcal{E}_1$ and $\mathcal{R}_2=\mathcal{E}_2$, but we could also have a more general situation.
Now we can define $\mathcal{R}$ to be the collection of products $R_1\times R_2$ of sets $R_1\in\mathcal{R}_1$ and $R_2\in\mathcal{R}_2$, and we define $\mathcal{E}$ as the set of finite disjoint unions of sets in $\mathcal{R}$. I say that $\mathcal{R}$ satisfies the criteria we set out yesterday, and thus $\mathcal{E}$ is an algebra of subsets of $X$.
First off, $\emptyset$ is in both $\mathcal{R}_1$ and $\mathcal{R}_2$, and so $\emptyset\times\emptyset=\emptyset$ is in $\mathcal{R}$. On the other hand, $X_1\in\mathcal{R}_1$ and $X_2\in\mathcal{R}_2$, so $X_1\times X_2=X$ is in $\mathcal{R}$. That takes care of the first condition.
Next, is $\mathcal{R}$ closed under pairwise intersections? Let $R_1\times R_2$ and $S_1\times S_2$ be sets in $\mathcal{R}$. A point $(x_1,x_2)$ is in the first of these sets if $x_1\in R_1$ and $x_2\in R_2$; it’s in the second if $x_1\in S_1$ and $x_2\in S_2$. Thus to be in both, we must have $x_1\in R_1\cap S_1$ and $x_2\in R_2\cap S_2$. That is,
$\displaystyle(R_1\times R_2)\cap(S_1\times S_2)=(R_1\cap S_1)\times(R_2\cap S_2)$
Since $\mathcal{R}_1$ and $\mathcal{R}_2$ are themselves closed under intersections, this set is in $\mathcal{R}$.
Finally, can we write $(R_1\times R_2)\setminus(S_1\times S_2)$ as a finite disjoint union of sets in $\mathcal{R}$? A point $(x_1,x_2)$ is in this set if it misses $S_1$ in the first coordinate — $x_1\in R_1\setminus S_1$ and $x_2\in R_2$ — or if it does hit $S_1$ but misses $S_2$ in the second coordinate — $x_1\in R_1\cap S_1$ and $x_2\in R_2\setminus S_2$. That is:
$\displaystyle(R_1\times R_2)\setminus(S_1\times S_2)=\left((R_1\setminus S_1)\times R_2\right)\cup\left((R_1\cap S_1)\times(R_2\setminus S_2)\right)$
Now $R_1\setminus S_1\in\mathcal{E}_1$, and so it can be written as a finite disjoint union of sets in $\mathcal{R}_1$; thus $(R_1\setminus S_1)\times R_2$ can be written as a finite disjoint union of sets in $\mathcal{R}$. Similarly, we see that $(R_1\cap S_1)\times(R_2\setminus S_2)$ can be written as a finite disjoint union of sets in $\mathcal{R}$. And no set from the first collection can overlap any set in the second collection, since they’re separated by the first coordinate being contained in $S_1$ or not. Thus we’ve written the difference as a finite disjoint union of sets in $\mathcal{R}$, and so $(R_1\times R_2)\setminus(S_1\times S_2)\in\mathcal{E}$.
Therefore, $\mathcal{R}$ satisfies our conditions, and $\mathcal{E}$ is the algebra of sets it generates.
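Since both identities used in this argument are statements about finite sets, they can be spot-checked by brute force. A small sketch (my addition; the model sets and random subsets are arbitrary choices):

```python
# Brute-force check of the two product-algebra identities used above.
import random
from itertools import product

X1, X2 = range(5), range(4)

def rand_subset(X):
    return {x for x in X if random.random() < 0.5}

for _ in range(100):
    R1, S1 = rand_subset(X1), rand_subset(X1)
    R2, S2 = rand_subset(X2), rand_subset(X2)
    RxR, SxS = set(product(R1, R2)), set(product(S1, S2))
    # (R1 x R2) n (S1 x S2) = (R1 n S1) x (R2 n S2)
    assert RxR & SxS == set(product(R1 & S1, R2 & S2))
    # (R1 x R2) \ (S1 x S2) = ((R1\S1) x R2) u ((R1 n S1) x (R2\S2)), disjointly
    A = set(product(R1 - S1, R2))
    B = set(product(R1 & S1, R2 - S2))
    assert RxR - SxS == A | B and not (A & B)
print("identities hold in all trials")
```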
Posted by John Armstrong | Analysis, Measure Theory
## Comments
1. If this is the product, what are the morphisms?
Comment by Chad | March 18, 2010 | Reply
2. “We won’t focus, yet, on the categorical perspective, and will return to that somewhat later.”
Comment by | March 18, 2010 | Reply
http://mathoverflow.net/questions/100882?sort=newest
## S-matrix for the HOMFLY/Hecke category
This question concerns the HOMFLY-PT category, closely related to Hecke algebras. (See here for example.)
The minimal idempotents of this category are indexed by pairs $(\lambda_+, \lambda_-)$ of Young diagrams. (The sizes of the diagrams are arbitrary and need not be the same. The diagram $\lambda_+$ corresponds to upward oriented strands, while $\lambda_-$ corresponds to downward oriented strands.) Consequently one can define numerical invariants of oriented links whose components are labeled by pairs of Young diagrams. This is the "colored" HOMFLY-PT polynomial.
Of fundamental importance in this subject are the invariants $S_{\lambda_+\lambda_-,\mu_+\mu_-}$ of the Hopf link with its components labeled by pairs of Young diagrams (i.e. idempotents) $(\lambda_+, \lambda_-)$ and $(\mu_+, \mu_-)$. In TQFT language, this is the "S-matrix" of the theory.
My Question:
Has the S-matrix for the HOMFLY-PT category been calculated and published? If not, are partial results in this direction known?
I am aware of this paper by Morton and Lukac, which does the case where $\lambda_-$ and $\mu_-$ are both empty (i.e. all strands oriented the same direction). This paper by Morton and Hadji is also related. Are there other relevant papers that I have missed?
See also the BMW version of this question here.
## 1 Answer
The $S$-matrix is given by \begin{equation} \frac{S_{ij}}{S_{00}}=S_{R_i}(q^{\rho})S_{R_j}(q^{\rho+R_i}) \end{equation} where $S_{R}(x_1,\cdots,x_N)$ is the Schur polynomial with highest weight $R$, $S_{R}(q^{\rho})=S_{R}(q^{\rho_{1}},...,q^{\rho_{N}})$, and $\rho$ is the Weyl vector.

Furthermore, the paper by Aganagic and Shakirov proposed the refinement (categorification) of the $S$-matrix \begin{equation} \frac{S_{ij}}{S_{00}}=M_{R_i}(t^{\rho})M_{R_j}(t^{\rho}q^{R_i}) \end{equation} where $M_{R}(x_1,\cdots,x_N;q,t)$ is the Macdonald polynomial with highest weight $R$ and $M_{R}(t^{\rho}q^{R})=M_{R}(t^{\rho_{1}}q^{R_{1}},...,t^{\rho_{N}}q^{R_{N}};q,t)$. It reduces to the first equation for $q=t$.

By using the refined topological vertex, Iqbal and Kozcaz showed that the Khovanov-Rozansky polynomial of the Hopf link is actually proportional to the refined $S$-matrix \begin{equation} KhR_{ij}({\rm Hopf},q,t)\propto M_{R_i}(t^{\rho})M_{R_j}(t^{\rho}q^{R_i}) \end{equation} See Eq. (4.10) and appendix B in the paper.
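A hedged sketch (my addition, not from the answer) of evaluating the unrefined formula with SymPy, via the bialternant formula for Schur polynomials. The Weyl vector convention $\rho_k=N-k$ is an assumption; conventions differ by an overall shift, which rescales $S_{ij}/S_{00}$ by a monomial in $q$.

```python
# Evaluating S_ij/S_00 = s_{R_i}(q^rho) s_{R_j}(q^{rho + R_i}) for sl(N).
import sympy as sp

q = sp.symbols('q')

def schur(lam, xs):
    """Schur polynomial s_lam(xs) as a ratio of two determinants (bialternant)."""
    N = len(xs)
    lam = list(lam) + [0] * (N - len(lam))
    num = sp.Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j))
    den = sp.Matrix(N, N, lambda i, j: xs[i] ** (N - 1 - j))
    return sp.simplify(num.det() / den.det())

def s_ratio(Ri, Rj, N):
    rho = [N - k for k in range(1, N + 1)]  # assumed convention for the Weyl vector
    Ri_p = list(Ri) + [0] * (N - len(Ri))
    return sp.simplify(schur(Ri, [q ** r for r in rho])
                       * schur(Rj, [q ** (rho[k] + Ri_p[k]) for k in range(N)]))

# Hopf link, both components colored by the fundamental representation of sl(2):
print(s_ratio([1], [1], N=2))   # (q + 1)*(q**2 + 1), up to normalization conventions
```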
Thanks -- I'll have a look at those papers. – Kevin Walker Aug 19 at 23:05
If I'm understanding correctly, these formulas are for the case where all strands are oriented in the same direction. (i.e. the case where $\mu_-$ and $\lambda_-$ of the question are both empty.) This case is also covered in the Morton and Lukac paper I refer to above. But it's nice to know about other papers discussing other viewpoints, so thanks again for your answer. – Kevin Walker Aug 19 at 23:23
Oh, sorry. I did not understand your question correctly. But, that's all I know. – Satoshi Nawata Aug 19 at 23:35
http://mathhelpforum.com/algebra/15688-prove-inequality.html
1. ## Prove an inequality
Can anyone prove the following inequality:
$\sum_{i=1}^N y_{i}^2/n_{i} - \frac{\left(\sum_{i=1}^N y_{i}\right)^2}{n} \ge 0,$
where $n=\sum_{i=1}^N n_{i},\ \ n_{i} \ge 0$, and $y_{i}$ is a real quantity (can be both negative or positive)?
It appears that the inequality is valid (even used random numbers), but can't see how to prove it.
2. Originally Posted by puch7524
Can anyone prove the following inequality:
$\sum_{i=1}^N y_{i}^2/n_{i} - \frac{\left(\sum_{i=1}^N y_{i}\right)^2}{n} \ge 0,$
where $n=\sum_{i=1}^N n_{i},\ \ n_{i} \ge 0$, and $y_{i}$ is a real quantity (can be both negative or positive)?
It appears that the inequality is valid (even used random numbers), but can't see how to prove it.
This inequality gets really messy really quickly. So I will prove the special case $N=2$. I am sure you can use the same method to generalize it to more terms but it just is so messy.
---
First, I think you meant $n_i >0$.
Thus, we have to show:
$\frac{y_1^2}{n_1}+\frac{y_2^2}{n_2} \geq \frac{(y_1+y_2)^2}{n_1+n_2}$
Rewrite as,
$\frac{y_1^2}{n_1}+\frac{y_2^2}{n_2} \geq \frac{y_1^2}{n_1+n_2}+\frac{y_2^2}{n_1+n_2}+ 2\cdot \frac{y_1y_2}{n_1+n_2}$
Multiply by $n_1n_2(n_1+n_2)>0$:
$n_2(n_1+n_2)y_1^2+n_1(n_1+n_2)y_2^2 \geq y_1^2n_1n_2+y_2^2n_1n_2+2y_1y_2n_1n_2$
Open and cancel,
$n_2^2y_1^2 + n_1^2y_2^2 \geq 2y_1y_2n_1n_2$
This is the AM-GM inequality, which is true.
(Or you can write $(n_2y_1 - n_1y_2)^2 \geq 0$.)
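For general $N$, one standard route (my addition) is the Cauchy-Schwarz inequality: $\left(\sum_i y_i\right)^2=\left(\sum_i \frac{y_i}{\sqrt{n_i}}\cdot\sqrt{n_i}\right)^2\leq\left(\sum_i \frac{y_i^2}{n_i}\right)\left(\sum_i n_i\right)$, which rearranges to exactly the claimed inequality. A quick randomized sanity check:

```python
# Monte Carlo sanity check of the general-N inequality (my addition).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    N = rng.integers(2, 10)
    y = rng.normal(size=N)              # y_i may be negative
    n = rng.uniform(0.1, 5.0, size=N)   # n_i > 0
    lhs = np.sum(y**2 / n) - np.sum(y)**2 / np.sum(n)
    assert lhs >= -1e-12                # small tolerance for floating-point rounding
print("inequality held in all trials")
```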
http://pediaview.com/openpedia/Boundary_conditions
# Boundary conditions
(Figure: a region where a differential equation is valid, with the associated boundary values.)
In mathematics, in the field of differential equations, a boundary value problem is a differential equation together with a set of additional restraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
Boundary value problems arise in several branches of physics as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems. The analysis of these problems involves the eigenfunctions of a differential operator.
To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Much theoretical work in the field of partial differential equations is devoted to proving that boundary value problems arising from scientific and engineering applications are in fact well-posed.
Among the earliest boundary value problems to be studied is the Dirichlet problem, of finding the harmonic functions (solutions to Laplace's equation); the solution was given by the Dirichlet's principle.
## Explanation
Boundary value problems are similar to initial value problems. A boundary value problem has conditions specified at the extremes ("boundaries") of the independent variable in the equation whereas an initial value problem has all of the conditions specified at the same value of the independent variable (and that value is at the lower boundary of the domain, thus the term "initial" value).
For example, if the independent variable is time over the domain [0,1], a boundary value problem would specify values for $y(t)$ at both $t=0$ and $t=1$, whereas an initial value problem would specify a value of $y(t)$ and $y'(t)$ at time $t=0$.
Finding the temperature at all points of an iron bar with one end kept at absolute zero and the other end at the freezing point of water would be a boundary value problem.
If the problem is dependent on both space and time, one could specify the value of the problem at a given point for all time, or at a given time for all space.
Concretely, an example of a boundary value (in one spatial dimension) is the problem
$y''(x)+y(x)=0 \,$
to be solved for the unknown function $y(x)$ with the boundary conditions
$y(0)=0, \ y(\pi/2)=2.$
Without the boundary conditions, the general solution to this equation is
$y(x) = A \sin(x) + B \cos(x).\,$
From the boundary condition $y(0)=0$ one obtains
$0 = A \cdot 0 + B \cdot 1$
which implies that $B=0.$ From the boundary condition $y(\pi/2)=2$ one finds
$2 = A \cdot 1$
and so $A=2.$ One sees that imposing boundary conditions allowed one to determine a unique solution, which in this case is
$y(x)=2\sin(x). \,$
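As a numerical cross-check of this worked example (my addition), SciPy's boundary value solver recovers the same solution:

```python
# Solve y'' + y = 0 with y(0) = 0 and y(pi/2) = 2 numerically.
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # First-order system: y0' = y1, y1' = -y0.
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the boundary conditions y(0) = 0 and y(pi/2) = 2.
    return np.array([ya[0], yb[0] - 2.0])

x = np.linspace(0, np.pi / 2, 20)
sol = solve_bvp(fun, bc, x, np.zeros((2, x.size)))
print(np.max(np.abs(sol.sol(x)[0] - 2 * np.sin(x))))  # essentially zero
```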
## Types of boundary value problems
(Figure: the boundary value problem for an idealised 2D rod.)
If the boundary gives a value to the normal derivative of the problem then it is a Neumann boundary condition. For example, if there is a heater at one end of an iron rod, then energy would be added at a constant rate but the actual temperature would not be known.
If the boundary gives a value to the problem then it is a Dirichlet boundary condition. For example, if one end of an iron rod is held at absolute zero, then the value of the problem would be known at that point in space.
If the boundary has the form of a curve or surface that gives a value to the normal derivative and the problem itself then it is a Cauchy boundary condition.
Aside from the boundary condition, boundary value problems are also classified according to the type of differential operator involved. For an elliptic operator, one discusses elliptic boundary value problems. For a hyperbolic operator, one discusses hyperbolic boundary value problems. These categories are further subdivided into linear and various nonlinear types.
http://quant.stackexchange.com/questions/tagged/futures+time-series
# Tagged Questions
### How to normalize Futures data(different leverage) for cointegration test?
For example I want to construct 2 time series, one for ES and the other for NQ and test for cointegration. ES one point equal to 50$. NQ one point equal to 20$. If I have the following data: ...
### Is there a standard method for getting a continuous time series from futures data?
I would like to be able to analyse futures prices as one continuous time series, so what kinds of methods exist for combining the prices for the various delivery dates into a single time series? I am ...
http://mathhelpforum.com/advanced-statistics/151700-can-random-variable-donimant.html
1. ## Can a random variable dominate
Given two random variables x and y, and a constant c
What conditions are needed to make:
$Prob( w x + y < c ) \approx Prob( w x < c ), \text{ for } w \rightarrow \infty$
Can anyone help?
I think $E(x) < \infty$ and $E(y) < \infty$ might do. Is this right?
Thanks!!!!
2. (1) I assume, for all $c$?
(2) Do you want one or two tildes?
One tilde means that the limit of the ratio is 1; two means that the limit of the ratio is not infinite, i.e. $O(1)$.
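A heuristic Monte Carlo sketch (my addition, an illustration rather than an answer): for fixed $c$, if $y$ is finite almost surely and the distribution of $x$ has no atom at $0$, then both probabilities tend to $Prob(x<0)$ as $w\rightarrow\infty$, so their difference vanishes; a finite mean for $y$ is not needed for this pointwise statement.

```python
# Monte Carlo: the difference shrinks even for a Cauchy y, which has no finite mean.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)
y = rng.standard_cauchy(size=1_000_000)   # E|y| is infinite
c = 1.0
for w in (1, 10, 100, 1000):
    p1 = np.mean(w * x + y < c)
    p2 = np.mean(w * x < c)
    print(w, p1, p2, abs(p1 - p2))        # the difference shrinks with w
```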
http://mathhelpforum.com/advanced-algebra/36994-very-confused-matrices.html
1. ## Very confused: matrices

STUCK ON PART (III), help!

(i) Find the inverse of $\begin{bmatrix} 4 & 1 & k \\ 3 & 2 & 5 \\ 8 & 5 & 13 \end{bmatrix}$, where $k \neq 5$. DONE.

(ii) Solve the simultaneous equations

4x + y + 7z = 12
3x + 2y + 5z = m
8x + 5y + 13z = 0

giving x, y, z in terms of m. DONE: I got x = -6 - 11m, y = -6 + 2m, z = 6 + 6m; correct, I think.

(iii) Find the value of p for which the simultaneous equations have solutions, and find the general solution in this case:

4x + y + 5z = 12
3x + 2y + 5z = p
8x + 5y + 13z = 0

How does it have solutions?? I thought solutions just required det ≠ 0, and I'm confused as to how I'm supposed to do this part when k ≠ 5 either. Help please!
2. Hello, i_zz_y_ill!
(iii) Find the value of $p$ for which the system has solutions.
. . $\begin{array}{ccc}4x+y+5z&=& 12 \\ 3x+2y+5z&=&p \\ 8x+5y+13z& =& 0 \end{array}$
We have: . $\left| \begin{array}{ccc|c} 4 & 1 & 5 & 12 \\ 3 & 2 & 5 & p \\ 8 & 5 & 13 & 0 \end{array}\right|$
$\begin{array}{c}R_1-R_2 \\ \\ R_3-2R_1\end{array}\left|\begin{array}{ccc|c}1 & \text{-}1 & 0 & 12-p \\ 3 & 2 & 5 & p \\ 0 & 3 & 3 & \text{-}24 \end{array}\right|$
$\begin{array}{c}\\ R_2 - 3R_1 \\ R_3\div3 \end{array}\left|\begin{array}{ccc|c}1 & \text{-}1 & 0 & 12-p \\ 0 & 5 & 5 & 4p-36 \\ 0 & 1 & 1 & \text{-}8 \end{array}\right|$
$\begin{array}{c}\text{Switch} \\R_2\text{ and }R_3\\ \end{array} \left|\begin{array}{ccc|c}1 & \text{-}1 & 0 & 12-p \\ 0 & 1 & 1 & \text{-}8 \\ 0 & 5 & 5 & 4p-36 \end{array}\right|$
$\begin{array}{c}R_1+R_2 \\ \\ R_3-5R_2\end{array} \left|\begin{array}{ccc|c}1 & 0 & 1 & 4-p \\ 0 & 1 & 1 & \text{-}8 \\ 0 & 0 & 0 & 4p+4 \end{array}\right|$
The last row of the matrix is all zeros.
To have a solution, the last term must also be zero. **
. . $4p + 4 \:=\:0\quad\Rightarrow\quad\boxed{ p \:=\:-1}$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
We have: . $\begin{array}{ccc}x + z &=& 5 \\ y + z &=&\text{-}8 \end{array} \quad\Rightarrow\quad \begin{array}{ccc} x &=& 5 - z \\ y &=& \text{-}8-z \\ z&=&z \end{array}$
On the right, replace $z$ with the parameter $t.$
And we have: . $\begin{Bmatrix}x &=& 5-t \\ y &=&\text{-}8-t \\ z &=& t \end{Bmatrix}$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
**
If the last row were something like: . $0\;\;0\;\;0\;\;|\;\;3$
. . the system would have no solution . . . remember?
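A symbolic cross-check of this reduction (a sketch I am adding, using SymPy): the coefficient matrix is singular, pairing a left null vector with the right-hand side forces $p=-1$ (the same condition as $4p+4=0$), and for $p=-1$ the solver returns the same one-parameter family.

```python
import sympy as sp

x, y, z, p = sp.symbols('x y z p')
A = sp.Matrix([[4, 1, 5], [3, 2, 5], [8, 5, 13]])
b = sp.Matrix([12, p, 0])

print(A.det())                       # 0: solvability must depend on p
n = A.T.nullspace()[0]               # a left null vector of A
print(sp.solve(n.dot(b), p))         # [-1], i.e. p = -1

eqs = list(A * sp.Matrix([x, y, z]) - b.subs(p, -1))
print(sp.solve(eqs, [x, y, z]))      # {x: 5 - z, y: -z - 8}, z free
```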
http://physics.stackexchange.com/questions/36211/proof-of-pauli-group-preservation-by-clifford-group-conjugation/36254
# Proof of Pauli group preservation by Clifford group conjugation?
A well know result is that Clifford group preserve the Pauli group under conjugation or, in other words:
• $C(P_{1} \otimes P_{2})C^{\dagger} = P_{3} \otimes P_{4}$, with $C \in$ Clifford group and $P_{n} \in$ Pauli group.
How can we prove this?
Thanks...
Can you say some more about what these groups are? – user404153 Sep 12 '12 at 4:22
Also, it seems like a relatively trivial problem of multiplying finitely many matrices, but please define these to get a proper answer. You should know that "preserving a group" usually means preserving the algebraic relations, and then all conjugations do that. In this case, I assume you mean that multiplying any product of $\sigma_x,\sigma_y,\sigma_z$ by some finite set of matrices and their inverses keeps you in the set of these matrices and their products – Ron Maimon Sep 12 '12 at 6:33
## 1 Answer
Usually the Clifford group is defined to be the group of unitaries that preserve the Pauli group under conjugation, so no proof is needed.
If instead you are asking, how can we prove that a certain unitary (such as the controlled-NOT) is in the Clifford group, the usual straightforward way to do this is just to calculate. Conjugation is a group homomorphism, so it is sufficient to check a generating set of the Pauli group. For instance, single-qubit X and Z operators are enough, so in the 2-qubit case, you should check the action of conjugation for X_1, X_2, Z_1, and Z_2.
See quant-ph/9807006 for more about the Clifford group.
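Here is the suggested calculation done numerically for the controlled-NOT (a sketch I am adding, not part of the answer): conjugate each generator by CNOT and match the result against a Pauli string up to a phase in {1, -1, i, -i}.

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

paulis = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def as_pauli_string(M):
    """Return (phase, label) if M equals a phase times a 2-qubit Pauli string."""
    for (a, P), (b, Q) in itertools.product(paulis.items(), repeat=2):
        for phase in (1, -1, 1j, -1j):
            if np.allclose(M, phase * np.kron(P, Q)):
                return phase, a + b
    return None

for name, G in [('X1', np.kron(X, I2)), ('Z1', np.kron(Z, I2)),
                ('X2', np.kron(I2, X)), ('Z2', np.kron(I2, Z))]:
    print(name, '->', as_pauli_string(CNOT @ G @ CNOT.conj().T))
# Output: X1 -> XX, Z1 -> ZI, X2 -> IX, Z2 -> ZZ (all with phase 1),
# so CNOT maps the Pauli group to itself and hence lies in the Clifford group.
```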
Hi, thanks by your answer. My question is really about the proof this definition, my intention is know the technique applied in this proof to study the construction of groups that show same structure (preservation about conjugation). – user901366 Sep 25 '12 at 17:57
I don't understand what you are asking. How does one prove a definition? There are some standard generalizations to qudit Clifford groups, which share many of the properties of the qubit version. The general group theory construction of preserving a subgroup under conjugation is called the "normalizer". – Daniel Gottesman Sep 27 '12 at 14:49
Ok, now I understand, my question is really about finding group normalizers. – user901366 Sep 28 '12 at 13:46
http://www.physicsforums.com/showthread.php?p=1580720
## Surface Integral
As you know, surface integrals are integrated with respect to $dS$. We then transform the integral into one in $dx\,dy$. Is this the end of the problem, or must we calculate it for $dx\,dz$ and $dy\,dz$ as well, and if so do we add up all the results at the end!?
You can do it several ways. If you have some arbitrary surface, the trick is to project the surface to some simpler surface, for example the x-y-plane. With the projection we have a simpler integral in which we use $dA$, $dA$ being the infinitesimal surface element on our simpler surface; for example, on the x-y-plane the area element is $dx\,dy$ (it could be $r\,dr\,d\phi$ if we used polar coordinates). You asked whether we calculate it for $dx\,dz$ or $dy\,dz$; the answer is: you have to use the plane on which the surface is projected. If we have some surface $f(x,y)$, we project it on the x-y-plane, and this is almost always the case. So we have to evaluate only one integral, in this case one with $dx\,dy$.
Thanks for the reply. Just to clarify: if asked to evaluate $\iint f(x,y)\,dS$, we just solve in the x-y-plane? Thanks again...
Yes. Solve in the x-y-plane. But the most important thing to remember is the projection! Let $da$ be the surface element on the surface $z=f(x,y)$ and $dA$ a surface element on the x-y-plane. Then we have the relation $$da = dA \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 +1}$$ (which follows from the cosine of the angle between the surface normal and the x-y-plane normal). So when you're doing the surface integral you get $$\int f(x,y)\,da = \int f(x,y) \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 +1}\, dA.$$ For just the surface area you have a similar formula; you simply compute $\int da$, and so on.
More generally, one can have a surface in terms of any 2 parameters. If $x= x(u,v)$, $y= y(u,v)$, $z= z(u,v)$, then we can write the "position vector" of any point on the surface as $x(u,v)\vec{i}+ y(u,v)\vec{j}+ z(u,v)\vec{k}$. The two derivatives $\vec{r}_u= x_u\vec{i}+ y_u\vec{j}+ z_u\vec{k}$ and $\vec{r}_v= x_v\vec{i}+ y_v\vec{j}+ z_v\vec{k}$ lie in the tangent plane and their lengths are the differentials of length in that direction. Their cross product, $\vec{r}_u\times\vec{r}_v$, is called the "fundamental vector product" and its length, times $du\,dv$, is the differential of surface area. In particular, if $z= f(x,y)$, this gives exactly what JukkaVayrynen said.
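A numerical illustration of the projection formula (my addition): the area of the paraboloid $z=x^2+y^2$ over the unit disk, computed as $\iint\sqrt{f_x^2+f_y^2+1}\,dx\,dy$, matches the closed form $\frac{\pi}{6}(5^{3/2}-1)$ obtained in polar coordinates.

```python
import numpy as np
from scipy.integrate import dblquad

integrand = lambda y, x: np.sqrt(4 * x**2 + 4 * y**2 + 1)  # f_x = 2x, f_y = 2y
area, _ = dblquad(integrand, -1, 1,
                  lambda x: -np.sqrt(1 - x**2),
                  lambda x: np.sqrt(1 - x**2))
print(area, np.pi / 6 * (5**1.5 - 1))  # the two values agree
```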
http://mathhelpforum.com/discrete-math/142272-two-ordinal-problems-print.html
# Two ordinal problems
• April 30th 2010, 04:47 AM
Ester
Two ordinal problems
I have this kind of problems:
1) Let $A$ be a set and let $\alpha = \{ \beta \mid \beta \mbox{ is ordinal and } \beta \preceq A \}$.
Show that a) $\alpha$ is a cardinal number, b) $card(A) < \alpha$
2) Let A and $\alpha$ be the same as above. Show that $\alpha$ is the smallest cardinal, which is greater than $card(A)$.
I have no idea how to solve these problems. Any help would be nice.. Thanks!
• April 30th 2010, 08:20 AM
MoeBlee
Let's do one at a time.
a = {b | b is an ordinal and b is dominated by A}.
Show a is a cardinal.
'a is a cardinal'.
Spell that definition out, then work to show that a meets all the requirements in the definition.
Show me what you get doing that.
• May 2nd 2010, 02:20 AM
Ester
Quote:
Originally Posted by MoeBlee
Let's do one at a time.
a = {b | b is an ordinal and b is dominated by A}.
Show a is a cardinal.
'a is a cardinal'.
Spell that definition out, then work to show that a meets all the requirements in the definition.
Show me what you get doing that.
I have this kind of definition for cardinals:
Let $A$ be a set. Then $card(A) =$ "the smallest ordinal $\alpha$ such that $A \approx \alpha$".
So, does this mean I have to find a set $B$ for which $\alpha$ is the smallest such ordinal? In this case, is my set $B = \{\beta \mid \beta \mbox{ is ordinal and }\beta \preceq A \}$?
• May 2nd 2010, 10:17 AM
MoeBlee
Your definition of 'card' is fine.
card(A) = the least ordinal a such that A is equinumerous with a
But then we need to define the predicate 'a is a cardinal'.
So, we define:
a is a cardinal <-> there exists an A such that a = card(A)
And we derive:
a is a cardinal <-> (a is an ordinal & a is not equinumerous with any ordinal less than a)
So now you have to prove:
(1) {b | b is an ordinal & b is dominated by A} is an ordinal
(2) {b | b is an ordinal & b is dominated by A} is not equinumerous with any lesser ordinal
For this, of course, you need to refer to your definition of 'ordinal'.
Let me see what you come up with, and I'll help you if you get stuck.
• May 2nd 2010, 10:21 AM
MoeBlee
Quote:
Originally Posted by Ester
So, does this mean I have to find a set $B$ for which $\alpha$ is the smallest such ordinal?
In this sense you have to find a set X such that a is equinumerous with X and no ordinal less than a is equinumerous with X, but also you still have to show that a is an ordinal.
But it's just as easy to say you need to prove a is an ordinal and that a is not equinumerous with any ordinal less than a (since then the set X can be taken to be a itself).
• May 3rd 2010, 04:08 AM
Ester
Quote:
Originally Posted by MoeBlee
Your definition of 'card' is fine.
card(A) = the least ordinal a such that A equinumerous with a
But then we need to define the predicate 'a is a cardinal'.
So, we define:
a is a cardinal <-> there exists an A such that a = card(A)
And we derive:
a is a cardinal <-> (a is an ordinal & a is not equinumerous with any ordinal less than a)
So now you have to prove:
(1) {b | b is an ordinal & b is dominated by A} is an ordinal
(2) {b | b is an ordinal & b is dominated by A} is not equinumerous with any lesser ordinal
1) Let's choose an arbitrary $\beta \in \alpha$ and assume that $\gamma \in \beta$. $\beta \preceq A$, so there exists an injection $f:\beta \rightarrow A$. $\gamma \in \beta$, so $\gamma < \beta$ and hence $\gamma \subseteq \beta$. Therefore the restriction $g = f|_{\gamma}:\gamma \rightarrow A$ is also an injection. That means that $\gamma \preceq A$, so $\gamma \in \alpha$. $\alpha$ is a set of ordinals and it is transitive, so $\alpha$ is an ordinal.
2) I have only ideas for this. I thought, I could begin like this:
Let's choose arbitrary $\beta < \alpha$. That means $\beta \in \alpha$, so $\beta \preceq A$.
Counterexample: $\beta \approx \alpha$.
I can't get any further here...
• May 3rd 2010, 06:43 AM
MoeBlee
Quote:
Originally Posted by Ester
1) Let's choose an arbitrary $\beta \in \alpha$ and assume that $\gamma \in \beta$. $\beta \preceq A$, so there exists an injection $f:\beta \rightarrow A$. $\gamma \in \beta$, so $\gamma < \beta$ and hence $\gamma \subseteq \beta$. Therefore the restriction $g = f|_{\gamma}:\gamma \rightarrow A$ is also an injection. That means that $\gamma \preceq A$, so $\gamma \in \alpha$. $\alpha$ is a set of ordinals and it is transitive, so $\alpha$ is an ordinal.
Very good.
I would put it this way:
Suppose g in b in a. So there is an injection from b into A. Also, the identity function on g is an injection from g into b. So, by composition of functions, we have an injection from g into A. And g is an ordinal. So g in a. So a is epsilon-transitive. So, since a is an epsilon-transitive set of ordinals, we have that a is an ordinal.
Quote:
Originally Posted by Ester
2) I have only ideas for this. I thought, I could begin like this:
Let's choose arbitrary $\beta < \alpha$. That means $\beta \in \alpha$, so $\beta \preceq A$.
Counterexample: $\beta \approx \alpha$.
You've almost got it (except, I don't see why you use the word 'counterexample' there).
These are the relevant items now:
1. b is equinumerous with a
2. b is dominated by A
3. a is an ordinal
4. I don't want to tell you this one (but you know it) since telling you would give away the answer.
5. Another simple fact I don't want to give away.
So now just draw a contradiction from 1-5. Let me know what you come up with ...
• May 3rd 2010, 09:39 AM
Ester
Quote:
Originally Posted by MoeBlee
You've almost got it (except, I don't see why you use the word 'counterexample' there).
Sorry, English is not my native language. :) I think the word I was thinking of was counterassumption (?).
Quote:
Originally Posted by MoeBlee
These are the relevant items now:
1. b is equinumerious with a
2. b is dominated by A
3. a is an ordinal
4. I don't want to tell you this one (but you know it) since telling you would give away the answer.
5. Another simple fact I don't want to give away.
So now just draw a contradiction from 1-5. Let me know what you come up with ...
This is what I came up with:
$\beta \approx \alpha$ and $\beta \preceq A$, so $\alpha \preceq A$. $\alpha$ is an ordinal, so $\alpha \in \{\beta \mid \beta \mbox{ is an ordinal and } \beta \preceq A\} = \alpha$. That means that $\alpha \in \alpha$, which is a contradiction. Therefore the counterassumption is false and the original statement is true.
• May 3rd 2010, 09:45 AM
MoeBlee
Quote:
Originally Posted by Ester
$\beta \approx \alpha$ and $\beta \preceq A$, so $\alpha \preceq A$. $\alpha$ is an ordinal, so $\alpha \in \{\beta \mid \beta \mbox{ is an ordinal and } \beta \preceq A\} = \alpha$. That means that $\alpha \in \alpha$, which is a contradiction. Therefore the counterassumption is false and the original statement is true.
Perfect. Now, let's see how far you can get with the next part of the exercise you posted.
• May 4th 2010, 05:31 AM
Ester
So, I have to show that $card(A) < \alpha$.
Using the definition of cardinals, I get that card(A) is an ordinal and $card(A) \approx A$, so $card(A) \preceq A$. That means that $card(A) \in \{\beta \mid \beta \mbox{ is ordinal and } \beta \preceq A\} = \alpha$. $card(A) \in \alpha$, so $card(A) < \alpha$.
I was also thinking the second problem which I have (Show that $\alpha$ is the smallest cardinal which is greater than card(A)), and this is what I came up with:
Counterassumption: There exists a cardinal $\kappa$ such that $card(A) < \kappa < \alpha$. We have $\kappa < \alpha$, which means $\kappa \in \alpha$. So $\kappa$ is an ordinal and $\kappa \preceq A$. $card(A) < \kappa$, so $A \prec \kappa$.
Now we have $A \prec \kappa \preceq A$, which leads to $A \prec A$. This is a contradiction.
So the original statement is true.
Is this right?
• May 4th 2010, 08:13 AM
MoeBlee
Quote:
Originally Posted by Ester
Using the definition of cardinals, I get that card(A) is an ordinal and $card(A) \approx A$, so $card(A) \preceq A$. That means that $card(A) \in \{\beta \mid \beta \mbox{ is ordinal and } \beta \preceq A\} = \alpha$. $card(A) \in \alpha$, so $card(A) < \alpha$.
Very good. A couple of notes:
(1) We have that card(A) is equinumerous with A by virtue of the numeration theorem (derived from the axiom of choice and the axiom schema of replacement). (Though, by the Fregean method, this particular proof will work even without numeration theorem.)
(2) Just remember that card(A) in a implies that card(A) is cardinal-less-than a by the fact that both card(A) and a are cardinals (as opposed to merely ordinals, since for certain ordinals we have k in j and k equinumerous with j).
Here's another version (basically the same as yours):
Suppose it is not the case that card(A) < a. So a is less than or equal card(A). (This does not require the axiom of choice, since we're not relying on domination trichotomy holding among all sets but rather only that cardinal less-than trichotomy holds among all cardinals.) So a is dominated by A. So a in a, which is a contradiction, since a is an ordinal (also contradicts axiom of regularity).
Quote:
Originally Posted by Ester
Counterassumption: There exists a cardinal $\kappa$ such that $card(A) < \kappa < \alpha$. We have $\kappa < \alpha$, which means $\kappa \in \alpha$. So $\kappa$ is an ordinal and $\kappa \preceq A$. $card(A) < \kappa$, so $A \prec \kappa$.
Now we have $A \prec \kappa \preceq A$, which leads to $A \prec A$. This is a contradiction.
Very good. Note that the numeration theorem was used to infer "A is strictly dominated by k" from "card(A) is cardinal-less-than k".
http://math.stackexchange.com/questions/13124/ext-complexes/13130
# Ext & Complexes
I have heard that given two sheaves $A$ and $B$ on a variety, one can identify elements of $Ext^d(A,B)$ with complexes of sheaves $$0\to B \to C_1 \to \cdots \to C_d \to A \to 0.$$
My questions are,
How do I see that this is true?
and
If I have obtained an element of $Ext^n$ by some other method, can I explicitly construct the $C_j$ sheaves and the differentials?
I am sure this is well-known, so I'm marking it also as "reference-request".
Are you familiar with how to do this in the setting without sheaves? For example $Ext$ of $R$-modules and how to get extensions from cocycles? – Sean Tilson Dec 5 '10 at 18:34
I've never gone through the general case, but I found that working out the special case of exact sequences $0 \to S \to E \to Q \to 0$ of vector bundles (i.e. of $Ext^1(S,Q)$) gives a pretty good idea of what's going on (it also makes you never want to check the details in the general case). – Gunnar Magnusson Dec 5 '10 at 18:43
@Gunnar, to get a feeling for the general case, you need to do at least $\mathrm{Ext}^2$. – Mariano Suárez-Alvarez♦ Dec 9 '10 at 1:51
## 2 Answers
For modules, Weibel discusses this in "Introduction to homological algebra." Section 3.4 deals with the d=1 case and Vista 3.4.6 is about the general case. He gives no proof for d>1 and refers to Bourbaki "Algèbre homologique" 7.5 and Mac Lane "Homology" pp. 82-87.
In Vista 3.4.6, Weibel says "... the set of equivalence classes ... (if this is indeed a set)", and then claims that $Ext^1(A,B)$ is an abelian group. How can $Ext^1$ not be a set, and still be an abelian group? – James Davidoff Dec 5 '10 at 19:13
I think (I could be wrong) that the point of that vista is that the Baer sum on representatives of equivalence classes of extensions is always well-defined in an abelian category, and always such that modulo the equivalence it is associative, has an identity, every element has an inverse and is commutative. If the collection of equivalence classes formed as set, we would have an abelian group. If the collection of equivalence classes did not form a set, but rather a class, then we would have an Abelian Group, i.e. a class with a group structure (another example: the Field of surreal numbers). – Vladimir Sotirov Dec 5 '10 at 22:13
http://math.stackexchange.com/questions/297941/limits-in-c0-1
# Limits in $C[0,1]$
Let $(x_i)$ be a sequence of numbers in $(0,1)$ such that $\lim_{n \to \infty} \left( \frac{1}{n} \right) \sum_{i = 1}^n x_i^k$ exists for all integers $k \geq 0$. Show that $\lim_{n \to \infty} \left( \frac{1}{n} \right) \sum_{i = 1}^n f(x_i)$ exists for all functions $f \in C[0,1]$.
I was wondering if I could get a hint?
## 1 Answer
• Show that $\lim_{n\to \infty}\frac 1n\sum_{i=1}^nP(x_i)$ exists for any polynomial $P$.
• Conclude by a well known theorem about approximation of continuous functions on a compact subset of the real line.
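A numerical sketch of how the two bullets combine (my addition; the sequence $x_i=\{i\sqrt 2\}$ is an arbitrary example whose moment averages converge, by equidistribution). Approximating $f$ uniformly by a polynomial $P$ gives $\left|\frac 1n\sum f(x_i)-\frac 1n\sum P(x_i)\right|\leq\sup|f-P|$, so the averages of $f(x_i)$ converge too.

```python
import numpy as np

f = np.exp                                     # any f in C[0,1]
x = np.mod(np.arange(1, 200001) * np.sqrt(2), 1.0)

# Near-uniform polynomial approximation of f on [0,1] (Chebyshev interpolation).
P = np.polynomial.chebyshev.Chebyshev.interpolate(f, 12, domain=[0, 1])
print(np.max(np.abs(f(x) - P(x))))             # sup-norm error on the sample is tiny

for n in (10**3, 10**4, 10**5, 2 * 10**5):
    print(n, f(x[:n]).mean(), P(x[:n]).mean()) # both averages stabilize near e - 1
```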
http://mathoverflow.net/revisions/107877/list
# Characterization of Stone-Cech compactifications
Suppose I have an infinite discrete topological space $X$ of cardinality $\kappa$. Then I know some things about the Stone-Cech compactification, $\beta X$: it is Hausdorff and compact but not sequentially compact, has a basis of clopen sets, etc. My question is the following: is there a "nice" characterization of the spaces $Y$ which are homeomorphic to the Stone-Cech compactification of a discrete space? Certainly, the term "nice" is vague; I have in mind characterizations only using terms from a standard text on point-set topology, but I would consider as an answer to this question really any nontrivial characterization of Stone-Cech compactifications.
I am particularly interested in nice characterizations that require some set theory, such as "assuming $V=L$, $Y$ is homeomorphic to $\beta X$ for some discrete $X$ iff $Y$ is compact, not sequentially compact, and has a basis of clopen sets" (although I'm certain that statement is extremely false), and I would especially like to know whether there are two incompatible strong set-theoretic assumptions which yield distinct nice characterizations. The only relevant result I know is along these lines: in 1963, Parovicenko showed that assuming CH, the only Parovicenko space (which has a long but elementary definition*) is $\beta\mathbb{N}-\mathbb{N}$; this can be molded into a characterization of $\beta\mathbb{N}$, assuming CH, but says nothing about whether a space is the Stone-Cech compactification of a discrete space of uncountable cardinality. In 1978, van Douwen and van Mill showed that CH was necessary. One more concrete sub-question I have, then, is:
Does Parovicenko's result generalize in some way to characterize Stone-Cech compactifications of larger discrete spaces? If so, how much set theory is needed - is GCH enough?
(One very tempting way to try to rephrase Parovicenko's result is to define "$\kappa$-Parovicenko space" by taking the definition of Parovicenko space and replacing the "weight $c$" condition with "weight $2^\kappa$," and then claiming that - assuming GCH - every $\kappa$-Parovicenko space is homeomorphic to $\beta X-X$ for a discrete space $X$ of cardinality $\kappa$. However, I see absolutely no reason to believe this. A sub-subquestion: is this statement obviously false?)
*For completeness, a Parovicenko space is a topological space which is compact and Hausdorff, has no isolated points, has no nonempty $G_\delta$ set with empty interior, has no two disjoint $F_\sigma$ sets with non-disjoint closures, and has weight $c=2^{\aleph_0}$ - that is, every basis has cardinality $\ge c$, and there is some basis with cardinality $c$.
http://mathhelpforum.com/math-challenge-problems/73591-old-four-numbers-equal-24-problem.html
# Thread:
1. ## The old "Four numbers equal 24" problem
Any time I see a question like this: http://www.mathhelpforum.com/math-he...make-24-a.html I wonder:
Of the $9^4$ possibilities of digits, how many can be solved by the four simple operations, how many can be solved if we allow other symbols, and how many are unsolvable?
I can't come up with any way to google an answer, though I'm sure someone must have done this before. Any interest?
I'm confident that (1, 1, 1, 1) cannot be made into 24, regardless of operations.
Anyone want to join in to help with the other 6560 possibilities?
2. ## 1,1,1,1
$(1+1+1+1)! = 24$ if factorials can be used.
3. Originally Posted by Henderson
Any time I see a question like this: http://www.mathhelpforum.com/math-he...make-24-a.html I wonder:
Of the $9^4$ possibilities of digits, how many can be solved by the four simple operations, how many can be solved if we allow other symbols, and how many are unsolvable?
I can't come up with any way to google an answer, though I'm sure someone must have done this before. Any interest?
I'm confident that (1, 1, 1, 1) cannot be made into 24, regardless of operations.
Anyone want to join in to help with the other 6560 possibilities?
I've done this, limiting the operations to the four standard +, -, *, /. I do not have the results where I can get to them easily, and I'm not even sure I saved the files.
However, I would like to point out another use for this type of analysis. By counting the number of solutions for each set of 4 numbers you can rank the problems by difficulty. A problem with many solutions is easier than one with few.
There are also a couple of fine points to consider: Do you allow intermediate steps which result in negative numbers? And how about fractions: must each step yield an integer result? My understanding, based on hearsay, is that these problems ("X24") are used to drill arithmetic in elementary schools, and depending on how much math the kids know, problems involving fractions or negative numbers might be considered too hard.
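For anyone who wants to experiment, here is a quick brute-force sketch (my own, in Python, not a known reference implementation). It works over exact fractions, so intermediate negative and fractional values are allowed, which speaks directly to the fine points above; note it counts the 495 multisets of four digits rather than the $9^4$ ordered tuples.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def reachable(vals):
    """All values obtainable from the tuple `vals` using +, -, *, /."""
    if len(vals) == 1:
        return {vals[0]}
    out = set()
    for i in range(len(vals)):
        for j in range(len(vals)):
            if i == j:
                continue
            rest = tuple(v for k, v in enumerate(vals) if k not in (i, j))
            a, b = vals[i], vals[j]
            for r in [a + b, a - b, a * b] + ([a / b] if b != 0 else []):
                out |= reachable(rest + (r,))
    return out

def makes_24(digits):
    return Fraction(24) in reachable(tuple(Fraction(d) for d in digits))

# count solvable multisets of four digits 1..9 (brute force; takes a little while)
solvable = [c for c in combinations_with_replacement(range(1, 10), 4) if makes_24(c)]
print(len(solvable), "of 495 multisets are solvable")
print(makes_24((1, 1, 1, 1)))   # False: (1,1,1,1) really is hopeless with +, -, *, /
```

Counting how many distinct expressions hit 24 for each multiset would also give the difficulty ranking mentioned above.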
http://mathoverflow.net/questions/115980/projection-formula-for-immersions
## Projection formula for immersions
Let $i : Y \to X$ be a quasi-compact immersion of schemes and let $M$ be a quasi-coherent sheaf on $X$. There is a canonical homomorphism
`$M \otimes i_* \mathcal{O}_Y \to i_* i^* M.$`
Question: Is it always an isomorphism?
Clearly this question is local on $X$. The class of $M$ satisfying the condition is closed under finite direct sums and contains $\mathcal{O}_X$. It follows that it contains all sheaves which are locally free of finite rank.
It is true in general if $i$ is an affine morphism (for example, when $i$ is a closed immersion). So what happens for open immersions?
-
Set $X = \mathbb A^2_k$, `$Y = X \smallsetminus \{(0,0)\}$`, and suppose that $M$ is non-zero and supported at the origin. – Angelo Dec 10 at 15:16
Thank you Angelo. Please add this as an answer, then I will accept it. – Martin Brandenburg Dec 11 at 6:56
Martin, how do you prove this for affine morphisms? Wouldn't $X=\mathbb A^1$, `$Y=X\setminus\{0\}$`, $M$ supported on $X\setminus Y$ give a counter-example? – Sándor Kovács Dec 11 at 18:06
isn't a good approach to take everything to be derived (where the formula always holds) and then see for which class of morphisms derived = underived? – Jacob Bell Dec 11 at 21:09
I don't think that the corresponding derived formula holds either. – Angelo Dec 12 at 7:44
## 1 Answer
Set $X = \mathbb A^2_k$, `$Y = X \smallsetminus \{(0,0)\}$`, and suppose that $M$ is non-zero and supported at the origin.
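(To spell out why this works: here $i_*\mathcal{O}_Y = \mathcal{O}_X$, because a regular function on the complement of a point of the plane extends across that codimension-2 locus, so the left-hand side is $M \otimes \mathcal{O}_X = M \neq 0$; while $i^*M = 0$ since $M$ is supported at the origin, so the right-hand side $i_*i^*M$ vanishes.)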
-
http://physics.stackexchange.com/questions/16506/seeking-a-specific-quantum-spin-system-of-interacting-spin-1-2-particles?answertab=oldest
# Seeking a specific quantum spin system of interacting spin 1/2 particles
Is there a system of interacting quantum spin 1/2 particles (of any topology) such that the states where all spins are up or down are eigenstates of its Hamiltonian, and yet the total spin polarization (in the z-direction) is not conserved?
-
## 1 Answer
There are plenty; the condition is very weak. For a simple example, with N spins 1/2, all spins up and all spins down both have total angular momentum N/2. You can make a Hamiltonian be zero on the N/2 total angular momentum states, so that all of them are eigenstates, while it can be anything at all on the states of lower total angular momentum, without violating your condition. So, for example, letting $\vec\sigma_j$ be the vector of Pauli matrices for the j-th spin, and
$$P = \frac{N}{2}\left(\frac{N}{2}+1\right) - \Big|\sum_j \tfrac{1}{2}\vec\sigma_j\Big|^2$$
be the operator which vanishes exactly on the total angular momentum N/2 states (it is not an orthogonal projector, but it is zero precisely on that multiplet, which is all that is needed), then
$$H = P A P$$
will work, where A is any Hermitian operator at all (except for specially chosen ones). The projection on both sides guarantees that the N/2 total angular momentum states are all eigenvectors with eigenvalue zero.
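For concreteness, here is a minimal numerical sanity check of this global construction (my own sketch, assuming Python with numpy; it builds the annihilating operator by diagonalizing $S^2$ exactly rather than using the formula for $P$ above):

```python
import numpy as np

# single-site Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, n):
    """Embed a one-site operator at site i of an n-spin chain via Kronecker products."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

n = 4
Sx = sum(site_op(sx, i, n) for i in range(n)) / 2
Sy = sum(site_op(sy, i, n) for i in range(n)) / 2
Sz = sum(site_op(sz, i, n) for i in range(n)) / 2
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz          # total spin squared

# projector onto the complement of the maximal multiplet (S^2 eigenvalue j(j+1), j = n/2)
evals, evecs = np.linalg.eigh(S2)
low = evecs[:, np.abs(evals - (n / 2) * (n / 2 + 1)) > 1e-9]
P = low @ low.conj().T

# H = P A P for a random Hermitian A
rng = np.random.default_rng(0)
A = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
H = P @ (A + A.conj().T) @ P

up, down = np.zeros(2**n), np.zeros(2**n)
up[0], down[-1] = 1, 1                    # |++++> and |---->
print(np.allclose(H @ up, 0), np.allclose(H @ down, 0))  # True True: both are eigenstates
print(np.allclose(H @ Sz, Sz @ H))                       # False: total S^z is not conserved
```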
But I assume you are more interested in a local spin-coupling. The construction above is global: it requires you to decompose the total angular momentum. In this case, it's still easy to do. Make a matrix element which flips spins only when they are different. It will take adjacent |-+-> to |+-+> and adjacent |+-+> to |-+->. Neither of these operations conserves the total z spin.
The Pauli matrices suffice to expand any operator. In Pauli form $(\sigma^z_i + 1)$, and $(\sigma^z_i-1)$ project out the upper and lower component of a spin, and $\sigma^x_i$ flips a spin. So the Hamiltonian above is:
$$H = \sum_i \sigma^x_{i-1}\sigma^x_{i}\sigma^x_{i+1}\left( (\sigma^z_{i-1} +1)(\sigma^z_i-1)(\sigma^z_{i+1} +1) + (\sigma^z_{i-1} -1)(\sigma^z_i +1)(\sigma^z_{i+1} -1) \right)$$
Which is Hermitian (the second term in the sum gives the Hermitian conjugate of the first when multiplied by all the sigma-x's).
The reason you were having trouble is probably because you wanted a two-spin nearest neighbor coupling. To preserve all up and all down, you can't change aligned-spins. This means that the only nonzero matrix elements of H are between antialigned spins, so you only have matrix elements between |+-> and |-+> at every site, and each of these two allowed transitions preserves total spin.
So only for the special case of two-spin interactions, the condition of having all up and all down preserved implies that the total z spin is conserved.
-
Thanks Ron Maimon. Are there physically realizable Hamiltonians like the Ising, or Heisenberg systems with long-range interactions? – physicist Nov 3 '11 at 9:14
The long-range interactions might be used to model surface states, although I don't know cases of surface-localized spins with dynamics that are studied (most likely just out of ignorance, somebody please correct me). If you have spin electrons on the surface interacting through a bulk, the effective Hamiltonian can be nonlocal. There are also direct nonlocal forces in materials like dielectrics with charged impurities, and you can possibly mock up two-state systems in those, like a floppy protein crystal with a charged group in two symmetric conformations. Other people know better than me. – Ron Maimon Nov 4 '11 at 7:37
http://mathoverflow.net/questions/77425/failures-that-lead-eventually-to-new-mathematics/77486
## Failures that lead eventually to new mathematics [closed]
Possible Duplicate:
Most interesting mathematics mistake?
In the 25-centuries old history of Mathematics, there have been landmark points when a famous mathematician claimed to have proven a fundamental statement, then the proof turned out to be false or incomplete, and finally the statement became a nice problem playing an important role in the development of Mathematics. Two immediate examples come to me:
• Fermat's last theorem,
• the existence of a minimizer of the Dirichlet integral in calculus of variations, which led Weierstrass to introduce the notion of compactness.
This must have happened in almost all branches of Mathematics.
What are the best examples of such an evolution? How did they influence our every mathematical life?
-
Pretty much every mathematical theory starts out as a mess full of mistakes and inaccuracies... – darij grinberg Oct 7 2011 at 6:19
A similar question: mathoverflow.net/questions/879/… – Margaret Friedland Oct 7 2011 at 13:58
@Margaret: right! I apologize. I should like to delete my question, but I can't, due to the answers that have been voted. – Denis Serre Oct 7 2011 at 15:02
Well, I guess then, in that case we could vote to close it, if you have nothing against it? – S. Sra Oct 7 2011 at 15:50
@Dennis: you got some new answers, so it was worth asking. – Margaret Friedland Oct 7 2011 at 17:05
## 8 Answers
Lebesgue claimed that the projection of a Borel subset of the plane is a Borel set itself, an erroneous assertion that led the then 23 year old Mikhail Souslin to the definition of an analytic set (i.e. sets that are projections of closed sets). Before his untimely death in 1919 at the age of 25 he was able to prove that a set $A$ is Borel iff $A$ and its complement are both analytic- a discovery that initiated the field of descriptive set theory.
-
A nice example of a mistake that caused a big setback to its field when discovered, and led to a big upsurge when it was finally fixed, was Dehn's lemma in 3-manifold topology.
Dehn used the lemma in 1910, believing he had proved it, and for a long time it was thought to have established a simple connection between knots and the fundamental group.
In 1929, Helmuth Kneser discovered a mistake in Dehn's proof, which wrecked his plan to write a book on 3-manifolds based on Dehn's lemma, and probably caused him to change fields to several complex variables.
The field of 3-manifolds did not become very active again until Papakyriakopoulos finally proved Dehn's lemma in 1957.
-
An important moment for chaos theory and dynamical systems was the discovery by Phragmén that there was a problem with the convergence of a series in Poincaré's original submission to a competition organised as part of the 60th-anniversary celebration of the birth of Oscar II, King of Sweden and Norway. The rewritten paper is seminal. The story is well told by June Barrow-Green in Poincaré and the three body problem (1997).
-
I had this one in mind, but my souvenir was too vague and I could not be specific. Thanks! – Denis Serre Oct 7 2011 at 7:49
Another thing that comes to mind (in connection with FLT) is unique factorization in the ring of integers of a number field; assuming it always holds was the basic mistake in Cauchy's and Lamé's proofs of FLT given to the French academy. It motivated a large development in algebraic number theory (the definition of the class number and so on) by Kummer, Dedekind and others, and it was later generalized in the field of commutative algebra.
-
The following question seems somewhat relevant mathoverflow.net/questions/34806/… – quid Oct 7 2011 at 12:47
Dulac in 1923 claimed to have proven
Any real planar differential equation $$\frac{dx}{dt} = Q(x,y) \qquad \frac{dy}{dt} = P(x,y)$$ where $P,Q$ are polynomials with real coefficients, has a finite number of limit cycles.
His proof turned out to have a large hole. It took until the 1980s and the (independent) work of Écalle on resummation and the Borel-Laplace transform, and the work of Ilyashenko on analytic continuation in the complex plane of the Poincaré first real return map associated to polycycles. Both of these strands of work are phenomenal achievements, and bore fruit for quite some time (maybe still do, but I stopped following this area some years back).
-
In some sense it was a failure of a certain diagram to commute that led to the Symmetric Spectra, S-modules, and other modern theories of spectra. Since these concepts underlie a lot of modern stable homotopy theory, everyone knows some version of this story. The category of spectra (not symmetric or S-algebras) goes back to Lima's 1959 paper The Spanier-Whitehead Duality in New Homotopy Categories, and is a natural construction if you want to do stable homotopy theory. Inverting the stable homotopy equivalences we get the stable homotopy category.
The "failure" I promised above is the failure of this category of spectra to have a symmetric monoidal structure. Such a structure was desired as a way to do more algebra in this setting (without it you have no hope of ring objects or modules over them). The diagram I mentioned which failed to commute was the diagram arising from the smash product on spaces, which the move to spectra did not preserve. For about 40 years it was thought that you could not have a symmetric monoidal category on spectra. See for instance the Lewis's 1991 paper Is there a convenient category of spectra? which shows that you can't have all the properties you want on such a category and also have it be symmetric monoidal. Thankfully, you can get enough of the properties you want and also get it to be symmetric monoidal. This was shown at the same time by two different teams of mathematicians:
• Elmendorf, Kriz, Mandell, and May created the category of $S$-modules
• Hovey, Shipley, and Smith created the category of symmetric spectra
Both are symmetric monoidal categories of spectra, have (different) desirable homotopy-theoretic properties, and both give the stable homotopy category when you invert weak equivalences. It turns out both approaches are equivalent in an even stronger sense than this, as can be seen for example in Schwede's S-modules and symmetric spectra.
[Disclaimer] This answer tells a story but may be missing important details or have things slightly wrong. That's because as a current graduate student I wasn't doing math at the time of these developments. So I'm glad this answer is CW so someone more knowledgeable can come and edit this if I got it wrong.
I think there's also a way to fit operads into this story, since every time I think of operads I think of $A_\infty$ and $E_\infty$ ring objects, which are ones where a key structural diagram (associativity and commutativity, respectively) does not commute on the nose, but it does commute up to homotopy. However, the coherence diagram doesn't commute up to homotopy, but does up to homotopies of homotopies. And for its coherence diagram you need homotopies of homotopies of homotopies, etc. It seems to me that this arises from a similar goal as the above, namely to do algebra in stable homotopy theory. Before the issue was a lack of a product, but now the issue is that the product doesn't follow the rules (but it does up to infinitely coherent homotopy).
-
"Since these concepts underlie a lot of modern stable homotopy theory, everyone knows some version of this story." I suppose that is a technical use of the word "everyone". :) – KConrad Oct 8 2011 at 1:14
Topologists... they say space and they mean all sort of weird things: what can you expect when they say everyone! – Mariano Suárez-Alvarez Oct 8 2011 at 4:26
Algebraic geometers tend to use the word 'space' quite flexibly too. And as a group, I'd have to say from my experience that Russians are the most flexible in this regard. I've heard the word 'space' used in talks to refer to any of: topological space, simplicial set, spectra, manifold, variety, algebraic space, stack, vector space, Lie algebra, A_infty algebra, L_infty algebra, noncommutative ring, C^* algebra, and a category! – Jeffrey Giansiracusa Oct 10 2011 at 8:40
In the early 1960's Smale published a paper containing a conjecture whose consequence was that (in modern language) "chaos didn't exist". He soon received a letter from Norman Levinson informing him of an earlier work of Cartwright and Littlewood which effectively contained a counterexample to Smale's conjecture. Smale "worked day and night to resolve the challenges that the letter posed to my beliefs" (in his own words), trying to translate analytic arguments of Levinson and Cartwright-Littlewood into his own geometric way of thinking. This led him to his seminal discovery of the horseshoe map, followed by the foundation of the field of hyperbolic dynamical systems. For more details, see Smale's popular article "Finding a horseshoe on the beaches of Rio", Mathematical Intelligencer 20 (1998), 39-44.
-
This doesn't exactly fit, but I thought it might be close enough to be worth mentioning. The Fundamental Lemma in the Langlands Program was (as implied by the name) originally expected to be a relatively easy result. Much of the program depended on it, and yet the Lemma remained unproven for about two decades until Ngô Bảo Châu's recent proof appeared.
-
http://mathhelpforum.com/calculus/140167-finding-derivative-complex-fraction-radicals.html
# Thread:
1. ## Finding the Derivative - Complex Fraction with Radicals
So I've been doing good with these until this problem.
Find $f'(a)$
$f(x)=\frac{8}{\sqrt{x+2}}$
$\frac{\frac{8}{\sqrt{a+h+2}}-\frac{8}{\sqrt{a+2}}}{h}$
I began multiplying by a common denominator to get rid of the complex fraction, then multiplied by a conjugate to get rid of the square roots. After that I don't know what to do.
2. Originally Posted by soad
So I've been doing good with these until this problem.
Find $f'(a)$
$f(x)=\frac{8}{\sqrt{x+2}}$
$\frac{\frac{8}{\sqrt{a+h+2}}-\frac{8}{\sqrt{a+2}}}{h}$
I began multiplying by a common denominator to get rid of the complex fraction, then multiplied by a conjugate to get rid of the square roots. After that I don't know what to do.
you should be up to this point ...
$8 \lim_{h \to 0} \frac{1}{h}\left(\frac{-h}{\sqrt{a+h+2} \cdot \sqrt{a+2} \left[\sqrt{a+h+2}+\sqrt{a+2}\right]}\right)$
note that the $h$'s cancel, allowing the limit to be evaluated as ...
$-\frac{8}{(a+2) \cdot 2\sqrt{a+2}} = -\frac{4}{(a+2)^{\frac{3}{2}}}$
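If you want to double-check that limit symbolically, here is a minimal sketch (assuming Python with sympy is available):

```python
import sympy as sp

a = sp.symbols('a', positive=True)   # assume a + 2 > 0 so the square root is real
h = sp.symbols('h')
diff_quotient = (8 / sp.sqrt(a + h + 2) - 8 / sp.sqrt(a + 2)) / h
print(sp.limit(diff_quotient, h, 0))   # -4/(a + 2)**(3/2)
```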
3. Okay... so the $h$'s cancel out because it's approaching $0$?
If so, I completely understand now.
http://mathhelpforum.com/calculus/202845-another-limit-problem.html
# Thread:
1. ## Another limit problem
Hello everyone!
So, this is the question, and I am not sure I am able to find any reason why this statement is wrong, as the exercise requires.
Exercise 2.3.53
Show by example that the following statement is wrong: The number $L$ is the limit of $f(x)$ as $x$ approaches $x_0$ if $f(x)$ gets closer to $L$ as $x$ approaches $x_0$.
Explain why the function in your example does not have the given value of $L$ as a limit as $x \to x_0$.
I have just started my university calculus course and am feeling pretty lost most of the time. Some words of advice would be nice if anyone has any hehe.
Thank you everyone!
2. ## Re: Another limit problem
Originally Posted by Nora314
Exercise 2.3.53
Show by example that the following statement is wrong: The number $L$ is the limit of $f(x)$ as $x$ approaches $x_0$ if $f(x)$ gets closer to $L$ as $x$ approaches $x_0$.
Explain why the function in your example does not have the given value of $L$ as a limit as $x \to x_0$.
Consider $f(x)=(x-2)^2+1$ and $x_0=2$.
Now the closer $x$ is to $2$, the closer $f(x)$ is to $0$.
But ${\lim _{x \to 2}}f(x) = 1 \ne 0$
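(To spell out the trick: the distance $|f(x)-0|=(x-2)^2+1$ strictly decreases toward $1$ as $x$ approaches $2$, so $f(x)$ really does keep "getting closer to $0$"; yet $\lim_{x\to 2}f(x)=1\neq 0$, which is exactly why the quoted statement fails.)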
3. ## Re: Another limit problem
Originally Posted by Plato
Consider $f(x)=(x-2)^2+1$ and $x_0=2$.
Now the closer $x$ is to $2$, the closer $f(x)$ is to $0$.
But ${\lim _{x \to 2}}f(x) = 1 \ne 0$
Are you sure about that? If I was to approach 2, I'm sure my f(x) would approach 1.
I think the problem with the statement as given is that you need to approach L FROM BOTH SIDES as you make x approach x_0 FROM BOTH SIDES in order to have L be the limit of f(x) as x approaches x_0.
4. ## Re: Another limit problem
The idea is that you need to not only approach it, but it is also required that you get arbitrarily close to the limit. In the example stated above, in a manner of speaking, you are "getting closer to" 0 from both sides, but you never get closer than 1 unit away. So the limit is not 0.
5. ## Re: Another limit problem
Thanks everyone for the reply! It is much more clear to me now. This was a bit of a funny question, seems like a lawyer question and not math hehe.
6. ## Re: Another limit problem
Originally Posted by Prove It
Are you sure about that? If I was to approach 2, I'm sure my f(x) would approach 1.
Yes, I am quite sure of it. The minimum value of $f$ is $f(2)=1$ so the closer $x$ is to $2$ the closer $f(x)$ is to $0$.
7. ## Re: Another limit problem
Originally Posted by Plato
Yes, I am quite sure of it. The minimum value of $f$ is $f(2)=1$ so the closer $x$ is to $2$ the closer $f(x)$ is to $0$.
-Dan
http://math.stackexchange.com/questions/247877/how-to-prove-that-for-every-finite-field-its-cardinality-is-pn?answertab=votes
# How to prove that for every finite field its cardinality is $p^n$?
How to prove that for every finite field its cardinality is $p^n$ where $p$ is prime and $n\in\mathbb{N}$?
Thanks in advance!
-
– joriki Nov 30 '12 at 7:13
## 2 Answers
The prime field of $F$, which is the smallest subfield of $F$ (the field obtained by taking the elements $0, 1, 1+1, 1+1+1, \ldots$), must be some $\mathbb Z/p\mathbb Z$. For this to be a field, $p$ must be prime. Then $F$ is a vector space over $\mathbb Z/p\mathbb Z$, which makes its cardinality $|\mathbb Z/p\mathbb Z|^{\operatorname{dim}F} = p^{\operatorname{dim}F}$.
-
The characteristic of a finite field $F$ is some prime number $p$. Then for any $a\in F$ it holds that $p\cdot a=0$ (here $p\cdot a$ means $a$ plus itself $p$ times). So, in $F$, considered simply as an additive group, every non-identity element has order $p$. Thus it is a $p$-group, and since by Cauchy's Theorem any finite group whose order is divisible by a prime $q$ contains an element of order $q$, the order of $F$ must be a power of $p$.
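(For a concrete instance of the vector-space picture: the field $\mathbb F_4=\{0,1,\omega,\omega+1\}$ with $\omega^2=\omega+1$ has prime field $\mathbb Z/2\mathbb Z$ and dimension $2$ over it, so its cardinality is $2^2=4$.)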
-
http://mathhelpforum.com/calculus/111290-integral-square-root.html
# Thread:
1. ## integral of a square root
how do you find the integral of a square root of a polynomial?
specifically,
the integral of the square root of (t^2+t^4)
2. Originally Posted by cottekr
how do you find the integral of a square root of a polynomial?
specifically,
the integral of the square root of (t^2+t^4)
$\sqrt{t^2+t^4}=\sqrt{t^2(1+t^2)}=|t|\sqrt{1+t^2}$
We can find an antiderivative when $t \ge 0$ or when $t < 0$
3. In the problem I have, it's a definite integral from 0 to 1, but I'm not sure how to find the integral of anything under the square root.
4. Originally Posted by cottekr
In the problem I have, it's a definite integral from 0 to 1, but I'm not sure how to find the integral of anything under the square root.
$\int_{0}^{1}t\sqrt{1+t^2}dt$
let $u=1+t^2 \implies du=2tdt \iff \frac{1}{2}du=tdt$
$\int_{1}^{2}\sqrt{u}\,\frac{1}{2}\,du=\frac{1}{2}\int_{1}^{2}u^{1/2}\,du...$
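As a quick symbolic check of the original definite integral, a minimal sketch (assuming Python with sympy):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
# on [0, 1] we have sqrt(t^2 + t^4) = t*sqrt(1 + t^2)
print(sp.integrate(t * sp.sqrt(1 + t**2), (t, 0, 1)))   # -1/3 + 2*sqrt(2)/3
```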
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177728491
### Distribution of Quadratic Forms and Some Applications
Arthur Grad and Herbert Solomon
Source: Ann. Math. Statist. Volume 26, Number 3 (1955), 464-477.
#### Abstract
The authors were prompted by a general problem concerning hit probabilities arising in military operations to seek the distribution of $Q_k = \sum^k_{i=1}a_ix^2_i,\ k = 2, 3,$ where the $x_i$ are normally and independently distributed with zero mean and unit variance, $\sum a_i = 1,$ and $a_i > 0.$ While the distribution of a positive definite quadratic form in independent normal variates has been the subject of several papers in recent years [6], [11], [12], laborious computations are required to prepare from existing results the percentiles of the distribution and a table of hit probabilities. This paper discusses the exact distribution of $Q_k$ and then obtains and tabulates the distributions of $Q_2$ and $Q_3,$ accurate to four places. Three other approaches to the distributions are discussed and compared with the exact results: a derivation by Hotelling [8], the Cornish-Fisher asymptotic approximation [3], and the approximation obtained by replacing the quadratic form with a chi-square variate whose first two moments are equated to those of the quadratic form--a type of approximation used in components of variance analysis. The exact values and the approximations are given in Tables I and II. The tables have been prepared with the original problem in mind, but also serve as an aid in several problems arising out of quite different contexts, [1], [2], [13]. These are discussed in Section 6.
Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177728491
http://mathhelpforum.com/advanced-algebra/86773-group-element-order.html
# Thread:
1. ## group element order
Let G be an abelian group and let T={a in G|a^m=e for some m>1}. Prove that T is a subgroup of G and that G/T has no element - other than its identity element - of finite order.
Please show details of this proof. It is another practice problem for my final exam.
Thank you so much!
2. Originally Posted by mpryal
Let G be an abelian group and let T={a in G|a^m=e for some m>1}. Prove that T is a subgroup of G and that G/T has no element - other than its identity element - of finite order.
Please show details of this proof. It is another practice problem for my final exam.
Thank you so much!
The first thing we should do is try to prove closure. Choose $a,b\in T$. Then there exist integers $m,n>1$ with $a^m=e$ and $b^n=e$. Since $a,b\in G$, we have $ab\in G$. But is it also in $T$? Well, observe that, because $G$ is abelian, $(ab)^{mn}=a^{mn}b^{mn}=(a^m)^n(b^n)^m=e^ne^m=e$. So the conditions are satisfied such that $ab\in T$, and therefore $T$ is closed.
It is pretty obvious that $e\in T$.
It's almost just as obvious that for any $a\in T$ we have $a^{-1}\in T$. But let's prove it anyway. There exists an integer $m>1$ with $a^m=e$. Observe also that since $G$ is abelian, we can do the following: $e=a^{-1}a=(a^{-1}a)^m=(a^{-1})^ma^m=(a^{-1})^m$. So for any $a\in T$, $a^{-1}\in T$.
That should do it.
3. Originally Posted by mpryal
Let G be an abelian group and let T={a in G|a^m=e for some m>1}. Prove that T is a subgroup of G and that G/T has no element - other than its identity element - of finite order.
Please show details of this proof. It is another practice problem for my final exam.
Thank you so much!
Hi mpryal.
Let $gT\in G/T$ and suppose $(gT)^k=e_{G/T}=T$ for some $k\in\mathbb Z^+.$ Then $g^kT=T$ and so $g^k\in T$ and so $(g^k)^m=g^{km}=e$ for some $m\in\mathbb Z^+.$ Hence $g$ is of finite order in $G,$ i.e. $g\in T$ so that $gT=T.$ This shows that $T$ is the only element of $G/T$ of finite order.
http://mathoverflow.net/revisions/77730/list
# A Problem on Graph Theory
Suppose that there are $n$ vertices and we want to construct a regular graph of degree $p$, which, of course, is less than $n$. My question is: how many such graphs can we get?
http://mathhelpforum.com/differential-equations/181847-simple-first-order-diff-eq.html
# Thread:
1. ## Simple first order diff. eq
Hi! I have a simple question that I can't get through at the moment. I have a differential equation of the form:
y' = C - y
where y' = dy/dt
How can I solve this? I can't do separation of variables here. Help plz
2. $y'=C-y\Leftrightarrow \dfrac{dy}{C-y}=dt$ (separated variables).
3. Do you mean you aren't allowed to do separation of variables? Or that you don't think separation of variables will work for this problem? Because you can do separation of variables. You could also do an integrating factor if you wanted. And probably a few other techniques as well.
4. Originally Posted by Zogru11
Hi! I have a simple question that I can't get through at the moment. I have a differential equation of the form:
y' = C - y
where y' = dy/dt
How can I solve this? I can't do separation of variables here. Help plz
$\displaystyle \begin{align*}\frac{dy}{dt} &= C - y \\ \frac{dy}{dt} + y &= C \\ e^{\int{1\,dt}}\,\frac{dy}{dt} + e^{\int{1\,dt}}\,y &= C\,e^{\int{1\,dt}} \\ e^t\,\frac{dy}{dt} + e^t\,y &= C\,e^t \\ \frac{d}{dt}\left(e^t\,y\right) &= C\,e^t \end{align*}$
Can you go from here?
5. No, I just didn't realise I was able to do separation of variables :P Thanks for all the answers
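For the record, the general solution is easy to confirm symbolically; a minimal sketch (assuming Python with sympy):

```python
import sympy as sp

t, C = sp.symbols('t C')
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(t).diff(t), C - y(t)), y(t)))
# Eq(y(t), C + C1*exp(-t)): exponential relaxation toward the constant C
```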
http://crypto.stackexchange.com/questions/2131/how-should-i-check-the-received-ephemeral-diffie-hellman-public-keys
# How should I check the received ephemeral Diffie-Hellman public keys?
In my application I'm doing a DH key exchange, where both sides generate their own ephemeral key. No static keys are used. I am trying to make my application resistant against an active attack and therefore need to validate the public key that my counterpart is sending me.
Below I'm using standard DH variable names: DH parameters $g$ and $p$. Party $A$ has a private key $x_a$ and public key $y_a = g^{x_a} \bmod p$. Party $B$ has a private key $x_b$ and public key $y_b = g^{x_b} \bmod p$. The calculated secret is then $z = g^{x_ax_b} \bmod p$. Also $p$ is a safe prime $p = 2q+1$.
In my application I will authenticate the shared secret $z$ via an out of bounds way (basically a user verifying a fingerprint).
OpenSSL has a function to validate a DH public key (DH_check_pub_key()) which does the following check:
$$2 \leq y_b \leq p-2$$
I believe this always excludes the generators that generate the order-2 subgroup. Because $p$ is a safe prime, I think these generators are always $1$ and $p-1$. Is this correct? Is it also correct that all integers in $[2, p-2]$ either generate an order-$q$ or an order-$2q$ subgroup?
Secondly, in NIST SP800-56, section 5.6.2.4, it is mentioned that I should also check:
$$y_b^q \bmod p = 1$$
I don't understand the background of this check. Is it needed when $p$ is a safe prime? OpenSSL does not implement this.
-
## 1 Answer
The check $y_b^q = 1 \mod p$ is there to prevent two possible weaknesses:
• Suppose someone (either because of a programmer error or a deliberate attack) gave us a $y_b$ value of small order. If so, then someone listening in can guess the shared secret you derive.
• Suppose an attacker gave us a $y_b$ value whose order has a small factor $r$. Then, by seeing the shared secret value $z$ you derived (which will be one of $r$ possibilities), the attacker could rederive the value $x_a \bmod r$; whether that is interesting would depend on whether you reuse $x_a$ elsewhere.
Now, for a "safe prime" (one where $q=(p-1)/2$ is prime), neither of these attackers are of much concern; there are no small subgroups other than the trivial order-1 and order-2 subgroups (and yes, you are correct, all group members in $[2, p-2]$ are either of order $q$ or $2q$). In addition, which the attacker could give us a $y_b$ value with order 2q (and potentially learn $x_a \bmod 2$, that just gives him one bit of your private exponent; as he can't learn anything else, this is not much of a concern (even if you reused $x_a$ in other DH exchanges).
On the other hand, NIST SP800-56 doesn't assume a safe prime. In fact, a strict reading of NIST SP800-56 would appear to forbid it; if you look at Table A, it specifies the size of $q$ to be either 160, 224 or 256 bits (and not "at least so many bits", precisely that size).
As for your question as to whether the check $2 \le y_b \le p-2$ will always exclude all members of an order-2 subgroup, that is actually true for any prime $p$, not just "safe primes". For any prime $p$, the only group member with order 2 will be $p-1$.
Also, if you think you want to perform the $y_b^q = 1 \bmod p$ check (even though it doesn't appear to be strictly necessary), one optimization that a safe prime allows is that you can compute the Legendre symbol; that's a shortcut for computing $x^q \bmod p$ for $q = (p-1)/2$ (which is a lot faster than computing $x^q \bmod p$ directly).
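To make the two checks concrete, here is a minimal sketch in Python (my own illustration, not OpenSSL's code; `validate_dh_public_key` is a hypothetical helper and assumes $p$ is a safe prime):

```python
def validate_dh_public_key(y: int, p: int, full_check: bool = True) -> bool:
    """Validate a received ephemeral DH public key for a safe prime p = 2q + 1.

    The range check 2 <= y <= p - 2 rejects the order-1 and order-2
    elements (1 and p - 1). The optional subgroup check y^q == 1 (mod p)
    additionally rejects elements of order 2q, per NIST SP800-56.
    """
    if not (2 <= y <= p - 2):
        return False
    if full_check:
        q = (p - 1) // 2
        return pow(y, q, p) == 1
    return True
```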
-
Poncho, thank you very much, this answers my question perfectly. Since OpenSSL always generates DH parameters such that $p$ is a safe prime, I will skip the $y_b^q = 1 \bmod p$ test. Thanks! – geertj Mar 19 '12 at 18:16
http://mathoverflow.net/questions/19605/is-it-possible-for-the-repeated-doubling-of-a-non-torsion-point-of-an-elliptic-cu/20412
## Is it possible for the repeated doubling of a non-torsion point of an elliptic curve to stay bounded in the affine plane?
Let $P=(x_1,y_1)$ be a non-torsion point on an elliptic curve $y^2=x^3+Ax+B$. Let $(x_n,y_n)=2^nP$ be the result of $n$ repeated doublings. The $x_n,y_n$ are rationals with heights growing rapidly. Can $\{x_n\}$, $\{y_n\}$ stay bounded?
-
## 2 Answers
EDIT: this answer is wrong. I misread the question as looking at the group generated by P, not the points obtained by repeated doubling. I would be OK if the subset of S^1 generated by taking a non-torsion point and repeatedly doubling came arbitrarily close to the origin---but it may not, as the comments below show. As I write, this question is still open. If a correct answer appears I might well delete this one.
Original answer:
"Bounded" in what sense? You mention heights, that's why I ask. But in fact the answer is "no" in both cases. The height will get bigger because of standard arguments on heights. And the absolute values of x_n and y_n will also be unbounded: think topologically! The real points on the curve are S^1 or S^1 x Z/2Z and if the point isn't torsion then the subgroup it generates will be dense in the identity component and hence will contain points arbitrarily close to the identity, which, by continuity, translates to "arbitrarily large absolute value" in the affine model.
-
I mean the absolute. Thanks. You've answered my question. – defgh Mar 28 2010 at 10:30
Kevin, repeated doubling of a point doesn't produce the subgroup generated by that point. What is needed is a theorem to the effect that if $\xi$ is irrational then the set of $2^n\xi$ in $\mathbb{R}/\mathbb{Z}$ meets every neighbourhood of the origin in $\mathbb{R}/\mathbb{Z}$. I'm sure this is true but can't see an immediate proof. – Robin Chapman Mar 28 2010 at 10:32
@Robin: you're right---I misread the question. Thanks. – Kevin Buzzard Mar 28 2010 at 10:57
However, $2^n \xi$ doesn't need to meet every neighborhood of the origin if you only assume $\xi$ is irrational. That $\xi$ is irrational just means the "digits" of the binary expansion aren't preperiodic, not that $\xi$ is normal base 2. – Douglas Zare Mar 28 2010 at 11:22
In that case, should the "elliptic logarithm" of a rational point have this property, then the answer to the original question would be "yes". I don't think this can be the case but proving it suddenly looks like hard work :-( – Robin Chapman Mar 28 2010 at 11:31
From the discussion above it looks like the answer is yes (EDIT: if you allow real numbers; the OP was unclear, perhaps they wanted a rational point, in which case I'm uncertain. Does anybody know anything about the binary expansion of complex numbers with rational Weierstrass p-values?). Let the origin of your group be the point at infinity in the curve in $\mathbb{RP}^2$, and pick a topological group isomorphism of the component of the identity with $S^1\cong \mathbb{R}/\mathbb{Z}$. The doublings of a point are given by truncating off the first $m$ digits of the base 2 expansion of your point. Thus the doublings of a point stay bounded if and only if the length of any consecutive string of 0's or of 1's in this expansion is bounded above (there are plenty of irrational numbers with this property).
There's a similar answer for putting the origin somewhere else: you can never allow too much of the beginning of the expansion of the point at infinity to show up in the expansion of your point.
-
P seems to be rational. – S. Carnahan♦ Apr 5 2010 at 19:33
http://mathoverflow.net/questions/14752/checking-if-two-graphs-have-the-same-universal-cover/17518
## Checking if two graphs have the same universal cover
It's possible I just haven't thought hard enough about this, but I've been working at it off and on for a day or two and getting nowhere.
You can define a notion of "covering graph" in graph theory, analogous to covering spaces. (Actually I think there's some sense -- maybe in differential topology -- in which the notions agree exactly, but that's not the question.) Anyway, it behaves like a covering space -- it lifts paths and so on.
There's also a "universal cover," which I think satisfies the same universal property as topological universal covers but I'm not sure. Universal covers are acyclic (simply connected) in graph theory, so they're trees, usually infinite. The universal cover doesn't determine the graph; for instance, any two k-regular graphs (k > 1) have the same universal cover. You can construct plenty of other pairs, too.
I'm interested in necessary and sufficient conditions for two graphs $G, H$ to have the same universal cover. One such condition (I'm pretty sure!) is whether you can give a 1-1 correspondence between trails in $G$ and trails in $H$ that preserves degree sequences. Unfortunately this doesn't help me much, since this is still basically an infinite condition. Is there some less-obvious but more easily checkable condition? In particular is it possible to determine if two (finite) graphs have the same universal cover in polynomial time?
-
You don't need differential topology, just ordinary alg. top. will do. There is a useful connection to combinatorial group theory here and browsing through that area may give you ideas w.r.t., say, Cayley graphs, that will help. One problem that you face is what "the same" should mean in your setting. One idea might be to look at the valence of nodes, but that may be what you have already looked at. I am not a graph theorist so may not be understanding some of the terminology that you are using. – Tim Porter Feb 9 2010 at 9:03
As Tim says, this ought to be connected to a much-studied POV in geometric/combinatorial group theory: I don't know where this "started", but perhaps these notes by Brent Everitt arxiv.org/abs/math.GR/0606326 might have useful pointers, if not answers for your questions. – Yemon Choi Feb 9 2010 at 9:23
I don't think there's anything directly relevant to this question, but you might find Jeff Erickson's notes on computational topology interesting anyway: compgeom.cs.uiuc.edu/~jeffe/teaching/comptop/… – Eric Peterson Feb 9 2010 at 14:33
## 4 Answers
Two finite graphs have the same universal cover iff they have a common finite cover. This surprising fact was first proved by Tom Leighton here:
Frank Thomson Leighton, Finite common coverings of graphs, J. Comb. Theory, Ser. B 33 (1982), 231-238.
I'm quite sure the paper also presents an algorithm for determining if this is the case for two given graphs; essentially you develop a refined "degree" sequence for the graphs, starting from "# of vertices of degree k" and refining to "# of vertices of degree k with so-and-so vertices of degree l" etc.
As an aside, the reason this result is so surprising is that it says something highly non-trivial about groups acting on trees (any two subgroups of Aut(T) with a finite quotient are commensurable, up to conjugation), and proving this result directly via group-theoretic methods is surprisingly difficult (and interesting). There's a paper of Bass and Kulkarni which pretty much does just that.
Edit: I just ran a quick search and found this sweet overview: "On Leighton's Graph Covering Theorem".
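To make the degree-refinement idea concrete, here is a minimal sketch (my own illustration in Python, not Leighton's algorithm verbatim; graphs are adjacency dicts `{vertex: neighbours}`):

```python
from collections import Counter

def degree_refinement(adj):
    """Iteratively refine vertex classes until two vertices share a class
    iff they have the same number of neighbours in every class."""
    label = {v: 0 for v in adj}                       # start with one class
    while True:
        # signature = (own class, multiset of neighbours' classes)
        sig = {v: (label[v],
                   tuple(sorted(Counter(label[u] for u in adj[v]).items())))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == label:                              # partition is stable
            break
        label = new
    return sorted(set(sig.values()))   # one row per class

# two 2-regular graphs, e.g. a 4-cycle and a 6-cycle, share a universal cover
C4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(degree_refinement(C4) == degree_refinement(C6))   # True
```

Each round either refines the partition or stops, so this runs in polynomial time; per Leighton's theorem, agreement of the resulting rows for two finite graphs decides whether they have a common finite cover, hence the same universal cover.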
-
Definitely a seminal paper in the topological theory of graphs! – Pete L. Clark Mar 8 2010 at 21:03
An addendum on the question of polynomial time. In a 1991 paper, which can be found here, Abello, Fellows, and Stillwell proved that there is a fixed graph H for which the problem of deciding whether a given G covers H is NP-complete.
-
Detecting if given G covers fixed H sounds harder than deciding if given G, H have a common finite cover. Is there some non-obvious equivalence? I feel like I am missing something here... – Sam Nead Feb 9 2010 at 15:11
@Sam: I take your point; there is no obvious equivalence between deciding whether a common cover exists, and deciding whether one graph covers another. I should have looked at Leighton's proof, which shows that graphs have a common cover if and only if they have the same "degree refinement", a matrix which is pretty clearly computable in polynomial time. This surprises me, because I did not expect testing for common covering to be easier than testing for covering. – John Stillwell Feb 10 2010 at 0:15
If two graphs have a common cover, then they have a common universal cover. The "maximal" cover is therefore unique. In general, the reverse is not true. The "minimal" common cover may not be unique. This was shown by Wilfried Imrich and me. See also European Journal of Combinatorics Volume 29, Issue 5, July 2008, Pages 1116-1122
-
Maybe this is of interest to you. In Section 3.3 of the paper below (see also 2.2), universal covers are extended from graphs to degree matrices, and some conditions are given under which two degree matrices (graphs) have the same universal cover: J. Fiala, D. Paulusma and J. A. Telle, Locally constrained graph homomorphisms and equitable partitions, European Journal of Combinatorics 29 (4), pp. 850-880.
-
http://physics.stackexchange.com/questions/50148/what-is-the-electrical-conductivity-s-m-of-a-carbon-nanotube
# What is the electrical conductivity (S/m) of a carbon nanotube?
I have been searching around for a while for this but I am having trouble finding any actual figures, all I can seem to find is that it is "very high".
So I am wondering, does anyone have any figures of what the electrical conductivity of a carbon nanotube is, a theoretical or estimated answer is fine. I am preferably looking for the answer in $Sm^{-1}$.
-
## 1 Answer
The numbers will greatly vary depending on the kind of nanotube. The following are some examples from cursory Google searches.
Electrical conductivity was increased by 50 percent to 1,230 siemens per meter.
http://news.ncsu.edu/releases/wms-zhu-cnt-composites/
And that’s not all: colossal carbon tubes are ductile and can be stretched, which makes them attractive for applications requiring high toughness. They also have high electrical conductivities of around $10^3$ siemens per centimetre at room temperature, compared with $10^2$ siemens per centimetre for multi-walled carbon nanotube fibres.
http://physicsworld.com/cws/article/news/2008/aug/08/carbon-nanotubes-but-without-the-nano
The researchers found that the electrical conductivity increased with increasing nanotube content and temperature – in contrast to earlier findings. They observed a maximum conductivity of 3375 siemens per metre at 77°C in samples that were 15% nanotube by volume.
http://physicsworld.com/cws/article/news/2003/aug/20/nanotubes-boost-ceramic-performance
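Since the question asks for $Sm^{-1}$, note that $1\ \mathrm{S/cm} = 100\ \mathrm{S/m}$, so the per-centimetre figures above convert directly (a trivial sketch of my own):

```python
S_PER_CM_TO_S_PER_M = 100  # 1 S/cm = 100 S/m

figures_s_per_cm = {"colossal carbon tubes": 1e3,
                    "multi-walled CNT fibres": 1e2}
for name, sigma in figures_s_per_cm.items():
    print(f"{name}: {sigma * S_PER_CM_TO_S_PER_M:.0e} S/m")
# colossal carbon tubes: 1e+05 S/m
# multi-walled CNT fibres: 1e+04 S/m
```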
-
http://math.stackexchange.com/questions/52998/finding-the-nth-term-in-a-repeating-number-sequence
# Finding the nth term in a repeating number sequence
I'm trying to figure out how to solve these types of repeating number sequence problems. Here is one I made up:
Consider the following repeating number sequence: {4, 8, 15, 16, 23, 42, 4, 8, 15, 16, 23, 42, 4, 8, 15, 16, 23, 42,…} in which the first 6 numbers keep repeating. What is the 108th term of the sequence?
I was told that when a group of k numbers repeats itself, to find the *n*th number, divide n by k and take the remainder r. The *r*th term and the *n*th term are always the same. But 108 / 6 = 18 with r = 0, so the 108th term is equal to the 0th term? Undefined?
I'm confused at how this works.
Thanks!
-
Think of your sequence as doubly infinite, $\dots,4,8,15,\dots$, and the zeroth term is the term just before the first term. PS: I just noticed what sequence you are using. Are you sure you're confused, not lost? – Gerry Myerson Jul 22 '11 at 1:52
Yes, hahaha :) Thanks though, I understand now. – stoicfury Jul 22 '11 at 1:57
## 3 Answers
You are looking for modular arithmetic. The procedure you described of dividing and taking the remainder is encapsulated in modular arithmetic.
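The remainder-0 puzzle from the question disappears if you shift to 0-based indexing, as in this quick sketch (mine, not from the answers):

```python
seq = [4, 8, 15, 16, 23, 42]

def nth_term(n, seq):
    # 1-based position n; subtracting one avoids the "0th term" issue:
    # n = 108 gives index (108 - 1) % 6 = 5, i.e. the last element
    return seq[(n - 1) % len(seq)]

print(nth_term(108, seq))  # 42
```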
-
HINT:
In such cases, look at more manageable sample problems. So what would the 5th, 10th, 12th, 17th element (etc.) be? All these can be found by hand, and you can see how the remainder matters.
-
HINT: write the numbers in six columns, and work out how the row number and column number relate to the position in the sequence.
-
http://mathhelpforum.com/advanced-statistics/137953-distribution-function-function-random-variable.html
# Thread:
1. ## Distribution function for a function of random variable
Suppose that X is a random variable with distribution function $F_X(t)$ and let $Y=a+bX$, where $b<0$. Derive the distribution function for Y.
My solution:
I know that $F_Y(t)=P(Y \leq t)=P(a+bX \leq t)$
$=P(X \geq \frac {t-a} {b} ) =1 - P( X \leq \frac {t-a}{b})=1- F_X ( \frac {t-a}{b} )$
But the answer is wrong, why?
Thanks.
2. Hello,
More precisely, it's $1-\mathbb{P}\left(X{\color{red}<} \frac{t-a}{b}\right)$
There are some situations (in particular for some discrete random variables) where $\mathbb{P}\left(X< \frac{t-a}{b}\right)\neq \mathbb{P}\left(X\leq \frac{t-a}{b}\right)$
3. For some reason, the back of the book says the answer is:
$1-F_X( \frac {t-a}{b})+p_X ( \frac {t-a}{b})$
4. Originally Posted by tttcomrader
For some reason, the back of the book says the answer is:
$1-F_X( \frac {t-a}{b})+p_X ( \frac {t-a}{b})$
So that confirms what Moo suggested...
You have $1 - P( X < \frac {t-a}{b})$. So you just need to add the case $P( X = \frac {t-a}{b})$.
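The book's formula is easy to verify numerically for a discrete distribution. Here is a small experiment of my own (the uniform pmf is just an arbitrary test case):

```python
from fractions import Fraction

# a small discrete X: P(X = k) = 1/4 for k in {0, 1, 2, 3}
pmf = {k: Fraction(1, 4) for k in range(4)}
a, b = 2, -3          # Y = a + bX with b < 0

def F_X(t):  return sum(p for k, p in pmf.items() if k <= t)
def p_X(t):  return pmf.get(t, Fraction(0))
def F_Y_direct(t):    # P(Y <= t) computed from scratch
    return sum(p for k, p in pmf.items() if a + b * k <= t)

for t in [-7, -4, -1, 0, 2]:
    c = Fraction(t - a, b)
    assert F_Y_direct(t) == 1 - F_X(c) + p_X(c)
print("book formula checks out")
```

The `p_X` correction only matters at the atoms of $X$ (e.g. $t=-4$ gives $c=2$, an atom), which is exactly Moo's point about the strict inequality.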
http://mathoverflow.net/questions/21786/extending-methods-from-lubin-tate-theory
## Extending methods from Lubin-Tate theory
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The first lemma in Lubin-Tate theory says the following:
Let $K$ be a local field, $A$ its ring of integers, and $f\in A[[T]]$ be such that $f(0) = 0$, $f'(0)$ is a uniformizer, and $f$ induces Frobenius over the residue field. Then there exists a unique formal group law $F_f(X,Y)\in A[[X,Y]]$ that makes $f$ into a formal $A$-endomorphism.
If you go over the details of the lemma, you can (I think) generalize it as follows:
If $R$ is any ring, $f\in R[[T]]$ such that $f(0) = 0$ and $f'(0)\in R^\times$ (Edit: writing $u=f'(0)$, we also need $u^n - u\in R^\times$ for all $n$), then there exists a unique formal group law $F_f(X,Y)\in R[[X,Y]]$ that makes $f$ into a formal $R$-endomorphism.
The business about uniformizers and Frobenius in the Lubin-Tate lemma is just to ensure that everything converges on the maximal ideal of the ring of integers in the separable closure of $K$, so that you get an actual group.
So this is pretty cool---it says that you can take something purely analytic, $f$, and magically give it an algebraic structure. Specifically, the roots of the iterates $f^{(n)} = f\circ\cdots\circ f$ become a torsion $A$-module.
If the existence of $F_f$ generalizes like I think it does, a natural question is where does $F_f$ converge? I want to be able to answer the question for specific $f$, a simple example would be the following: if $R=\mathbb{C}$ and $f(z) = uz + z^2$, then what can you say about the convergence of $F_f$?
Edit: Okay, $\mathbb{C}$ was a bad choice, but suppose $R$ is a ring complete with respect to some $\mathfrak{a}$-adic topology. Would there be a reason not to study this case? Maybe the question I should be asking is, for what other $R$ and $f$ do people study these formal groups $F_f$?
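The degree-by-degree construction behind the uniqueness claim can be watched mechanically. Here is a sympy sketch of my own (variable names mine), building $F_f$ for the $f(z) = uz + z^2$ of the question under the stated assumption that each $u^n - u$ is invertible:

```python
import sympy as sp

X, Y, u = sp.symbols('X Y u')
f = lambda T: u*T + T**2        # the series f(z) = uz + z^2 from the question

def trunc(expr, deg):
    """Canonicalize and drop monomials of total degree > deg in X, Y."""
    expr = sp.expand(sp.cancel(expr))
    return sum(t for t in sp.Add.make_args(expr)
               if sp.total_degree(t, X, Y) <= deg)

# Build F degree by degree so that f(F(X,Y)) = F(f(X), f(Y)).
F = X + Y
for n in range(2, 5):
    err = trunc(f(F) - F.subs(X, f(X)).subs(Y, f(Y)), n)  # homogeneous of degree n
    F = trunc(F + err / (u**n - u), n)                    # needs u^n - u invertible

print(F)  # degree-2 truncation is X + Y + 2*X*Y/(u**2 - u), as a hand check confirms
print(trunc(f(F) - F.subs(X, f(X)).subs(Y, f(Y)), 4))     # 0: the law holds mod deg 5
```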
-
Fair enough, I added the extra condition on my claim. As far as the second claim, I only meant the roots when they make sense. The f I'm interested in are polynomials or rational functions. – Sean Kelly Apr 19 2010 at 2:17
As far as I know, the generalization of Lubin-Tate to even local fields where the residue field is not finite is still an open problem. The Frobenius condition is very crucial in the proof. – Tran Chieu Minh Apr 19 2010 at 15:28
you CAN talk about roots without convergence matters since you may use Weierstrass preparation to reduce your question to a polynomial times some unit power series. Take the roots to be those of the polynomial. Similarly the Galois action sends units to units, so the Galois action is fine as well. – olli_jvn Apr 23 2010 at 10:19
## 2 Answers
I don't think you should be trying to interpret this stuff over the (real or) complex numbers. Discs do not have nice algebraic properties in the archimedean world as they do $p$-adically. Even $p$-adically, one never talks about an actual radius of convergence but just makes sure things converge on the maximal ideal and work there. For example, the Lubin--Tate series could even be a polynomial, with infinite radius of convergence, but still one just focuses on what it does inside the unit disc ($p$-adically).
-
It probably does not work in the way you wish.
The crucial bit where convergence is used later in Lubin-Tate is to realize the Galois action via these series $[a]$. The first-level roots will lie in $\mathfrak{m}$ minus $\mathfrak{m}^2$, so you really want convergence in this radius.
For finding roots itself I think convergence really is not the problem, but having some roots will not help you much if you cannot make the machine realizing the Galois action through power series work.
If you look at the proof (e.g. Yoshida's notes or Milne's notes), at some point one uses that
$$f(X^q) - f(X)^q$$
is zero modulo $p$, which is quite crucial for the convergence, as you say above. But this is not just a sufficient condition; I think if you try a series over some other ring where this fails, you'll really unavoidably get a series which doesn't converge on $\mathfrak{m}$. Still, your Lubin-Tate polynomial (as proposed by KConrad as a good example above) has roots and all that, good, but there is no way to make the formal $\mathcal{O}_K$-module series $[a]$ act on them. I don't quite remember so well, but I am not even so sure whether it is certain that the extension made from the roots of a Lubin-Tate polynomial/power series is Galois anymore in general.
-
http://mathhelpforum.com/advanced-algebra/175179-cyclic-groups-modulo.html
# Thread:
1. ## Cyclic groups modulo
Hi all,
Brief intro to what I'm talking about....
If n $\geq$ 2 is an integer, then Zn* denotes the set of invertible elements in the ring Zn. That is, it denotes the numbers in {1,2....n-1} which are coprime to n. The set Zn* is a group under multiplication modulo n.
Does anyone know how to determine whether or not these groups are cyclic?: Z8*, Z9*, Z10*, Z12*
Maybe someone can walk me through the first couple and I can try the others for myself!
Thanks!
2. Originally Posted by sirellwood
Hi all,
Brief intro to what I'm talking about....
If n $\geq$ 2 is an integer, then Zn* denotes the set of invertible elements in the ring Zn. That is, it denotes the numbers in {1,2....n-1} which are coprime to n. The set Zn* is a group under multiplication modulo n.
Does anyone know how to determine whether or not these groups are cyclic?: Z8*, Z9*, Z10*, Z12*
Maybe someone can walk me through the first couple and I can try the others for myself!
Thanks!
This is standard stuff you can find in any decent book in group theory or general algebra. Read the folowing, too:
Multiplicative group of integers modulo n - Wikipedia, the free encyclopedia
Tonio
3. here is how you would proceed for U(Z8): the elements co-prime to 8 are the odd integers {1,3,5,7} (since 8 is a power of 2). this is a group of order 4, for it to be cyclic, you would need an element of order 4. it suffices to check 3,5 and 7. 3^2 = 1, 5^2 = 1, and 7^2 = 1, so U(Z8) is not cyclic.
U(Z9) is even easier: it has 6 elements {1,2,4,5,7,8}. we know that the multiplication is commutative, and any abelian group of order 6 is cyclic.
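A brute-force check (a throwaway sketch of mine, not from the thread) confirms all four cases at once by looking for an element whose order equals the group order:

```python
from math import gcd

def units(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a, n):
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

def is_cyclic(n):
    U = units(n)
    return any(order(a, n) == len(U) for a in U)

for n in (8, 9, 10, 12):
    print(n, is_cyclic(n))   # 8: False, 9: True, 10: True, 12: False
```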
http://physics.stackexchange.com/questions/44130/what-are-the-properties-and-characteristics-of-a-single-quantum?answertab=votes
# What are the properties and characteristics of a single Quantum?
In quantum mechanics, a quantum of energy (a "quantum") is said to be the origin of everything.
In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction.
$E=nh\nu$
$\epsilon=h\nu$.
But its properties and characteristics are unknown.
What are the properties and characteristics of a single quantum?
-
One quantum, multiple quanta. – Rhys Nov 13 '12 at 14:48
From the Webster dictionary, definition of "plural": "of, relating to, or constituting a class of grammatical forms usually used to denote more than one or in some languages more than two". ("Plural: quanta" means that if you have more than one quantum, say two, you do not say you have "two quantums"; you say you have "two quanta". A (one) quantum of energy in quantum mechanics is called a "quantum".) – anna v Nov 15 '12 at 11:49
## 1 Answer
The question confuses the concept of quantization with the "origin of everything", whatever that last is.
Starting from the beginning:
There exists classical mechanics, studied over the last centuries, with explicit and well-known differential equations whose solutions are used in all engineering problems around us.
The solutions are continuous functions of the variables.
There exist wave mechanics of continuous media and of electromagnetic potentials, again governed by well known differential equations whose solutions are in use in the technology around us.
These equations have some solutions which are quantized in some variables. For example the well tuned violin string of a sharp frequency, or monochromatic light.
The quanta of quantum mechanics are one level below the two above formulations. Definitive experiments and data gathered pointed the way to the quantized nature of matter and radiation at a very basic lower level. From the periodicity of the periodic table of elements, the photoelectric effect, and innumerable studies over the years it is established that quantum mechanical equations hold in the microcosm, and that nature macroscopically emerges from an underlying quantum mechanical level.
This does not mean that there does not exist a continuum in some variables; it just means that specific solutions of QM equations have allowed values for the variables that are quantized, i.e. come in packets of energy or have specific locations in space (cf. crystals).
$\epsilon=h\nu$ allows all frequencies in free electromagnetic fields. Their creation and absorption happen in quanta specific to particular atoms and boundary conditions.
Thus the "origin of everything" is not a specific quantum of energy, but specific quantum mechanical equations whose solutions describe nature as sometimes having transitions in quantized steps and sometimes continuously.
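As an aside, a quick numerical illustration (mine, not part of the answer above) of $\epsilon = h\nu$ for one quantum of visible light:

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

lam = 500e-9                 # a green photon, 500 nm
nu = c / lam                 # frequency of the field
E = h * nu                   # energy of a single quantum of that field
print(f"nu = {nu:.3e} Hz, E = {E:.3e} J = {E/1.602176634e-19:.2f} eV")
# nu = 5.996e+14 Hz, E = 3.973e-19 J = 2.48 eV
```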
-
http://physics.stackexchange.com/questions/15169/if-the-earth-has-gravity-why-dont-we-all-collapse-to-the-center/16067
# If the earth has gravity, why don't we all collapse to the center?
I'm sorry if the answer is obvious for you guys, but why don't we all (including buildings, road, people, the ground) collapse to the center of the earth because of gravity? Is it because we have velocity, just like the earth not falling to the sun (or electrons not falling to the protons)? But that analogy doesn't sound right because those involve wide empty space.
-
## 5 Answers
While there is some truth to all the answers presented, I think there is something important that has been missed.
It is not just electromagnetism, it really is a quantum mechanical effect. The Pauli exclusion principle for the spin-$\frac{1}{2}$ electrons says that two electrons with the same spin state cannot occupy the same quantum orbital state. So when two atoms are pushed together by, for example, gravity, what prevents the electrons from occupying the same space is essentially a quantum mechanical effect, and it results in a distortion of the quantum mechanical wavefunction. The energy it takes to create this distortion is what then results in the electromagnetic repulsion.
As @Vineet and others say, there are only 4 forces in nature and only gravity and electromagnetism have a long range. So it has to be the stronger electromagnetic force that opposes gravity, but it is the quantum mechanical Pauli exclusion principle that causes the distortion of the wavefunction that results in the electromagnetic force.
-
+1: good that someone said it. – Ron Maimon Oct 23 '11 at 18:00
Well, if you run over a sufficiently deep hole, you WILL fall toward the center. So why don't you fall to the center anyway? Because there already is something that blocks the space: namely, "the earth". As most of it is either heavier than you are (so switching places would need energy to lift it up) or at least is solid and sticks together, it's on you to stay on the outside of the ball. If the matter below you is light and doesn't "stick together", though, you will fall as deep as possible toward the center (i.e., until you hit something that again is solid and/or heavier than you are). In case of a hole this lighter material is air; in case of, for example, the ocean, it is water. In both cases you "fall" to the ground, that is, towards the center of the earth.
So why does matter block us at all; why can't we go through walls or fall THROUGH the ground to the center of the earth? The answer is: electromagnetic forces between the atoms of the matter. They prevent the atoms from colliding.
-
-1: the answer is not electromagnetic forces in any way. It's all exclusion forces. Bosonic electrons would collapse to sit on top of each other in a small sphere at the center of the Earth, where the radius would be determined by the exclusion of the Fermionic nuclei. – Ron Maimon Oct 23 '11 at 17:59
Why should they do that? They repel each other much more electromagnetically than they attract gravitationally. – mcandril Oct 23 '11 at 19:12
Because the attraction to the nuclei balances out the repulsion, the attractive force with the nucleus is equal to the repulsive force of the electrons, and you gain by mushing the electrons closer to other nuclei. The nuclei and bosonic electrons clump together in a volume which doesn't grow proportionally to their number. Fermionic electrons occupy a volume proportional to the number. You can see this already in one atom, solving for Bosonic electron orbitals. This is a famous problem of the stability of matter, and it was solved in the 1960s, and Freeman Dyson was involved in this. – Ron Maimon Oct 23 '11 at 19:29
## A simple framework of the fundamental forces suggests the following explanation
The second of your guesses is the correct one. The reason we do not fall to the center of the Earth is the same reason you don't fall to the floor when you sit down in a chair; the electrons in your body repel the electrons in the chair.
More specifically, there is an electron cloud around each atomic nucleus. When two atoms get close to each other, the repulsion between their respective clouds keeps them apart. Meanwhile, the attraction between the cloud and the nucleus keeps each individual atom together.
From this view, the explanation superficially makes sense.
## However...
As pointed out in the comments below, this explanation is not the one given by modern quantum theory. According to quantum mechanics, particles with half-integer spins follow the Pauli exclusion principle and cannot occupy the same quantum states simultaneously. Because electrons, protons, and neutrons are all fermions, they are excluded from occupying the same states and it is this that prevents you from falling through the chair. To figure out why the Pauli exclusion principle holds, it is necessary to get deeply into quantum mechanics and solid state physics. I am not an expert in either so I'll leave that up to those more capable than myself.
It should be noted that the correction to my original answer is not a trivial one. It isn't just the framework of the two views that changes, the predictions made by them also change. A fantastic example of the difference can be seen in superconductivity. In this phenomenon, electrons join together to form Cooper pairs. These pairs of fermions are bosons and so do not obey the exclusion principle. As a consequence, they can travel through superconducting material (made up of electrons and protons) without resistance despite the Coulombic attraction and repulsion.
-
why don't the nuclei in our body attract the electrons of the chair? And if there is such a strong repulsion, why aren't we floating? – Louis Rhys Sep 29 '11 at 6:37
You better not dig into explanations such as "those electrons repel other electrons", because solid state physics is much more complex, and those arguments are, simply speaking, incorrect. For instance, two dipoles do attract each other, although some particles in one dipole repel others in the other dipole. It's impossible even to explain why a single atom has its size. Not to mention molecules, and why some electronic configurations have lesser energy than others. – valdo Oct 23 '11 at 11:58
-1: The repulsion forces between electrons are not responsible for this in any way. It's only exclusion. – Ron Maimon Oct 23 '11 at 17:58
@AdamRedWine: I'll remove the downvote--- sorry, but this misconception is repeated endlessly. It happened here several times. The reason is because Carl Sagan used this explanation in Cosmos, and the degree to which popularization texts are repeated is never determined by accuracy, but by how popular the meme becomes. – Ron Maimon Oct 23 '11 at 18:48
@Adam, no problem, leave it as is. I apologize for my inappropriate comment. – FrankH Oct 24 '11 at 1:40
To add to the previous answers, note that the universe has four major forces. Of these, gravity and electromagnetism dominate the macroscopic level. Between these two forces, electromagnetic forces are orders of magnitude stronger than gravitational forces, mainly because of the small value of the Gravitational constant $G$. So, when you sit on a chair, there are two forces playing tug of war, the gravitational force of the earth pulling you towards center, and the electromagnetic forces between the electrons in you and the chair which keeps you apart (one is attractive and other is repulsive).
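To put a number on "orders of magnitude" (my own back-of-the-envelope sketch, not from the answer): for two electrons the $1/r^2$ dependence cancels in the ratio of the two forces, leaving a constant of about $10^{42}$:

```python
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k   = 8.988e9        # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # elementary charge, C

# the r^-2 factors cancel, so the ratio is independent of separation
ratio = (k * q_e**2) / (G * m_e**2)
print(f"F_electric / F_gravity for two electrons ~ {ratio:.1e}")  # ~4.2e42
```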
-
This is not true. It's not electromagnetic forces, but electron forces--- exclusion principle forces--- that keep the chair stable. -1 – Ron Maimon Oct 23 '11 at 17:58
@RonMaimon.. you mean Pauli's exclusion?? That can only describe the degenerate electronic configuration... not the stability of everyday matter! – Vineet Menon Oct 24 '11 at 6:31
it describes the stability of everyday matter, because this is a highly degenerate eletronic configuration in the external field provided by the nuclei. – Ron Maimon Oct 24 '11 at 7:16
i have read the answer by @FrankH; it seems to me that it explains the quantum origins of the EM forces? Is it true?? Anyway, thanks for that info regarding stability of matter.. – Vineet Menon Oct 25 '11 at 5:44
Because the electromagnetic force, the force that keeps atoms together, is much, much stronger than the gravitational force. So the only thing that the gravitational force can do, given the mass of the earth ($F = mg$), is bring the atoms closer; but as we go down to the atom, the electromagnetic force overcomes the gravitational force and keeps everything as it is.
-
http://www.physicsforums.com/showthread.php?t=668424
## Cross Product
What is the cross product of a constant and a vector? I know that the cross product between two vectors is the area of the parallelogram those two vectors form. My intuition tells me that since a constant is not a vector, it would only be multiplying with a vector when in a cross product with one. Since the vector will only grow larger in magnitude, there would be zero area in the parallelogram formed because there is no parallelogram.
The cross product is only defined between vectors of $\mathbb{R}^3$. The cross of a constant and a vector is not defined.
Quote by Lame Joke "What do you get when you cross a mountain-climber with a mosquito?" "Nothing: you can't cross a scaler with a vector"
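A toy sketch (mine, not from the thread) of the point: the operation simply has no definition when one argument is a scalar, while "a constant times a vector" is ordinary scalar multiplication:

```python
def cross(a, b):
    """Cross product, defined only for two 3-vectors."""
    if len(a) != 3 or len(b) != 3:
        raise TypeError("cross product needs two 3-vectors")
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def scale(c, a):
    """What 'a constant times a vector' actually is: scalar multiplication."""
    return tuple(c * x for x in a)

print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1)
print(scale(5, (1, 2, 3)))           # (5, 10, 15)
# cross(5, (1, 2, 3))  -> raises TypeError: a scalar has no components
```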
So if I had an equation that contains a term that has a cross product of a constant and a vector, do I just cross it out of the equation? (It is in an additive term, so crossing it out would be okay.) That's an awesome joke(:
Quote by quantumfoam So if I had an equation that contains a term that has a cross product of a constant and a vector, do I just cross it out of the equation? (It is in an additive term, so crossing it out would be okay.) That's an awesome joke(:
Can you give a specific example?
Sure! An equation like F=π[hXh+cXh] where h is a vector and c is a constant.
Quote by quantumfoam Sure! An equation like π[hXh+cXh] where h is a vector and c is a constant.
That doesn't really make any sense.
F is a vector.
F=π[hXh+cXh] Sorry about not adding the equality.
Would the term containing the cross product of the constant c and vector h in the above equation just be zero? Or am I able to take cross it out of the above equation?
Quote by quantumfoam Would the term containing the cross product of the constant c and vector h in the above equation just be zero? Or am I able to take cross it out of the above equation?
No. As it stands, your equation makes no sense. You can't take the cross product of a scalar and a vector.
Damn that stinks. Even if the c was a constant?
Quote by quantumfoam Damn that stinks. Even if the c was a constant?
Does this equation appear in some book or anything? Can you provide some more context?
Well I made it up haha. I'm sorry. I'm new at this. Do you think you can make an equation that makes sense? Like the one I attempted but failed at.
Quote by quantumfoam Well I made it up haha. I'm sorry. I'm new at this. Do you think you can make an equation that makes sense? Like the one I attempted but failed at.
It only makes sense if you take the cross of a vector and a vector.
What were you attempting to do?? What led you to this particular equation?
Well, the h is a vector that represents a magnetic field strength. In the definition of a current, I = dq/dt, multiplying both sides by a small length ds would give the magnetic field produced by a moving charge. (dq/dt)ds turns into dq(ds/dt), which turns into v dq, where dq is a small piece of charge and v is the velocity of the total charge. Integrating both sides of I ds = v dq would give the total magnetic field. For a constant velocity, the right side of the above equation turns into vq + c, where c is some constant. Now I get the equation h = vq + c. Solving for qv gives me h - c = qv. In the equation for the magnetic force on a moving charge, F = qv x B, I substituted h - c for qv. B turns into uh, where u is the permeability of free space. I substitute uh for B in the magnetic force equation and get F = u[hxh - cxh]. I want the cxh term to go away.
Does that sort of help?
I don't understand any of what you said, but my physics is very bad. I'll move this to the physics section for you.
http://math.stackexchange.com/questions/250527/fun-question-anyone-know-why-e-eulers-number-was-chosen-for-wave-functio?answertab=oldest
# “Fun” question: anyone know why $e$ (Euler's Number) was chosen for wave functions?
First, let me say that this is merely something I have always wondered about, and can never seem to find a good reference for. I simply want to know... the geek in me.
Why was $e$ (Euler's Number) chosen for wave function descriptions? For instance:
$$\Phi(x, t) = Ae^{i(kx - \omega t)}$$
It's really the $i$ that's doing the work of making a circular form here, while the $e$ is simply making the scaling more friendly to what we're used to. For instance, let's compare $2^{ix}$ versus $e^{ix}$. When $x=0$, they are both 1. To get them both to reach $i$, $x = \pi / 2$ for $e^{ix}$ and $x \approx 2.26618$ for $2^{ix}$. Similarly, for all the other quadrants of the circle, an equivalent factor can be found for $2^{ix}$, scaling linearly, of course.
So the scaling might look a bit less "pretty", but it is completely functional using $2^{ix}$ instead of $e^{ix}$.
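A quick numerical check of the crossing points claimed above (just illustrative):

```python
import cmath, math

x = math.pi / 2
print(cmath.exp(1j * x))                 # ~1j : e^{i pi/2} = i

# 2^{ix} = e^{i x ln 2}, so it reaches i at x = (pi/2) / ln 2
x2 = (math.pi / 2) / math.log(2)
print(x2)                                # 2.26618... as in the question
print(cmath.exp(1j * x2 * math.log(2)))  # ~1j again
```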
So, I guess my question is twofold:
• Why is the wave equation using $e$, other than because it supplies the "proper" scaling factor to make it friendlier with circular equations? (I.e., $x = 2\pi$ behaves like $x = 0$, bringing us back to where we started.)
• What is it about $e$ that makes the scaling work out? Euler's number was derived from $\lim_{n\to\infty} (1 + 1/n)^n$, which doesn't, to me, suggest anything particularly circular about it. (In fact, from that definition, it also doesn't immediately suggest why its derivative is equal to itself, either, but that's a different question for another day!) Just seems awful serendipitous to me, too much so, which makes me suspect a connection I don't know about...
Thanks in advance!
Mike
-
Thanks to whomever made all of the equations look prettier! Now I know how to add equations properly in this forum. – Mike Williamson Dec 4 '12 at 19:11
## 2 Answers
It likely relies on the fact that
$$e^{ix} = \cos{x} + i\sin{x}$$
whereas
$$2^{ix} = \cos{(\ln{(2)} x)} + i\sin{(\ln{(2)}x)}$$
which leads to ugly formulas.
EDIT: This is a great resource, by John Cook, on why $e$ is used: http://www.johndcook.com/blog/2012/11/15/logarithms/ It deals with logarithms, but all the arguments hold here too.
DOUBLE EDIT: I'll elaborate a little bit more. The Laplacian operator is defined by $\Delta f = \nabla \cdot \nabla f$, and in the 1d case it is simply the second derivative. The eigenfunctions and eigenvalues for (minus) the Laplacian are defined as the solutions to
$$-\Delta f = \lambda f$$ This is also the mathematical definition of a drum, which has lots of circular and sinusoidal properties. One particular solution (for the 1d case) is $\lambda = 1, f(x) = e^{ix}$. But, in general, the solution is
$$e^{\sqrt{\lambda}\;ix}$$
which can be molded into any exponential. For example, setting $\lambda = (\log{2})^2$ gives us $2^{ix}$. So what does this mean?
It means there is nothing special about $e$!! The real connection is between all complex exponentials and circles. We merely choose $e$ for mathematical convenience.
-
Hi, yes, I think you are correct. But my question was more fundamental as to WHY this is the case? No definitions of e that I have run across ever use trigonometry to define the number. So how does it "magically" also have this incredibly useful trigonometric property? Doesn't that seem downright crazy to anyone else? – Mike Williamson Dec 4 '12 at 19:09
@MikeWilliamson The trigonometric link comes from reconciling the power series expansion of $e^x$ with the power series expansions of $\sin$ and $\cos$. – Arkamis Dec 4 '12 at 19:59
Thanks for the "double edit"! The comment regarding the solution for a drum and the connection between ALL complex exponentials and circles is great! I guess my problem was that I was intuitively seeing what you wrote elegantly & explicitly, but not recognizing that the real "magic" is in recognizing that complex exponentials are "magically circular". That's easier for me to get my head around, since it is simply the easiest / simplest way to write a circle as a function: i^x is itself circular, where x is any real number. Thanks so much!! – Mike Williamson Dec 6 '12 at 0:16
By using a base of $e$, derivatives are simpler. As I think you recognize, $\frac{d}{dx}e^{ikx}=ik\,e^{ikx}$. Because of this, when initial value $v_0$ and initial relative slope $r_0$ are known, they fit right in: $v_0\,e^{ir_0x}$. All of this is uglier with a number other than $e$ for the base.
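A symbolic check of this point, assuming sympy (a throwaway sketch of mine):

```python
import sympy as sp

x, k = sp.symbols('x k')
print(sp.diff(sp.exp(sp.I * k * x), x))  # I*k*exp(I*k*x)        -- clean
print(sp.diff(2**(sp.I * x), x))         # 2**(I*x)*I*log(2)     -- the ln(2) tags along
```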
-
http://math.stackexchange.com/questions/271837/functional-analysis-hilbert-spaces/272698
# Functional analysis-Hilbert spaces
Let $X$ be an inner product space.
Show that $X$ is a Hilbert space if and only if for each continuous linear functional $L$ on $X$, there exists $z\in X$ such that $L(x)=\langle x,z\rangle$.
Here, one part is exactly the Riesz Representation Theorem.
How can I prove the converse result? That is, if for each continuous linear functional $L$ on $X$ there exists $z\in X$ such that $L(x)=\langle x,z\rangle$, then $X$ is a Hilbert space. Any help is appreciated.
Thanks!
-
## 3 Answers
Hint: Suppose $x_n$ is a Cauchy sequence in $X$.
Consider the linear functional $L(x) = \lim_n \langle x, x_n \rangle$.
-
How can I turn this into a Cauchy sequence in $\mathbb C$ or $\mathbb R$? That is the difficulty here. Can you please explain further? – ccc Jan 7 at 1:35
You don't "turn this into a Cauchy sequence in $\mathbb C$ or $\mathbb R$". Can you prove $L(x)$ exists, and that $L$ is a bounded linear functional? If so, take $z$ so that $L(x) = \langle x, z \rangle$. The limit of the Cauchy sequence $x_n$ is going to be $z$. Can you estimate $\|x_n - z\|$? – Robert Israel Jan 7 at 2:27
@ robert,still its difficult for me to organize the answer for this.Can you please help me? – ccc Jan 7 at 3:44
Which part are you stuck on? – Robert Israel Jan 7 at 7:19
The basic idea here is to pick a sequence in $X$ and show that the limit is also in $X$. So how can I prove that $\lim_n\langle x,x_n\rangle=\langle x,z\rangle$? How do I estimate $\|x_n-z\|$? – ccc Jan 7 at 7:53
By the hints we know that $\langle x,x_n\rangle$ is a Cauchy sequence in $\mathbb{K}$. Hence it converges, and $L(x) = \lim_n \langle x,x_n\rangle$ is well defined. By hypothesis $L(x)=\langle x,z\rangle$ for some $z\in X$. By considering $L(x_n-z)$ we get $x_n \to z$. So every Cauchy sequence in $X$ converges, so $X$ is a Hilbert space.
-
Why -1? Is it wrong? Please correct me! – Johan Jan 9 at 9:06
Note that the limit $\lim_n \langle x,x_n \rangle$ exists because
$$| \langle x,x_n \rangle- \langle x,x_m \rangle |= |\langle x,x_n-x_m \rangle|\leq\|x\|\|x_n-x_m\|$$
Then $x_n$ Cauchy implies that $\langle x,x_n \rangle$ is Cauchy (in $\mathbb R$ or $\mathbb C$).
The above defines a continuous linear functional $L(x)=\lim\limits_{n} \langle x,x_n \rangle$.
By hypothesis there is a $z\in X$ such that $L(x)=\langle x,z \rangle$. But even if we knew in advance that $X$ were Hilbert, this alone would let us conclude only that $x_n$ converges to $z$ weakly.
This is not an answer, of course; it is a comment that does not fit in the comment box.
@RobertIsrael,@ccc Can you give another hint?
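One possible way to finish, sketched here as an addendum (my own, not from the thread): expand the norm,
$$\|x_n-z\|^2=\|x_n\|^2-2\operatorname{Re}\langle x_n,z\rangle+\|z\|^2 .$$
Here $\langle x_n,z\rangle=\overline{\langle z,x_n\rangle}\to\overline{L(z)}=\|z\|^2$, while the Cauchy property and boundedness of $(x_n)$ give
$$\bigl|\,\|x_n\|^2-L(x_n)\bigr|=\lim_m\bigl|\langle x_n,x_n-x_m\rangle\bigr|\le\|x_n\|\sup_{m\ge n}\|x_n-x_m\|\to 0,$$
and $L(x_n)=\langle x_n,z\rangle\to\|z\|^2$, so $\|x_n\|^2\to\|z\|^2$ as well. Hence $\|x_n-z\|^2\to\|z\|^2-2\|z\|^2+\|z\|^2=0$, i.e. $x_n\to z$ in norm.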
-
http://physics.stackexchange.com/questions/tagged/speed+energy
# Tagged Questions
### If an electric car were to drive without having to stop, would the range be greatly affected by the speed at which the vehicle is moving?
Of course aerodynamics factors into this question, and the faster you are moving the more air you have to push out of your way, the more energy you use. But would the difference be only a small ...
### Intuitive understanding of the equation for kinetic energy [duplicate]
Possible Duplicate: Why does kinetic energy increase quadratically, not linearly, with speed? I know this is one of the easiest equations out there and I've used it for years but its still ...
### Relation between Newtons and Kilograms
Work is expressed as W = F*d, where F is in newtons, d is in meters, and the result ...
### Fuel usage at the same constant rpm at different gears
I've had a discussion with my father today, about the fuel usage of a vehicle at the same rpm, but a different gear. He claims that the following situations have the same fuel usage: ...
### How to calculate fuel consumption of car (mpg) from speed and accleration knowing mass, drag coeff and rolling resistance?
How can I calculate the current (instantaneous) mpg of my car if I know the speed and acceleration of the car? From reading various answers for the "car going level or up/down hill" question asked ...
### Why does kinetic energy increase quadratically, not linearly, with speed?
As Wikipedia says: [...] the kinetic energy of a non-rotating object of mass $m$ traveling at a speed $v$ is $mv^2/2$. Why does this not increase linearly with speed? Why does it take so much ...
http://mathhelpforum.com/advanced-algebra/165223-free-group.html
# Thread:
1. ## Free Group
Hi,
Prove that every free group is torsion-free and is non-abelian if its rank is $\geq 2$.
The second part is easy.
What about the first part? I think that we should suppose that this free group $F[X]$ has torsion (but I'm not sure).
Thanks for any help.
2. Originally Posted by Arczi1984
Hi,
Prove that every free group is torsion-free and is non-abelian if its rank is $\geq 2$.
The second part is easy.
What about the first part? I think that we should suppose that this free group $F[X]$ has torsion (but I'm not sure).
Thanks for any help.
This is an easy task if you know already about the normal form of an element in a free group... besides this, we can work with $F(2)$, the free group of rank two, which contains an isomorphic copy of the free group of any rank between 2 and countably infinite.
If you don't know yet about the normal form, then you can do as follows: suppose $w:=w(x,y)$ is a non-trivial word on $x,y$ in the free group $F(x,y)$, and $ord(w)=k<\infty$. We can carry on now by induction on the length of this word: if $l(w)=1\Longrightarrow w(x,y)=x,y,x^{-1},\,\,or\,\,y^{-1}$; suppose $w=x\Longrightarrow$ defining the function $f:\{x,y\}\rightarrow C_{k+1}=\langle c\rangle$, the cyclic group of order $k+1$, by $f(x):=c\,,\,f(y)=1$, we know (by the universal property of free groups) that there exists a unique group homomorphism $\phi: F(x,y)\rightarrow C_{k+1}$ extending the function $f$. But this is impossible, since then it must be $\phi(x)=c\Longrightarrow 1\neq c^k=\phi(x)^k=\phi(x^k)=\phi(1)=1$, thus getting a straightforward contradiction. Try now to extend this idea when $l(w)>1$
Tonio
Ps. Or read about the normal reduced form for an element in a free group. The proof then is almost trivial.
3. Thanks for the help. I'll try to do the rest.
Could you show how you would do this task using the normal reduced form? It would be nice to see a second solution.
4. Originally Posted by Arczi1984
Thanks for the help. I'll try to do the rest.
Could you show how you would do this task using the normal reduced form? It would be nice to see a second solution.
If $w(x,y)$ is in reduced normal form, then we can assume that its first letter is not the inverse of its last one (why?), so we get at once that $l(w)=m\Longrightarrow l(w^r)=rm$; so if the length of the word is positive, all its powers also have positive length and are thus not the unit element.
Tonio
5. We can assume this because the word is cyclically reduced (am I right?).
I still do not see how this proves the fact that every free group is torsion-free.
Could you show this with details, of course, if you have time? (I'm thinking with difficulty, so this is unclear for me.) I'll be glad for this.
6. Originally Posted by Arczi1984
We can assume this because the word is cyclically reduced (am I right?).
I still do not see how this proves the fact that every free group is torsion-free.
Could you show this with details, of course, if you have time? (I'm thinking with difficulty, so this is unclear for me.) I'll be glad for this.
Well, you know that ONLY the empty word (of length zero) in free generators represents the unit element, so if ANY non-trivial element's word increases its length as we multiply this element by itself, then its length won't ever be zero, and this means the element raised to a non-zero power is never the unit element <==> the element has infinite order.
Tonio
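The length argument can be watched numerically with a toy encoding of words (letters as signed integers, inverse = negation; my own sketch, not from the thread):

```python
def reduce_word(w):
    """Freely reduce a word; e.g. x = 1, x^{-1} = -1, y = 2, y^{-1} = -2."""
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()          # cancel an adjacent g g^{-1} pair
        else:
            out.append(g)
    return out

def power(w, r):
    res = []
    for _ in range(r):
        res = reduce_word(res + w)
    return res

w = [1, 2]                      # the cyclically reduced word xy
for r in range(1, 5):
    print(r, len(power(w, r)))  # lengths 2, 4, 6, 8: never 0, so xy has infinite order
```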
7. Ok, I'll look at this tomorrow. Now it's a little late
http://en.wikipedia.org/wiki/Descriptive_set_theory
# Descriptive set theory
In mathematical logic, descriptive set theory is the study of certain classes of "well-behaved" subsets of the real line and other Polish spaces. As well as being one of the primary areas of research in set theory, it has applications to other areas of mathematics such as functional analysis, ergodic theory, the study of operator algebras and group actions, and mathematical logic.
## Polish spaces
Descriptive set theory begins with the study of Polish spaces and their Borel sets.
A Polish space is a second countable topological space that is metrizable with a complete metric. Equivalently, it is a complete separable metric space whose metric has been "forgotten". Examples include the real line $\mathbb{R}$, the Baire space $\mathcal{N}$, the Cantor space $\mathcal{C}$, and the Hilbert cube $I^{\mathbb{N}}$.
### Universality properties
The class of Polish spaces has several universality properties, which show that there is no loss of generality in considering Polish spaces of certain restricted forms.
• Every Polish space is homeomorphic to a Gδ subspace of the Hilbert cube, and every Gδ subspace of the Hilbert cube is Polish.
• Every Polish space is obtained as a continuous image of Baire space; in fact every Polish space is the image of a continuous bijection defined on a closed subset of Baire space. Similarly, every compact Polish space is a continuous image of Cantor space.
Because of these universality properties, and because the Baire space $\mathcal{N}$ has the convenient property that it is homeomorphic to $\mathcal{N}^\omega$, many results in descriptive set theory are proved in the context of Baire space alone.
## Borel sets
The class of Borel sets of a topological space X consists of all sets in the smallest σ-algebra containing the open sets of X. This means that the Borel sets of X are the smallest collection of sets such that:
• Every open subset of X is a Borel set.
• If A is a Borel set, so is $X \setminus A$. That is, the class of Borel sets are closed under complementation.
• If An is a Borel set for each natural number n, then the union $\bigcup A_n$ is a Borel set. That is, the Borel sets are closed under countable unions.
A fundamental result shows that any two uncountable Polish spaces X and Y are Borel isomorphic: there is a bijection from X to Y such that the preimage of any Borel set is Borel, and the image of any Borel set is Borel. This gives additional justification to the practice of restricting attention to Baire space and Cantor space, since these and any other Polish spaces are all isomorphic at the level of Borel sets.
### Borel hierarchy
Each Borel set of a Polish space is classified in the Borel hierarchy based on how many times the operations of countable union and complementation must be used to obtain the set, beginning from open sets. The classification is in terms of countable ordinal numbers. For each nonzero countable ordinal α there are classes $\mathbf{\Sigma}^0_\alpha$, $\mathbf{\Pi}^0_\alpha$, and $\mathbf{\Delta}^0_\alpha$.
• Every open set is declared to be $\mathbf{\Sigma}^0_1$.
• A set is declared to be $\mathbf{\Pi}^0_\alpha$ if and only if its complement is $\mathbf{\Sigma}^0_\alpha$.
• A set A is declared to be $\mathbf{\Sigma}^0_\delta$, $\delta > 1$, if there is a sequence $\langle A_i \rangle$ of sets, each of which is $\mathbf{\Pi}^0_{\lambda(i)}$ for some $\lambda(i) < \delta$, such that $A = \bigcup A_i$.
• A set is $\mathbf{\Delta}^0_\alpha$ if and only if it is both $\mathbf{\Sigma}^0_\alpha$ and $\mathbf{\Pi}^0_\alpha$.
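For example, the set $\mathbb{Q}$ of rational numbers is $\mathbf{\Sigma}^0_2$ as a subset of the real line: it is the countable union $\bigcup_{q \in \mathbb{Q}} \{q\}$ of singletons, each of which is closed and hence $\mathbf{\Pi}^0_1$.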
A theorem shows that any set that is $\mathbf{\Sigma}^0_\alpha$ or $\mathbf{\Pi}^0_\alpha$ is $\mathbf{\Delta}^0_{\alpha + 1}$, and any $\mathbf{\Delta}^0_\beta$ set is both $\mathbf{\Sigma}^0_\alpha$ and $\mathbf{\Pi}^0_\alpha$ for all $\alpha > \beta$. Thus the hierarchy has the following structure, where arrows indicate inclusion.
$\begin{matrix} & & \mathbf{\Sigma}^0_1 & & & & \mathbf{\Sigma}^0_2 & & \cdots \\ & \nearrow & & \searrow & & \nearrow \\ \mathbf{\Delta}^0_1 & & & & \mathbf{\Delta}^0_2 & & & & \cdots \\ & \searrow & & \nearrow & & \searrow \\ & & \mathbf{\Pi}^0_1 & & & & \mathbf{\Pi}^0_2 & & \cdots \end{matrix}\begin{matrix} & & \mathbf{\Sigma}^0_\alpha & & & \cdots \\ & \nearrow & & \searrow \\ \quad \mathbf{\Delta}^0_\alpha & & & & \mathbf{\Delta}^0_{\alpha + 1} & \cdots \\ & \searrow & & \nearrow \\ & & \mathbf{\Pi}^0_\alpha & & & \cdots \end{matrix}$
### Regularity properties of Borel sets
Classical descriptive set theory includes the study of regularity properties of Borel sets. For example, all Borel sets of a Polish space have the property of Baire and the perfect set property. Modern descriptive set theory includes the study of the ways in which these results generalize, or fail to generalize, to other classes of subsets of Polish spaces.
## Analytic and coanalytic sets
Just beyond the Borel sets in complexity are the analytic sets and coanalytic sets. A subset of a Polish space X is analytic if it is the continuous image of a Borel subset of some other Polish space. Although any continuous preimage of a Borel set is Borel, not all analytic sets are Borel sets. A set is coanalytic if its complement is analytic.
## Projective sets and Wadge degrees
Many questions in descriptive set theory ultimately depend upon set-theoretic considerations and the properties of ordinal and cardinal numbers. This phenomenon is particularly apparent in the projective sets. These are defined via the projective hierarchy on a Polish space X:
• A set is declared to be $\mathbf{\Sigma}^1_1$ if it is analytic.
• A set is $\mathbf{\Pi}^1_1$ if it is coanalytic.
• A set A is $\mathbf{\Sigma}^1_{n+1}$ if there is a $\mathbf{\Pi}^1_n$ subset B of $X \times X$ such that A is the projection of B to the first coordinate.
• A set A is $\mathbf{\Pi}^1_{n+1}$ if there is a $\mathbf{\Sigma}^1_n$ subset B of $X \times X$ such that A is the projection of B to the first coordinate.
• A set is $\mathbf{\Delta}^1_{n}$ if it is both $\mathbf{\Pi}^1_n$ and $\mathbf{\Sigma}^1_n$ .
As with the Borel hierarchy, for each n, any $\mathbf{\Delta}^1_n$ set is both $\mathbf{\Sigma}^1_{n+1}$ and $\mathbf{\Pi}^1_{n+1}.$
The properties of the projective sets are not completely determined by ZFC. Under the assumption V = L, not all projective sets have the perfect set property or the property of Baire. However, under the assumption of projective determinacy, all projective sets have both the perfect set property and the property of Baire. This is related to the fact that ZFC proves Borel determinacy, but not projective determinacy.
More generally, the entire collection of sets of elements of a Polish space X can be grouped into equivalence classes, known as Wadge degrees, that generalize the projective hierarchy. These degrees are ordered in the Wadge hierarchy. The axiom of determinacy implies that the Wadge hierarchy on any Polish space is well-founded and of length Θ, with structure extending the projective hierarchy.
## Borel equivalence relations
A contemporary area of research in descriptive set theory studies Borel equivalence relations. A Borel equivalence relation on a Polish space X is a Borel subset of $X \times X$ that is an equivalence relation on X.
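For example, eventual equality $E_0$ on Cantor space, defined by $x \mathrel{E_0} y$ if and only if $x(n) = y(n)$ for all but finitely many $n$, is a Borel equivalence relation; a central theme of the area is comparing such relations by Borel reducibility.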
## Effective descriptive set theory
The area of effective descriptive set theory combines the methods of descriptive set theory with those of generalized recursion theory (especially hyperarithmetical theory). In particular, it focuses on lightface analogues of hierarchies of classical descriptive set theory. Thus the hyperarithmetic hierarchy is studied instead of the Borel hierarchy, and the analytical hierarchy instead of the projective hierarchy. This research is related to weaker versions of set theory such as Kripke-Platek set theory and second-order arithmetic.
## References
• Kechris, Alexander S. (1994). Classical Descriptive Set Theory. Springer-Verlag. ISBN 0-387-94374-9.
• Moschovakis, Yiannis N. (1980). Descriptive Set Theory. North Holland. ISBN 0-444-70199-0. Second edition available online
http://math.stackexchange.com/questions/17457/the-meaning-of-unique-expression
# The meaning of “unique expression”
Let $G$ be abelian group, and let $A_k$ be a family of subgroups of $G$. Prove that $G=\sum A_k$ (internal) if and only if every non-zero element $g\in G$ has a unique expression of the form $g=a_{k_1}+...+a_{k_n}$, where $a_{k_i} \in A_{k_i}$, the $k_i$ are distinct and each $a_{k_i}\neq 0$.
I have a hunch that the term "$g$ has a unique expression" here means that $A_k \cap A_j =\{0\}$. If so, how do we reason about it and prove it to be so?
-
## 1 Answer
No, it does not mean that. It means that $\displaystyle A_i\cap(\sum_{j\neq i}A_j)=0$ for all $i$.
Later: Well, actually, it is equivalent to that. That «$g$ has a unique expression of the form $a_1+\cdots+a_n$ with $a_i\in A_i$» means exactly that
• first, there exist $a_1,\dots,a_n$ with $a_i\in A_i$ for each $i$ such that $g=a_1+\cdots+a_n$, and
• second, that whenever you have elements $a_1,\dots,a_n, b_1,\dots,b_n$ with $a_i,b_i\in A_i$ for each $i$ such that $g=a_1+\cdots+a_n=b_1+\cdots+b_n$, then $a_i=b_i$ for all $i$.
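(For completeness, a sketch of the equivalence: if $0\neq x\in A_i\cap\sum_{j\neq i}A_j$, then $x$, viewed as a one-term sum from $A_i$ and as a sum of elements of the other subgroups, has two distinct expressions, contradicting uniqueness; conversely, if $a_i\neq b_i$ for some $i$ in the second condition, then $a_i-b_i=\sum_{j\neq i}(b_j-a_j)$ is a nonzero element of $A_i\cap\sum_{j\neq i}A_j$.)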
-
Oh ok. but what's the difference? And also how do we show that uniqueness implies $A_i\cap(\sum_{j\neq i}A_j)=0$. – Seoral Jan 14 '11 at 6:19
Consider the following subgroups of $\mathbb Z^2$: $A_1=\langle(1,0)\rangle$, $A_2=\langle(1,1)\rangle$, $A_3=\langle(0,1)\rangle$. Now show that for all $i\neq j$ you have $A_i\cap A_j=0$, yet it is not true that $A_1\cap(A_2+A_3)=0$, for example. – Mariano Suárez-Alvarez♦ Jan 14 '11 at 6:22
As for how to prove it: you should try to do it yourself for a while first... – Mariano Suárez-Alvarez♦ Jan 14 '11 at 6:24
Ok, i think i know how to prove it already.thanks. – Seoral Jan 14 '11 at 7:23
http://www.physicsforums.com/showthread.php?p=4271094
Physics Forums
## Understanding Vitali Sets...
I'm not sure if I understood Vitali Sets correctly, so I just want to write what I understood (because I don't know if it's right):
We have an equivalence relation where $x \sim y \iff x-y \in Q$. So if we look at the interval [0,1], each irrational number will have its own equivalence class...and we will have one equivalence class for all rational numbers, right? Now, using the axiom of choice, we take one element from each equivalence class as a representative and form the set A. And then we form a new collection of sets $A_q = \{q+a | a \in A\}$. We know that this collection has a countable number of sets, because each set corresponds to one rational number between 0 and 1...and the rational numbers are countable. We also know that the sets are disjoint. Then, when we take the union of these sets, we just need to add their measure. But we know that each set has the same measure, since measure is translation invariant. But we also know that there are an infinite number of rational numbers between 0 and 1, so there is an infinite number of sets...so the measure must be infinity. But that can't be true since [0,1] has measure 1. So that's a contradiction, and the Vitali set is not measurable.
Do you think my understanding is correct? If not can you please correct me?
I think you got it.....
Quote by Artusartos So if we look at the interval [0,1], each irrational number will have its own equivalence class..
It isn't clear what you mean by that.
.and we will have one equivalence class for all rational numbers, right?
Yes, if you mean to say that all rational numbers are in the same equivalence class.
Quote by Artusartos Then, when we take the union of these sets, we just need to add their measure. But we know that each set has the same measure, since measure is translation invariant. But we also know that there are an infinite number of rational numbers between 0 and 1, so there is an infinite number of sets...so the measure must be infinity. But that can't be true since [0,1] has measure 1. So that's a contradiction, and the Vitali set is not measurable. Do you think my understanding is correct? If not can you please correct me?
Pretty close, but I would state it as follows. As you said, if ##A## is measurable, then each ##A_q## is measurable and has the same measure, due to translation invariance. Also, the ##A_q## form a countable partition of ##[0,1]##, so we must have
$$\sum_{q\in \mathbb{Q}} m(A_q) = 1$$
But all of the ##m(A_q)## are equal to ##m(A)##, so the sum on the left is either zero or infinity, depending on whether ##m(A) = 0## or ##m(A) > 0##. In either case we have a contradiction.
Quote by Artusartos So if we look at the interval [0,1], each irrational number will have its own equivalence class
No, a single equivalence class is of the form ##\{x + q \textrm{ }|\textrm{ } q \in \mathbb{Q}\}##, so every equivalence class contains a countably infinite number of elements. There is one equivalence class containing all of the rationals (and no irrationals). Every other equivalence class contains a countably infinite number of irrationals (and no rationals).
There are of course uncountably many equivalence classes. ##A## contains one element from each equivalence class, by construction. The same is true of each ##A_q##.
Quote by jbunniii Pretty close, but I would state it as follows. As you said, if ##A## is measurable, then each ##A_q## is measurable and has the same measure, due to translation invariance. Also, the ##A_q## form a countable partition of ##[0,1]##, so we must have $$\sum_{q\in \mathbb{Q}} m(A_q) = 1$$ But all of the ##m(A_q)## are equal to ##m(A)##, so the sum on the left is either zero or infinity, depending on whether ##m(A) = 0## or ##m(A) > 0##. In either case we have a contradiction.
Thanks
http://math.stackexchange.com/questions/279965/smoothening-viscosity-solution-to-classical-solution/280040
# Smoothening viscosity solution to classical solution
Let $A$ be a linear second order differential operator with constant coefficients defined on real-valued functions of one-variable. Suppose that we have that for an upper-semicontinuous function $u:\mathbb{R}\to\mathbb{R}$ $$Au \leq 0\mbox{ holds in the viscosity sense.}$$ Let $u^{\epsilon}(x) = \int_{-1}^1u(x-\epsilon y)\varphi(y)\mbox{dy}$, where $\varphi$ is the standard mollifier. I would like to say that $$Au^{\epsilon}\leq 0\mbox{ in the classical sense}.$$
I am convinced that this must be true in this special and simple case. I have tried a few approaches, but as is often the case with viscosity theory, the devil is in the details and I cannot seem to write something convincing. Is anyone familiar with a general theorem along the lines of "mollification preserves sub-solutions of linear constant-coefficient PDEs"?
Attempt at a solution: We suppose the contrary, that at some point $x_0,$ we have $Au^{\epsilon}(x_0) > 0.$ By continuity, we can extend this further to strict positivity on some open neighborhood $N$ of $x_0.$ By upper semi-continuity, I know that $u-u^{\epsilon}$ will achieve a maximum on closure $\overline{N}$. If it is a local maximum, that is great, I'm done. However, if it is not, I am looking to construct another test function, say $\psi$ such that $A\psi \geq 0$ and $u - u^{\epsilon} - \psi$ achieves a local maximum inside $N$. Clearly, $\psi$ needs to depend somehow on the convergence of $u^{\epsilon}$ to $u$. But how to construct?
Edit: problem statement has been amended to say constant coefficients. Thanks Willie Wong for pointing that out.
-
## 1 Answer
Let $\chi$ be an odd, smooth function such that $\chi(x) > 0$ if $x < 0$. Let $D$ be the standard derivative operator. Let $A = \chi D^2$.
Let $u(x) = -(x+1)^2 + 1$ if $x < 0$ and $0$ otherwise. Away from $x = 0$, $u(x)$ is a bona fide subsolution, so $u(x)$ is also a viscosity subsolution there. At $0$, we easily see that while $u$ is continuous, it is not differentiable. And in fact we see by convexity that any $C^2$ function $v$ on a neighborhood of zero such that $v(0) = 0$ cannot satisfy $v(x) \geq u(x)$. Hence the viscosity condition is satisfied vacuously.
But for $0 < x < \epsilon$
$$Au^{\epsilon}(x) = -\chi(x) \int_{x/\epsilon}^1 D^2(x + 1 - \epsilon y)^2 \phi(y) dy = -\chi(x) \int_{x/\epsilon}^1 2 \phi(y) dy > 0$$
and so it is not a classical subsolution. So the statement as given is not true.
In general, only constant coefficient differential operators can commute with mollification. As long as you allow variable coefficients, you can potentially get into trouble: variable coefficients break translation invariance, and so the mollifying process which involves adding translated versions of the function is no longer (necessarily) taking an infinite linear combination of subsolutions, and hence the mollified function does not have to be a subsolution in general.
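As a quick numerical sanity check of that last point (a sketch only; it exploits the fact that the discrete central-difference and convolution operators are both translation invariant, so they agree exactly away from the boundary):

```
import numpy as np

# Check numerically that mollification commutes with the constant-
# coefficient operator D^2, i.e. (u^eps)'' = (u'')^eps in the interior,
# so a sign condition like Au <= 0 survives mollification.
h, eps = 1e-3, 0.1
x = np.arange(-3.0, 3.0, h)
u = np.sin(x)  # any smooth function stands in for a subsolution here

# bump-function weights for the standard mollifier, sampled on (-eps, eps)
t = np.arange(-eps + h, eps, h)
w = np.exp(-1.0 / (1.0 - (t / eps) ** 2))
w /= w.sum()  # discrete normalization: the weights sum to 1

u_eps = np.convolve(u, w, mode="same")          # u^eps = u * phi_eps
d2_of_moll = np.gradient(np.gradient(u_eps, h), h)
moll_of_d2 = np.convolve(np.gradient(np.gradient(u, h), h), w, mode="same")

interior = slice(len(w), -len(w))               # discard convolution edges
print(np.max(np.abs((d2_of_moll - moll_of_d2)[interior])))  # ~ 0, round-off only
```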
-
I will edit the question. I only have constant coefficients, in fact. Thanks for the interesting counterexample, I'll be sure to keep that in mind in the future. – fouryear Jan 16 at 14:08
I am pretty sure that if the coefficients are sufficiently smooth and satisfy some sort of ellipticity condition we can essentially freeze coefficients and obtain the same result as the constant coefficient case. But it appears that the constant coefficient case is a bit more difficult than I originally imagined. I will update if I find a trick. – Willie Wong♦ Jan 17 at 9:10
I would also be interested in the solution to something slightly simpler and I could try to take it from there. One difficulty seems to be the fact that mollification does not approximate well semi-continuous functions. After a bit of searching, I've come across the notion of "strong" semi-continuity, which seems to be enough for the difficulty to go away. But I have no such strong semi-continuity in my case. – fouryear Jan 17 at 10:03
I was also trying to apply your idea of translations of sub-solutions, thinking that I'd perhaps be able to approximate the $u^{\epsilon}$ by averaging with a bunch of dirac measures and then take a limit to get the Lebesgue integral and go ahead with the contradiction, but it seems that the discontinuity set of $u$ could be too big for that idea to work. – fouryear Jan 17 at 10:05
http://mathoverflow.net/questions/72699?sort=oldest
## Maximum-bend TSP
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I've seen minimum-bend TSP studied, has anyone looked at max-bend TSP?
As a special case, I'm interested in the maximum number of turns a hamiltonian path can take in an $n \times n$ square grid.
I think it should be $n^2 - n$ for even $n$ and $n^2 - n - 1$ for odd $n$, but does anyone know a proof?
-
Hi Michael, would you care spelling out the meaning of the acronym TSP? I don't know what it stands for. – André Henriques Aug 11 2011 at 20:39
I suspect he is using Traveling Salesman Path for Hamiltonian path. In many contexts, bends don't make much sense, but on the grid it becomes an interesting problem. I suspect some region argument (divide the grid into somewhat similar regions) might demonstrate the poster's conjecture. Gerhard "Ask Me About System Design" Paseman, 2011.08.11 – Gerhard Paseman Aug 11 2011 at 21:12
It's a serious breach of etiquette to post this here just hours after posting it to math.stackexchange math.stackexchange.com/questions/56861/… especially without acknowledging that you are doing so. Voting to close. – Gerry Myerson Aug 11 2011 at 22:40
## 1 Answer
Let me write $f(n) = n^{2}-n$ for even $n$ and $f(n) = n^{2}-n-1$ for odd $n$.
It's certainly the case that you can do at least as well as $f(n)$. More precisely, there is a path with $f(n)$ turns that ends up at the top right corner of the grid, and which arrived there from the point below that.
It's easy to check this for $n=2$ or $n=3$, and we can handle the rest by induction (the inductive step being the reason it's important to generate examples that end up at the top right corner).
Suppose we have a path that works for an $(n-2) \times (n-2)$ grid; I'll call it the $(n-2)$-path. We proceed by extending this $(n-2)$-path to an $n$-path.
Place the $(n-2)$-path at the top left corner of the $n \times n$ grid, leaving two columns to the right and two rows below the path. Start from the end point at the top right corner of the $(n-2)$-path. Extend the path two grid points rightwards, to the edge of the grid.
Now we split into two cases. If $n$ is even, snake down the right side of the grid, then across the bottom of the grid, finishing, via a down-move, at the bottom left corner. This procedure adds $4n-6$ bends to the original path, and so has $(n-2)^2 - (n-2) + 4n-6 = n^{2}-n$ bends. Finally, rotate the resulting path to give you a path with $n^{2}-n$ bends that ends with an up-move at the top right corner.
If $n$ is odd, then snake down the right side of grid. There is one small modification due to the oddness: you have to stop snaking just before you hit the bottom right. (At this point I wish I knew how to draw a nice picture.) Then resume snaking along the bottom of the grid. As before, you finish at the bottom left corner, via a down-move. Again, this adds $4n-6$ bends to the original path, so has $(n-2)^{2}-(n-2)-1 + 4n-6 = n^{2}-n-1$ bends. Finally, rotate the path to give an $n$-path that ends up at the top right.
There must be some neat argument to show that you can't do better than $f(n)$, but I can't see it yet...
-
By looking at corners, you can get $n^2 - 4$ as an upper bound for large n. I suspect the conjectured bound is tight. It might be provable by dividing the grid into diagonal regions, but I have not thought it through. Gerhard "Ask Me About System Design" Paseman, 2011.08.11 – Gerhard Paseman Aug 11 2011 at 22:49
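For what it's worth, the conjectured values are easy to brute-force for small $n$ (a sketch, not from the thread; it enumerates Hamiltonian paths by backtracking and counts direction changes):

```
def max_turns(n):
    # maximum number of turns over all Hamiltonian paths in an n x n grid
    best = -1

    def dfs(path, visited, turns):
        nonlocal best
        if len(path) == n * n:
            best = max(best, turns)
            return
        x, y = path[-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in visited:
                extra = 0
                if len(path) >= 2:
                    px, py = path[-2]
                    # a turn occurs exactly when the direction changes
                    if (x - px, y - py) != (dx, dy):
                        extra = 1
                visited.add((nx, ny))
                path.append((nx, ny))
                dfs(path, visited, turns + extra)
                path.pop()
                visited.remove((nx, ny))

    for start in [(i, j) for i in range(n) for j in range(n)]:
        dfs([start], {start}, 0)
    return best

for n in (2, 3, 4):
    print(n, max_turns(n))  # expected, if the conjecture holds: 2, 5, 12
```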
http://mathhelpforum.com/algebra/152379-number-pattern.html
# Thread:
1. ## Number pattern
Hi, The question I have is:
Express a single generality that generalises all three related observations that
$1+3+5+3+1=3^2+2^2$
$1+3+5+7+9+7+5+3+1=5^2+4^2$
$1+3+1=2^2+1^2$
and show that your generalisation always holds true.
I know that the answer (i.e. the RHS of the $=$) is always $d^2+(d-1)^2$, where $d$ is the number of different terms
I also know that LHS is $a+(a+2)+(a+4)+...+(a+4)+(a+2)+a$, where a=1
but I'm not sure how to show all of this generally, so that it works for every case . Any ideas?
2. Have you studies arithmetic series before ? If you have you should without much difficulty be able to write down the formula for the sum of the first n odd numbers.
do you see how this relates to your problem ?
3. $1+3+5+\cdots+(2k-1)+(2k+1)+(2k-1)+\cdots+5+3+1$
$1+3+5+\cdots+(2k-1) = (2k-1)+\cdots+5+3+1 = \frac{(1+(2k-1))k}{2} = k^2$
Therefore:
$1+3+5+\cdots+(2k-1)+(2k+1)+(2k-1)+\cdots+5+3+1$
$= k^2 + (2k+1) + k^2$
$= (k+1)^2 + k^2$
4. Hello, cozza!
bobak is absolutely correct . . . Did you catch his hint?
Express a single generality that generalises all three related observations that:
. . $\begin{array}{ccc}1 + 3 + 1 &=& 2^2 + 1^2 \\ 1+3+5+3+1 &=& 3^2+2^2 \\ 1+3+5+7+9+7+5+3+1 &=& 5^2+4^2 \end{array}$
and show that your generalisation always holds true.
Look at the left side:
. . $\underbrace{1 + 3 + 5 + 7 + 9}_{\text{sum of first 5 odd numbers}} + \underbrace{7 + 5 + 3 + 1}_{\text{sum of first 4 odd numbers}}$
In general, we have: . $\underbrace{1 + 3 + 5 + 7 + \hdots + (2k-1)}_{\text{sum of first }k\text{ odd numbers}}$
This is an arithmetic series with:
. . first term $a = 1$, common difference $d = 2$, and $n = k$ terms.
Its sum is: . $S \:=\:\dfrac{k}{2}\bigg[2(1) + (k-1)2\bigg] \;=\;k^2$
Hence, the sum of the first $k$ odd numbers is $k^2.$ .[1]
Therefore, we have:
. . $\underbrace{1 + 3 + 5 + \hdots + (2n-1)}_{\text{sum of first }n\text{ odd numbers}} + \underbrace{(2n-3) + \hdots + 5 + 3 + 1}_{\text{sum of first }n-1\text{ odd numbers}} \;=\; n^2 + (n-1)^2$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Here is a visual "proof" of [1].
. . $\begin{array}{c} \bullet \end{array} \qquad \begin{array}{cc} \bullet & \bullet \\ \circ & \bullet\end{array} \qquad \begin{array}{ccc} \bullet & \bullet & \bullet \\ \circ & \circ & \bullet \\ \circ & \circ & \bullet \end{array} \qquad \begin{array}{cccc} \bullet & \bullet & \bullet & \bullet \\ \circ & \circ & \circ & \bullet \\ \circ & \circ & \circ & \bullet \\ \circ & \circ & \circ & \bullet \end{array}$
. . $^1$ . . . . . $^{1+3}$ . . . . . $^{1+3+5}$ . . . . . . . $^{1+3+5+7}$
5. 1 + 3 + 5 + 7(=n) + 5 + 3 + 1 = 25
You can also let n = the higher number, and use sum of odds formula : [(n + 1) / 2]^2
Since the n is added only once:
SUM = 2[(n + 1) / 2]^2 - n ..... get my drift?
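A quick computational check of the generalisation discussed above (a sketch, summing the palindromic series directly):

```
# verify 1 + 3 + ... + (2n-1) + ... + 3 + 1 = n^2 + (n-1)^2 for small n
for n in range(2, 10):
    odds = list(range(1, 2 * n, 2))       # first n odd numbers
    lhs = sum(odds) + sum(odds[:-1])      # up to the peak, then back down
    assert lhs == n**2 + (n - 1)**2
print("verified for n = 2..9")
```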
http://www.sagemath.org/doc/reference/graphs/sage/graphs/graph_coloring.html
Graph coloring¶
This module gathers all methods related to graph coloring. Here is what it can do :
Proper vertex coloring
all_graph_colorings() Computes all $$n$$-colorings of a graph
first_coloring() Returns the first vertex coloring found
number_of_n_colorings() Computes the number of $$n$$-colorings of a graph
numbers_of_colorings() Computes the number of colorings of a graph
chromatic_number() Returns the chromatic number of the graph
vertex_coloring() Computes Vertex colorings and chromatic numbers
Other colorings
grundy_coloring() Computes Grundy numbers and Grundy colorings
b_coloring() Computes a b-chromatic numbers and b-colorings
edge_coloring() Compute chromatic index and edge colorings
round_robin() Computes a round-robin coloring of the complete graph on $$n$$ vertices
linear_arboricity() Computes the linear arboricity of the given graph
acyclic_edge_coloring() Computes an acyclic edge coloring of the current graph
AUTHORS:
• Tom Boothby (2008-02-21): Initial version
• Carlo Hamalainen (2009-03-28): minor change: switch to C++ DLX solver
• Nathann Cohen (2009-10-24): Coloring methods using linear programming
Methods¶
class sage.graphs.graph_coloring.Test¶
This class performs randomized testing for all_graph_colorings. Since everything else in this file is derived from all_graph_colorings, this is a pretty good randomized tester for the entire file. Note that for a graph $$G$$, G.chromatic_polynomial() uses an entirely different algorithm, so we provide a good, independent test.
random(tests=1000)¶
Calls self.random_all_graph_colorings(). In the future, if other methods are added, it should call them, too.
TESTS:
```sage: from sage.graphs.graph_coloring import Test
sage: Test().random(1)
```
random_all_graph_colorings(tests=1000)¶
Verifies the results of all_graph_colorings() in three ways:
1. all colorings are unique
2. number of m-colorings is $$P(m)$$ (where $$P$$ is the chromatic polynomial of the graph being tested)
3. colorings are valid – that is, that no two vertices of the same color share an edge.
TESTS:
```sage: from sage.graphs.graph_coloring import Test
sage: Test().random_all_graph_colorings(1)
```
sage.graphs.graph_coloring.acyclic_edge_coloring(g, hex_colors=False, value_only=False, k=0, solver=None, verbose=0)¶
Computes an acyclic edge coloring of the current graph.
An edge coloring of a graph is an assignment of colors to the edges of a graph such that:
• the coloring is proper (no adjacent edges share a color)
• For any two colors $$i,j$$, the union of the edges colored with $$i$$ or $$j$$ is a forest.
The least number of colors such that such a coloring exists for a graph $$G$$ is written $$\chi'_a(G)$$, also called the acyclic chromatic index of $$G$$.
It is conjectured that this parameter can not be too different from the obvious lower bound $$\Delta(G)\leq \chi'_a(G)$$, $$\Delta(G)$$ being the maximum degree of $$G$$, which is given by the first of the two constraints. Indeed, it is conjectured that $$\Delta(G)\leq \chi'_a(G) \leq \Delta(G) + 2$$.
INPUT:
• hex_colors (boolean)
• If hex_colors = True, the function returns a dictionary associating to each color a list of edges (meant as an argument to the edge_colors keyword of the plot method).
• If hex_colors = False (default value), returns a list of graphs corresponding to each color class.
• value_only (boolean)
• If value_only = True, only returns the acyclic chromatic index as an integer value
• If value_only = False, returns the color classes according to the value of hex_colors
• k (integer) – the number of colors to use.
• If k>0, computes an acyclic edge coloring using $$k$$ colors.
• If k=0 (default), computes a coloring of $$G$$ into $$\Delta(G) + 2$$ colors, which is the conjectured general bound.
• If k=None, computes a decomposition using the least possible number of colors.
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• verbose – integer (default: 0). Sets the level of
verbosity. Set to 0 by default, which means quiet.
ALGORITHM:
Linear Programming
EXAMPLE:
The complete graph on 8 vertices cannot be acyclically edge-colored with fewer than $$\Delta+1$$ colors, but it can be colored with $$\Delta+2=9$$:
```sage: from sage.graphs.graph_coloring import acyclic_edge_coloring
sage: g = graphs.CompleteGraph(8)
sage: colors = acyclic_edge_coloring(g)
```
Each color class is of course a matching
```sage: all([max(gg.degree())<=1 for gg in colors])
True
```
These matchings being a partition of the edge set:
```sage: all([ any([gg.has_edge(e) for gg in colors]) for e in g.edges(labels = False)])
True
```
Besides, the union of any two of them is a forest
```sage: all([g1.union(g2).is_forest() for g1 in colors for g2 in colors])
True
```
If one wants to acyclically color a cycle on $$4$$ vertices, at least 3 colors will be necessary. The function raises an exception when asked to color it with only 2:
```sage: g = graphs.CycleGraph(4)
sage: acyclic_edge_coloring(g, k=2)
Traceback (most recent call last):
...
ValueError: This graph can not be colored with the given number of colors.
```
The optimal coloring gives us $$3$$ classes:
```sage: colors = acyclic_edge_coloring(g, k=None)
sage: len(colors)
3
```
sage.graphs.graph_coloring.all_graph_colorings(G, n, count_only=False, hex_colors=False, vertex_color_dict=False)¶
Computes all $$n$$-colorings of the graph $$G$$ by casting the graph coloring problem into an exact cover problem, and passing this into an implementation of the Dancing Links algorithm described by Knuth (who attributes the idea to Hitotumatu and Noshita).
INPUT:
• G - a graph
• n - a positive integer, the number of colors
• count_only – (default: False) when set to True, it returns 1 for each coloring
• hex_colors – (default: False) when set to False, it labels the colors [0,1,..,n-1], otherwise it uses the RGB Hex labeling
• vertex_color_dict – (default: False) when set to True, it returns a dictionary {vertex:color}, otherwise it returns a dictionary {color:[list of vertices]}
The construction works as follows. Columns:
• The first $$|V|$$ columns correspond to a vertex – a $$1$$ in this column indicates that that vertex has a color.
• After those $$|V|$$ columns, we add $$n*|E|$$ columns – a $$1$$ in these columns indicate that a particular edge is incident to a vertex with a certain color.
Rows:
• For each vertex, add $$n$$ rows; one for each color $$c$$. Place a $$1$$ in the column corresponding to the vertex, and a $$1$$ in the appropriate column for each edge incident to the vertex, indicating that that edge is incident to the color $$c$$.
• If $$n > 2$$, the above construction cannot be exactly covered since each edge will be incident to only two vertices (and hence two colors) - so we add $$n*|E|$$ rows, each one containing a $$1$$ for each of the $$n*|E|$$ columns. These get added to the cover solutions “for free” during the backtracking.
Note that this construction results in $$n*|V| + 2*n*|E| + n*|E|$$ entries in the matrix. The Dancing Links algorithm uses a sparse representation, so if the graph is simple, $$|E| \leq |V|^2$$ and $$n <= |V|$$, this construction runs in $$O(|V|^3)$$ time. Back-conversion to a coloring solution is a simple scan of the solutions, which will contain $$|V| + (n-2)*|E|$$ entries, so runs in $$O(|V|^3)$$ time also. For most graphs, the conversion will be much faster – for example, a planar graph will be transformed for $$4$$-coloring in linear time since $$|E| = O(|V|)$$.
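For concreteness, here is a minimal sketch of the 0-1 matrix described above (a hypothetical helper following one reading of the "free" rows; not Sage's actual implementation):

```
def coloring_cover_matrix(vertices, edges, n):
    # exact-cover encoding of n-coloring: |V| vertex columns followed by
    # n*|E| (edge, color) columns, as in the construction above
    V, E = len(vertices), len(edges)
    vindex = {v: i for i, v in enumerate(vertices)}
    eindex = {frozenset(e): i for i, e in enumerate(edges)}
    ncols = V + n * E
    rows = []
    for v in vertices:                       # n rows per vertex
        for c in range(n):
            row = [0] * ncols
            row[vindex[v]] = 1               # "v has a color"
            for e in edges:
                if v in e:                   # "edge e meets color c at v"
                    row[V + n * eindex[frozenset(e)] + c] = 1
            rows.append(row)
    for e in edges:                          # the n*|E| "free" rows
        for c in range(n):
            row = [0] * ncols
            row[V + n * eindex[frozenset(e)] + c] = 1
            rows.append(row)
    return rows
```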
REFERENCES:
http://www-cs-staff.stanford.edu/~uno/papers/dancing-color.ps.gz
EXAMPLES:
```sage: from sage.graphs.graph_coloring import all_graph_colorings
sage: G = Graph({0:[1,2,3],1:[2]})
sage: n = 0
sage: for C in all_graph_colorings(G,3,hex_colors=True):
... parts = [C[k] for k in C]
... for P in parts:
... l = len(P)
... for i in range(l):
... for j in range(i+1,l):
... if G.has_edge(P[i],P[j]):
... raise RuntimeError, "Coloring Failed."
... n+=1
sage: print "G has %s 3-colorings."%n
G has 12 3-colorings.
```
TESTS:
```sage: G = Graph({0:[1,2,3],1:[2]})
sage: for C in all_graph_colorings(G,0): print C
sage: for C in all_graph_colorings(G,-1): print C
Traceback (most recent call last):
...
ValueError: n must be non-negative.
sage: G = Graph({0:[1],1:[2]})
sage: for c in all_graph_colorings(G,2, vertex_color_dict = True): print c
{0: 0, 1: 1, 2: 0}
{0: 1, 1: 0, 2: 1}
sage: for c in all_graph_colorings(G,2,hex_colors = True): print c
{'#00ffff': [1], '#ff0000': [0, 2]}
{'#ff0000': [1], '#00ffff': [0, 2]}
sage: for c in all_graph_colorings(G,2,hex_colors=True,vertex_color_dict = True): print c
{0: '#ff0000', 1: '#00ffff', 2: '#ff0000'}
{0: '#00ffff', 1: '#ff0000', 2: '#00ffff'}
sage: for c in all_graph_colorings(G, 2, vertex_color_dict = True): print c
{0: 0, 1: 1, 2: 0}
{0: 1, 1: 0, 2: 1}
sage: for c in all_graph_colorings(G, 2, count_only=True, vertex_color_dict = True): print c
1
1
```
sage.graphs.graph_coloring.b_coloring(g, k, value_only=True, solver=None, verbose=0)¶
Computes a b-coloring with at most k colors that maximizes the number of colors, if such a coloring exists
Definition :
Given a proper coloring of a graph $$G$$ and a color class $$C$$ such that none of its vertices have neighbors in all the other color classes, one can eliminate color class $$C$$ assigning to each of its elements a missing color in its neighborhood.
Let a b-vertex be a vertex with neighbors in all other color classes. Then one can repeat the above procedure until a coloring is obtained in which every color class contains a b-vertex, in which case none of the color classes can be eliminated with the same idea. So one can define a b-coloring as a proper coloring where each color class has a b-vertex.
In the worst case, after successive applications of the above procedure, one gets a proper coloring that uses a number of colors equal to the b-chromatic number of $$G$$ (denoted $$\chi_b(G)$$): the maximum $$k$$ such that $$G$$ admits a b-coloring with $$k$$ colors.
A useful upper bound for calculating the b-chromatic number is the following. If $$G$$ admits a b-coloring with $$k$$ colors, then there are $$k$$ vertices of degree at least $$k - 1$$ (the b-vertices of each color class). So, if we set $$m(G) = \max\{k \mid G \text{ has } k \text{ vertices of degree at least } k - 1\}$$, we have that $$\chi_b(G) \leq m(G)$$.
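As an illustration of this bound, $$m(G)$$ is easy to compute from the degree sequence (a sketch, not part of Sage):

```
def m_bound(degrees):
    # m(G) = max k such that at least k vertices have degree >= k - 1
    ds = sorted(degrees, reverse=True)
    return max(k for k in range(1, len(ds) + 1) if ds[k - 1] >= k - 1)

print(m_bound([1, 2, 2, 2, 1]))  # P_5: prints 3, matching chi_b(P_5) = 3
```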
Note
This method computes a b-coloring that uses at MOST $$k$$ colors. If this method returns a value equal to $$k$$, it cannot be assumed that $$k$$ is equal to $$\chi_b(G)$$. Meanwhile, if it returns any value $$k' < k$$, this is a certificate that the b-chromatic number of the given graph is $$k'$$.
As $$\chi_b(G)\leq m(G)$$, it can be assumed that $$\chi_b(G) = k$$ if b_coloring(g, k) returns $$k$$ when $$k = m(G)$$.
INPUT:
• k (integer) – Maximum number of colors
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• value_only – boolean (default: True). When set to True, only the number of colors is returned. Otherwise, the pair (nb_colors, coloring) is returned, where coloring is a dictionary associating its color (integer) to each vertex of the graph.
• verbose – integer (default: 0). Sets the level of verbosity. Set to 0 by default, which means quiet.
ALGORITHM:
Integer Linear Program.
EXAMPLES:
The b-chromatic number of a $$P_5$$ is equal to 3:
```sage: from sage.graphs.graph_coloring import b_coloring
sage: g = graphs.PathGraph(5)
sage: b_coloring(g, 5)
3
```
The b-chromatic number of the Petersen Graph is equal to 3:
```sage: g = graphs.PetersenGraph()
sage: b_coloring(g, 5)
3
```
It would have been sufficient to set the value of k to 4 in this case, as $$4 = m(G)$$.
sage.graphs.graph_coloring.chromatic_number(G)¶
Returns the minimal number of colors needed to color the vertices of the graph $$G$$.
EXAMPLES:
```sage: from sage.graphs.graph_coloring import chromatic_number
sage: G = Graph({0:[1,2,3],1:[2]})
sage: chromatic_number(G)
3
sage: G = graphs.PetersenGraph()
sage: G.chromatic_number()
3
```
sage.graphs.graph_coloring.edge_coloring(g, value_only=False, vizing=False, hex_colors=False, solver=None, verbose=0)¶
Properly colors the edges of a graph. See the URL http://en.wikipedia.org/wiki/Edge_coloring for further details on edge coloring.
INPUT:
• g – a graph.
• value_only – (default: False):
• When set to True, only the chromatic index is returned.
• When set to False, a partition of the edge set into matchings is returned if possible.
• vizing – (default: False):
• When set to True, tries to find a $$\Delta + 1$$-edge-coloring, where $$\Delta$$ is equal to the maximum degree in the graph.
• When set to False, tries to find a $$\Delta$$-edge-coloring, where $$\Delta$$ is equal to the maximum degree in the graph. If impossible, tries to find and returns a $$\Delta + 1$$-edge-coloring. This implies that value_only=False.
• hex_colors – (default: False) when set to True, the partition returned is a dictionary whose keys are colors and whose values are the color classes (ideal for plotting).
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• verbose – integer (default: 0). Sets the level of
verbosity. Set to 0 by default, which means quiet.
OUTPUT:
In the following, $$\Delta$$ is equal to the maximum degree in the graph g.
• If vizing=True and value_only=False, return a partition of the edge set into $$\Delta + 1$$ matchings.
• If vizing=False and value_only=True, return the chromatic index.
• If vizing=False and value_only=False, return a partition of the edge set into the minimum number of matchings.
• If vizing=True and value_only=True, should return something, but mainly you are just trying to compute the maximum degree of the graph, and this is not the easiest way. By Vizing’s theorem, a graph has a chromatic index equal to $$\Delta$$ or to $$\Delta + 1$$.
Note
In a few cases, it is possible to find very quickly the chromatic index of a graph, while it remains a tedious job to compute a corresponding coloring. For this reason, value_only = True can sometimes be much faster, and it is a bad idea to compute the whole coloring if you do not need it !
EXAMPLE:
```sage: from sage.graphs.graph_coloring import edge_coloring
sage: g = graphs.PetersenGraph()
sage: edge_coloring(g, value_only=True)
4
```
Complete graphs are colored using the linear-time round-robin coloring:
```sage: from sage.graphs.graph_coloring import edge_coloring
sage: len(edge_coloring(graphs.CompleteGraph(20)))
19
```
sage.graphs.graph_coloring.first_coloring(G, n=0, hex_colors=False)¶
Given a graph, and optionally a natural number $$n$$, returns the first coloring we find with at least $$n$$ colors.
INPUT:
• hex_colors – (default: False) when set to True, the partition returned is a dictionary whose keys are colors and whose values are the color classes (ideal for plotting).
• n – The minimal number of colors to try.
EXAMPLES:
```sage: from sage.graphs.graph_coloring import first_coloring
sage: G = Graph({0: [1, 2, 3], 1: [2]})
sage: first_coloring(G, 3)
[[1, 3], [0], [2]]
```
sage.graphs.graph_coloring.grundy_coloring(g, k, value_only=True, solver=None, verbose=0)¶
Computes the worst case of a first-fit coloring that uses at most $$k$$ colors.
Definition :
A first-fit coloring is obtained by sequentially coloring the vertices of a graph, assigning them the smallest color not already assigned to one of its neighbors. The result is clearly a proper coloring, which usually requires much more colors than an optimal vertex coloring of the graph, and heavily depends on the ordering of the vertices.
The number of colors required by the worst-case application of this algorithm on a graph $$G$$ is called the Grundy number, written $$\Gamma (G)$$.
Equivalent formulation :
Equivalently, a Grundy coloring is a proper vertex coloring such that any vertex colored with $$i$$ has, for every $$j<i$$, a neighbor colored with $$j$$. This can define a Linear Program, which is used here to compute the Grundy number of a graph.
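To make the definition concrete, the first-fit pass itself is just the following greedy procedure (a sketch; Sage computes the Grundy number via the LP above, not by enumerating orderings):

```
def first_fit(order, adj):
    # color vertices greedily along `order`, smallest unused color first
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# P_4 (path 0-1-2-3) with a worst-case ordering: the middle vertex 2 is
# colored last, after its neighbors have received colors 0 and 1
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(max(first_fit([0, 1, 3, 2], adj).values()) + 1)  # prints 3
```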
INPUT:
• k (integer) – Maximum number of colors
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• value_only – boolean (default: True). When set to True, only the number of colors is returned. Otherwise, the pair (nb_colors, coloring) is returned, where coloring is a dictionary associating its color (integer) to each vertex of the graph.
• verbose – integer (default: 0). Sets the level of verbosity. Set to 0 by default, which means quiet.
ALGORITHM:
Integer Linear Program.
EXAMPLES:
The Grundy number of a $$P_4$$ is equal to 3:
```sage: from sage.graphs.graph_coloring import grundy_coloring
sage: g = graphs.PathGraph(4)
sage: grundy_coloring(g, 4)
3
```
The Grundy number of the PetersenGraph is equal to 4:
```sage: g = graphs.PetersenGraph()
sage: grundy_coloring(g, 5)
4
```
It would have been sufficient to set the value of k to 4 in this case, as $$4 = \Delta(G)+1$$.
sage.graphs.graph_coloring.linear_arboricity(g, plus_one=None, hex_colors=False, value_only=False, solver=None, verbose=0)¶
Computes the linear arboricity of the given graph.
The linear arboricity of a graph $$G$$ is the least number $$la(G)$$ such that the edges of $$G$$ can be partitioned into linear forests (i.e. into forests of paths).
Obviously, $$la(G)\geq \lceil \frac {\Delta(G)} 2 \rceil$$.
It is conjectured in [Aki80] that $$la(G)\leq \lceil \frac {\Delta(G)+1} 2 \rceil$$.
INPUT:
• hex_colors (boolean)
• If hex_colors = True, the function returns a dictionary associating to each color a list of edges (meant as an argument to the edge_colors keyword of the plot method).
• If hex_colors = False (default value), returns a list of graphs corresponding to each color class.
• value_only (boolean)
• If value_only = True, only returns the linear arboricity as an integer value.
• If value_only = False, returns the color classes according to the value of hex_colors
• plus_one (integer) – whether to use $$\lceil \frac {\Delta(G)} 2 \rceil$$ or $$\lceil \frac {\Delta(G)+1} 2 \rceil$$ colors.
• If 0, computes a decomposition of $$G$$ into $$\lceil \frac {\Delta(G)} 2 \rceil$$ forests of paths
• If 1, computes a decomposition of $$G$$ into $$\lceil \frac {\Delta(G)+1} 2 \rceil$$ colors, which is the conjectured general bound.
• If plus_one = None (default), computes a decomposition using the least possible number of colors.
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• verbose – integer (default: 0). Sets the level of verbosity. Set
to 0 by default, which means quiet.
ALGORITHM:
Linear Programming
COMPLEXITY:
NP-Hard
EXAMPLE:
Obviously, a square grid has a linear arboricity of 2, as the set of horizontal lines and the set of vertical lines are an admissible partition:
```sage: from sage.graphs.graph_coloring import linear_arboricity
sage: g = graphs.GridGraph([4,4])
sage: g1,g2 = linear_arboricity(g)
```
Each graph is of course a forest:
```sage: g1.is_forest() and g2.is_forest()
True
```
Of maximum degree 2:
```sage: max(g1.degree()) <= 2 and max(g2.degree()) <= 2
True
```
Which constitutes a partition of the whole edge set:
```sage: all([g1.has_edge(e) or g2.has_edge(e) for e in g.edges(labels = None)])
True
```
REFERENCES:
[Aki80] Akiyama, J., Exoo, G., and Harary, F. Covering and packing in graphs III: cyclic and acyclic invariants. Mathematica Slovaca 30 (1980), no. 4, 405–417.
sage.graphs.graph_coloring.number_of_n_colorings(G, n)¶
Computes the number of $$n$$-colorings of a graph
EXAMPLES:
```sage: from sage.graphs.graph_coloring import number_of_n_colorings
sage: G = Graph({0:[1,2,3],1:[2]})
sage: number_of_n_colorings(G,3)
12
```
sage.graphs.graph_coloring.numbers_of_colorings(G)¶
Returns the number of $$n$$-colorings of the graph $$G$$ for $$n$$ from $$0$$ to $$|V|$$.
EXAMPLES:
```sage: from sage.graphs.graph_coloring import numbers_of_colorings
sage: G = Graph({0:[1,2,3],1:[2]})
sage: numbers_of_colorings(G)
[0, 0, 0, 12, 72]
```
sage.graphs.graph_coloring.round_robin(n)¶
Computes a round-robin coloring of the complete graph on $$n$$ vertices.
A round-robin coloring of the complete graph $$G$$ on $$2n$$ vertices ($$V = [0, \dots, 2n - 1]$$) is a proper coloring of its edges such that the edges with color $$i$$ are all the pairs $$(i + j, i - j)$$ (taken modulo $$2n - 1$$) together with the edge $$(2n - 1, i)$$.
If $$n$$ is odd, one obtains a round-robin coloring of the complete graph through the round-robin coloring of the graph with $$n + 1$$ vertices.
INPUT:
• n – the number of vertices in the complete graph.
OUTPUT:
• A CompleteGraph with labelled edges such that the label of each edge is its color.
EXAMPLES:
```sage: from sage.graphs.graph_coloring import round_robin
sage: round_robin(3).edges()
[(0, 1, 2), (0, 2, 1), (1, 2, 0)]
```
```sage: round_robin(4).edges()
[(0, 1, 2), (0, 2, 1), (0, 3, 0), (1, 2, 0), (1, 3, 1), (2, 3, 2)]
```
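For reference, a standalone sketch of the circle method behind this coloring (assuming $$n$$ even; illustrative code, not what round_robin runs internally):

```
def round_robin_colors(n):
    # color i consists of the pairs ((i+j) % (n-1), (i-j) % (n-1))
    # together with the edge (n-1, i)
    assert n % 2 == 0
    colors = {}
    for i in range(n - 1):
        colors[frozenset((n - 1, i))] = i
        for j in range(1, (n - 1) // 2 + 1):
            a, b = (i + j) % (n - 1), (i - j) % (n - 1)
            colors[frozenset((a, b))] = i
    return colors

print(sorted((tuple(sorted(e)), c) for e, c in round_robin_colors(4).items()))
# [((0, 1), 2), ((0, 2), 1), ((0, 3), 0), ((1, 2), 0), ((1, 3), 1), ((2, 3), 2)]
```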
For higher orders, the coloring is still proper and uses the expected number of colors.
```sage: g = round_robin(9)
sage: sum([Set([e[2] for e in g.edges_incident(v)]).cardinality() for v in g]) == 2*g.size()
True
sage: Set([e[2] for e in g.edge_iterator()]).cardinality()
9
```
```sage: g = round_robin(10)
sage: sum([Set([e[2] for e in g.edges_incident(v)]).cardinality() for v in g]) == 2*g.size()
True
sage: Set([e[2] for e in g.edge_iterator()]).cardinality()
9
```
sage.graphs.graph_coloring.vertex_coloring(g, k=None, value_only=False, hex_colors=False, solver=None, verbose=0)¶
Computes the chromatic number of the given graph or tests its $$k$$-colorability. See http://en.wikipedia.org/wiki/Graph_coloring for further details on graph coloring.
INPUT:
• g – a graph.
• k – (default: None) tests whether the graph is $$k$$-colorable. The function returns a partition of the vertex set in $$k$$ independent sets if possible and False otherwise.
• value_only – (default: False):
• When set to True, only the chromatic number is returned.
• When set to False (default), a partition of the vertex set into independent sets is returned if possible.
• hex_colors – (default: False) when set to True, the partition returned is a dictionary whose keys are colors and whose values are the color classes (ideal for plotting).
• solver – (default: None) Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram.
• verbose – integer (default: 0). Sets the level of
verbosity. Set to 0 by default, which means quiet.
OUTPUT:
• If k=None and value_only=False, then return a partition of the vertex set into the minimum possible of independent sets.
• If k=None and value_only=True, return the chromatic number.
• If k is set and value_only=False, return False if the graph is not $$k$$-colorable, and a partition of the vertex set into $$k$$ independent sets otherwise.
• If k is set and value_only=True, test whether the graph is $$k$$-colorable, and return True or False accordingly.
EXAMPLE:
```sage: from sage.graphs.graph_coloring import vertex_coloring
sage: g = graphs.PetersenGraph()
sage: vertex_coloring(g, value_only=True)
3
```
http://mathhelpforum.com/advanced-statistics/89977-continuous-random-variables-their-pdfs.html
# Thread:
1. ## Continuous random variables and their PDFs
The question goes:
Suppose the continuous random variable X has a probability density function (PDF)
$f_{X}(x)=\frac{c}{x^6}, x>1$
i) Show that for X to be a valid PDF, $c$ must be equal to $5$.
ii) Calculate $E(X^6e^{-2X})$.
For part i) I keep getting $-5$, not $5$ for $c$, and I just can't fathom out part ii)
2. Hello,
Originally Posted by chella182
The question goes:
Suppose the continuous random variable X has a probability density function (PDF)
$f_{X}(x)=\frac{c}{x^6}, x>1$
i) Show that for X to be a valid PDF, $c$ must be equal to $5$.
ii) Calculate $E(X^6e^{-2X})$.
For part i) I keep getting $-5$, not $5$ for $c$, and I just can't fathom out part ii)
i) Remember that a pdf has to be a positive function. Hence the answer you're given is more likely to be correct than yours
$1=\int_1^{\infty} \frac{c}{x^6} ~dx=\int_1^\infty cx^{-6} ~dx$
so this is $\left. \frac{c}{-5} \cdot\frac{1}{x^5}\right|_1^{\infty}$
$=\lim_{x\to\infty} \frac{c}{-5}\cdot\frac{1}{x^5} {\color{red}-}\frac{c}{-5}\cdot 1=-\frac{c}{-5}=\frac c5$
just out of curiosity, which minus sign did you forget?
ii) recall that for any continuous function, $\mathbb{E}(h(X))=\int_{\mathbb{R}} h(x)f(x) ~dx$, where f is the pdf of the random variable X.
This formula applies here and you get : $\int_1^\infty x^6e^{-2x} \cdot \frac{5}{x^6} ~dx$
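(Completing the computation: the $x^6$ factors cancel, so $\mathbb{E}(X^6e^{-2X}) = 5\int_1^\infty e^{-2x}~dx = \frac{5}{2}e^{-2}$.)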
3. Fully aware my answer was wrong, hence why I asked
I didn't forget a minus sign either. I wrote it down wrong & confused myself.
I've never seen that formula before (obviously been taught it in a different way), but cheers.
http://physics.stackexchange.com/questions/tagged/classical-mechanics?page=5&sort=votes&pagesize=15
# Tagged Questions
[tag:classical-mechanics] entails the study of the trajectory of bodies under the influence of forces. More specific subtopics are: [tag:newtonian-mechanics], [tag:lagrangian-mechanics], [tag:hamiltonian-mechanics] for point particles and [tag:fluid-dynamics], [tag:statistical-mechanics] and ...
1answer
137 views
### In $\textbf{f} = -\boldsymbol{\nabla} u$, what is $u$?
I know that force is the negative gradient of the potential: $$\textbf{f} = -\boldsymbol{\nabla} u$$ where force $\textbf{f}$ is a vector and $u$ is a scalar. This is a relatively soft question, ...
3answers
1k views
### What is the physical meaning of diffusion coefficient?
In Fick's first law, the diffusion coefficient is velocity, but I do not understand the two-dimensional concept of this velocity. Imagine that solutes are diffusing from one side of a tube to another ...
1answer
375 views
### Can relativistic kinetic energy be derived from Newtonian kinetic energy?
Relativistic kinetic energy is usually derived by assuming a scalar quantity is conserved in an elastic collision thought experiment, and deriving the expression for this quantity. To me, it looks ...
1answer
278 views
### The form of Lagrangian for a free particle
I've just registred here, and I'm very glad that finally I have found such a place for questions. I have small question about Classical Mechanics, Lagrangian of a free particle. I just read Deriving ...
### The Double Integrator: Matching velocity and position as quickly as possible with only a limited amount of force available
If a body with mass $m$ begins at position $x_0$ with velocity $v_0$ and experiences a force that varies as a function of time $f(t)$ (and we ignore gravity, friction, and everything else that might ...
### Classical car collision
I have a very confusing discussion with a friend of mine. 2 cars ($car_a$ and $car_b$) of the same mass $m$ are on a collision course. Both cars travel at $50_\frac{km}{h}$ towards each other. They ...
### Quantum $n$-body problem
Is the quantum $n$-body problem as difficult as the classical $n$-body problem? Or quantum mechanics allows to get a simpler exact solution? Suppose there are 3 particles with uniform potential ...
### Finding the tension in rope tied to ladder using the principle of virtual work
A ladder $AB$ of mass $m$ has its ends on a smooth wall and floor (see figure). The foot of the ladder is tied by an inextensible rope of negligible mass to the base $C$ of the wall so the ladder ...
### Orbits for space missions
I am just wondering say if there is an expedition where some astronauts are sent to the moon, how do they choose the trajectory for the spaceshuttle (or whatnot)? I mean there are many possible ...
### Does locality emerge from (classical) Lagrangian mechanics?
Consider a (classical) system of several interacting particles. Can it be shown that, if the Lagrangian of such a system is Lorenz invariant, there cannot be any space-like influences between the ...
### When is the principle of virtual work valid?
The principle of virtual work says that forces of constraint don't do net work under virtual displacements that are consistent with constraints. Goldstein says something I don't understand. He says ...
### Why doesn't phase space contain acceleration/forces?
I'm watching some Physics lectures on the internet by Leonard Susskind: http://www.youtube.com/watch?v=pyX8kQ-JzHI&feature=BFa&list=PL189C0DCE90CB6D81&lf=plpp_video In this lecture, and ...
### What are some interesting calculus of variation problems? [closed]
That I could create as a classical mechanics class project? Other than the classical examples that we see in textbooks (catenary, brachistochrone, Fermat, etc..)
### Complete vs General Integral of first order PDE
The following is an excerpt from Landau's Course on Theoretical Physics Vol.1 Mechanics: ... we should recall the fact that every first-order partial differential equation has a solution depending ...
### D'Alembert's Principle: Where does $-Q_j$ come from?
This is a follow-up question to D'Alembert's Principle and the term containing the reversed effective force. From the second term of Eq. (1.45) \begin{align*} \sum_i{\dot{\mathbf{p}}_i \cdot ...
### what is uniform velocity?
i have a very basic question from school days. what does it mean to say an object is moving with uniform speed? it seems to me now that it should be an unit dependent concept. for example if speed is ...
### Rolling stone on a frictional surface
Consider a spherical rigid stone rotating with angular velocity $\omega$ being dropped vertically onto a horizontal rigid surface with the coefficient of friction $\mu$. Can the stone roll on the ...
### Physical interpretation of Poisson bracket properties
In classical Hamiltonian mechanics evolution of any observable (scalar function on a manifold in hand) is given as $$\frac{dA}{dt} = [A,H]+\frac{\partial A}{\partial t}$$ So Poisson bracket is a ...
### Calculating the path of a ball with spin moving across a table
A ping pong ball is rolling over a smooth (but not frictionless) table. During its travel, a clockwise spin is placed on the ball. The ball's path is changed to move to the right (in perspective from ...
### Intuition behind classical virial theorem
I am continuing to brush up my statistical physics. I just want to gain a better understanding. I have gone through the derivation of the classical virial theorem once more. I have thought about it, ...
### Differences of behaviour of a particle in a box in quantum theory between that in classic physics
Can anyone help me enlist 3 major differences between the quantum and classical physics of the behaviour of a particle in a box? I would like some insight into the differences without solving PDEs ...
### Virtual differentials approach to Euler-Lagrange equation - necessary?
I'm currently teaching myself intermediate mechanics & am really struggling with the d'Alembert-based virtual differentials derivation for the Euler-Lagrange equation. The whole notion of, and ...
### Bowling ball on a rubber sheet
After reading a layman's guide to general relativity, I began to wonder what shape a bowling ball on a large rubber sheet would produce. For simplicity, I would like to assume that Hooke's law applies ...
### Classical Limit of Commutator
In Dirac's book on quantum mechanics (4th ed., pgs 87-88), he seems to give a very elementary argument as to how the commutator [X,P] reduces to the Poisson brackets {x,p} in the limit h_bar->0. ...
### Classical Limit of the Feynman Path Integral
I understand that in the limit that h_bar goes to zero, the Feynman path integral is dominated by the classical path, and then using the stationary phase approximation we can derive an approximation ...
### Connections between classical and quantum mechanics?
I've done basic or introductory mechanics at the level of Resnick and Halliday. I'm currently studying calculus of variations and the Lagrangian formulation of mechanics on my own. I read somewhere ...
### why is mechanical waves faster in denser medium while EM waves slower?
Why is it that mechanical waves/longitudinal waves/sound travel faster in a denser/stiffer medium as in steel compared to say air, while EM waves/trasverse waves/light travels slower in a (optically) ...
### How can you tell a model explosion from the real thing?
Movies and TV shows frequently show buildings being bombed, cars blowing up, etc. Frequently these are really explosions of miniatures filmed up close. Aside from the speed that the explosion ...
### “Work” when biking up a hill
So, when biking, I noticed that when going up hills, it was less tiring if I went up them more quickly. This is not total Work done as is Force * Distance, as that should be the same. But the longer ...
### what's the physical significance of the off-diagonal element in the matrix of moment of inertia
In classical mechanics about rotation of rigid object, the general problem is to study the rotation on a given axis so we need to figure out the moment of inertia around some axes. In 3-dimensional ...
### Conservation of Linear Momentum at the point of collision
This is a pretty basic conceptual question about the conservation of linear momentum. Consider an isolated system of 2 fixed-mass particles of masses $m_1$ and $m_2$ moving toward each other with ...
### A partial differential equation for kinetic energy
The kinetic energy of a point particle of mass $m$ and speed $v$ is $K = \frac{1}{2}mv^2$. An elementary mathematics textbook I saw asked one to show that \frac{\partial K}{\partial ...
### all the 1-dimensional problems in newtonian mechanics are solvable?
i mean given a system with a conserved Energy in one dimension $$E= \frac{p^{2}}{2m}+V(x)$$ then the 'solution' to this problem is implicitly given by t(x)= \frac{1}{2m} ...
### Is there any case in classical mechanics where Newton's (strong) third law doesn't hold?
Is there any case in classical (non relativistic) mechanics where the strong form of Newton's third law does not hold (that is, reaction forces are not collinear)? For example, if we consider a system ...
### Writing equation for amplitude of driven harmonic oscillator in Lorentzian form
This harmonic oscillator is driven and damped, with the form: $$\ddot{x} + \lambda \dot{x} + \omega_0^2 x = A \cos(\omega_d t)$$ Now, I have used the ansatz (guess): $x(t) = B \cos(\omega_d t + ...
### What are the fields in this problem?
In problem 3 of chapter 2 of Landau Lifshitz "Mechanics," I don't understand the meaning of the fields as defined in the following statement: Which components of momentum and angular momentum are ...
### Can I find a potential function in the usual way if the central field contains $t$ in its magnitude?
I'm working on a classical mechanics problem in which the problem states that a particle of mass $m$ moves in a central field of attractive force of magnitude: $$F(r, t) = \frac{k}{r^2}e^{-at}$$ ...
### Particles as a limit of classical field theory
A common academic exercise has been to show that classical mechanics is a limit of quantum mechanics, usually by putting $\hbar \rightarrow 0$. Similarly is it possible to show that a limit to field ...
### Rotating/Translating Disk
I was trying to understand an aspect of rotational dynamics and thought of a problem to help me learn. I'm sure this problem has been considered by countless people in the past, but I'm having some ...
### Conservation of linear and angular momentum
Suppose I have two rigid bodies A and B and they are connected by a spring which is attached off-center (thus possibly causing torques). Due to the spring a force $f$ acts on A and a force $-f$ acts ...
### How do you know if a coordinate is cyclic if its generalized velocity is not present in the Lagrangian?
Goldstein's Classical Mechanics says that a cyclic coordinate is one that doesn't appear in the Lagrangian of the system, even though its generalized velocity may appear in it (emphasis mine). For ...
### Lagrangian of two particles connected with a spring, free to rotate
Two particles of different masses $m_1$ and $m_2$ are connected by a massless spring of spring constant $k$ and equilibrium length $d$. The system rests on a frictionless table and may both oscillate ...
### A Question about Virtual Work related to Newton's Third Law
In describing D'Alembert's principle, the lecture note I was provided with states that the total force $\mathbb F_l$ acting on a particle can be taken as, $$\mathbb F_l=F_l+\sum_mf_{ml}+C_l,$$ ...
### Hamilton's equations in terms of initial conditions
I'm trying to understand the way that Hamilton's equations have been written in this paper. It looks very similar to the usual vector/matrix form of Hamilton's equations, but there is a difference. ...
### Is it possible to break bulletproof glass with your voice?
In The Adventures of Tintin, an opera singer (the Milanese Nightingale) broke a bulletproof glass case using her voice. Is that scientifically possible? From the Wikipedia page, a typical bulletproof ...
### What does this infinitesimal Eulerian change describe?
This is a question I originally posted in math.se which received an answer that was far too mathematically sophisticated for what I wanted; given that basic multivariable calculus was used through out ...
### Is there a Newton's third law for the em field?
There is a momentum associated with the em field that ensures the conservation of total momentum for a system of interacting charges. Can the same be done in an analagous way to ensure Newton's ...
### Are quantum mechanics and determinism actually irreconcilable? [closed]
As a preface, I am not a physicist. I'm simply interested in abstract physics and fundamental principles of the universe and such. As such, if you can provide an answer for the layman (as ...
### Is there a mathematical relationship here or am I looking for relations when there are none?
When I was taking classical mechanics, we dealt a lot with pendulums, and orbiting bodies problems. This lead me to think about the two situations depicted above. Left: Shows two balls of equal mass ...
### Normal Forces and Ferris Wheels
At the moment, I am reading an example problem regarding what was alluded to in the title. In this example problem, they say, "Based on experiences you may have had on a ferris wheel or driving over ...
http://mathoverflow.net/questions/20007?sort=oldest
## Visualizing a complex plane cubic together with the real plane
In Alain Robert's "Elliptic curves: notes from postgraduate lectures given in Lausanne 1971/72", page 11 (available on Google Books unless you already tried to read another chapter), there is a hand-drawn picture of a real 2-dimensional torus and a real plane, which topologically represent the way a complex cubic (with two real components) and the real projective plane sit in the complex projective plane. Taking the picture at face value, one should be able to project an open subset of the complex projective plane to $\mathbb{R}^3$, so that there is some real line $L$ that passes through the "doughnut" defined by the image of the complex cubic.
I tried to reproduce this picture on a computer, using the map $\mathbb{CP}^2\to\mathbb{R}^7$ given by
$(z_1:z_2:z_3)\mapsto(z_2\overline{z_3},z_3\overline{z_1},z_1\overline{z_2},|z_1|^2-|z_2|^2)/(|z_1|^2+|z_2|^2+|z_3|^2)$,
projecting to various $\mathbb{R}^3$s, and looking for $L$ by trial and error; all in vain. Which brings me to....
Questions:
• Is there such a line (the map I used does not send the real projective plane to a plane, so this does not have to be the case even if Robert's picture is correct)?
• Is there an algorithm to find such a line ?
• Is there a "better" way to project an open part of the complex projective plane to $\mathbb{R}^3$ ?
I guess you're rejecting the obvious map to $\mathbb{C}^2 = \mathbb{R}^4$ (normalizing out $z_3$) because the elliptic curve necessarily intersects the $\mathbb{CP}^1$ you removed to make that picture? I suspect that's what Roberts is trying to suggest anyway, since the intersection is just at most a few points. – Dylan Thurston Apr 2 2010 at 4:41
@Dylan: yup - I want to see the entire torus, not the one with the infinity part thrown away. Note that there is one obvious cheat: embedding the elliptic curve in CP^1 times CP^1, which sits nicely in R^6, and projecting; the problem with this approach is that you are losing all hope of seeing the Fubini–Study metric on CP^2. – David Lehavi Apr 2 2010 at 21:22
## 1 Answer
I found the article "Visualizing Elliptic Curves" by Donu Arapura; it is available at the following URL: http://www.math.purdue.edu/~dvb/graph/elliptic.pdf In it he discusses a projection that sends the real part of $x$ to $x_1$ and the real part of $y$ to $x_3$, so it would seem to preserve the entire real plane and any line in it. So this might be useful to you.
Thanks - this is exactly what I was looking for! – David Lehavi May 4 2010 at 7:26
http://math.stackexchange.com/questions/88405/finding-the-decay-constant
# finding the decay constant
Given the following function, how does one rewrite the exponential part of the equation as $e^{-L/L_{0}}$, where $L_{0}$ is the decay constant?
$$f(L)=16\frac{a}{b}\left(1-\frac{a}{b}\right)\exp\left(\frac{-2L}{c}\bigl(2d(b-a)\bigr)^{1/2}\right)$$
I know this problem (demi-vie, i.e. half-life) from school, and it is usually solved by setting $f(L)$ to $\frac{1}{2}$.
So for $f(L)=\frac{1}{2}$ this gives: $$\begin{align*} \frac{1}{32}\frac{b^{2}}{a(b-a)} &=\frac{1}{32}\frac{b}{a}\left(1-\frac{a}{b}\right)^{-1}\\ &= \exp\left(\frac{-2L_{0}}{c}\bigl(2d(b-a)\bigr)^{1/2}\right)\\ \end{align*}$$
$$\Rightarrow \frac{c}{-2(2d(b-a))^{1/2}}\log\left(\frac{b^{2}}{32a(b-a)}\right) = L_{0}$$
Wikipedia gives another definition of the decay constant: $$t_{1/2}=\frac{\log(2)}{\lambda},$$ where $\lambda$ is the decay constant. So the "correct" $L_{0}$ would be: $$L_{0,\mathrm{corrected}} = \frac{L_{0}}{\log(2)} = \frac{c}{-2(2d(b-a))^{1/2}\log(2)}\log\left(\frac{b^{2}}{32a(b-a)}\right)$$
Now the exp part in $f(L)$ should be rewritten as $e^{-L/L_{0}}$. I don't see how to achieve that. Does somebody see how this is possible? Thanks.
You write $f(x)$, but I can't quite tell where $x$ is in your exponential expression... – J. M. Dec 5 '11 at 1:00
Sorry for the confusion. – VVV Dec 5 '11 at 1:03
## 1 Answer
I think the problem is that you're including the constants in front of the exponential in your calculations. Decay constants, half-lives etc. only relate to the exponential decay, not to the amplitude. So in this case you simply have
$$f(L)=16\frac{a}{b}\left(1-\frac{a}{b}\right)\exp\left(-L/L_0\right)$$
with the characteristic decay length
$$L_0=\frac c{2\left(2d(b-a)\right)^{1/2}}\;,$$
or
$$f(L)=16\frac{a}{b}\left(1-\frac{a}{b}\right)\exp\left(-\lambda L\right)$$
with the decay rate
$$\lambda=\frac{2\left(2d(b-a)\right)^{1/2}}c\;.$$
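As a sanity check, here is a small Python snippet (my own addition, with arbitrary placeholder values for $a$, $b$, $c$, $d$) confirming that the original exponential and the $e^{-L/L_0}$ form agree:

```python
import math

# Arbitrary placeholder values (not from the question); we need b > a > 0.
a, b, c, d = 1.0, 3.0, 2.0, 5.0
L = 0.7

L0 = c / (2.0 * (2.0 * d * (b - a)) ** 0.5)  # characteristic decay length

amplitude = 16 * (a / b) * (1 - a / b)
original = amplitude * math.exp(-2 * L / c * (2 * d * (b - a)) ** 0.5)
rewritten = amplitude * math.exp(-L / L0)

assert math.isclose(original, rewritten)
```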
Thank you, joriki. – VVV Dec 5 '11 at 10:30
http://physics.stackexchange.com/questions/44722/what-exactly-are-hamiltonian-mechanics-and-lagrangian-mechanics/44723
# What exactly are Hamiltonian Mechanics (and Lagrangian mechanics)
• What exactly are Hamiltonian Mechanics (and Lagrangian mechanics)?
I want to self-study QM, and I've heard from most people that Hamiltonian mechanics is a prereq. So I wikipedia'd it and the entry only confused me more. I don't know any differential equations yet, so maybe that's why.
• But what's the difference between Hamiltonian (& Lagrangian mechanics) and Newtonian mechanics?
• And why is Hamiltonian mechanics used for QM instead of Newtonian?
• Also, what would the prereqs for studying Hamiltonian mechanics be?
@Claudius Are there no physics prereqs? Can I jump straight into Hamiltonian mechanics from Newtonian if I understand the math behind it, or would you recommend studying Lagrangian first? – user14445 Nov 20 '12 at 21:26
You should do Lagrangian first, then Hamiltonian. You don’t necessarily need Newtonian for that, but it is nice to derive Newtonian mechanics from Lagrangian/Hamiltonian mechanics, so it might help knowing $F = m a$ (it also helps in recalling the Hamiltonian equations of motion, $F = - \nabla U \equiv \dot p = - \frac{\partial H}{\partial q}$ and $v = \frac{1}{2} \frac{\partial m v^2}{\partial p} = \frac{\partial H}{\partial p}$). – Claudius Nov 20 '12 at 21:31
First watch Leonard Susskind's courses on YouTube: about ten lessons each, one called Classical Mechanics and the other Quantum Mechanics. They are an excellent introduction with extremely simplified, yet serious maths. – Eduardo Guerras Valera Nov 20 '12 at 21:44
## 3 Answers
I'd say there are almost no prerequisites for learning Lagrangian and Hamiltonian mechanics.
First thing to say is that there's almost no difference between them. They're both part of the same overarching framework. Basically it's a convenient way to write down general laws of physics. There's nothing too difficult or scary about it, and it's a lot more elegant than Newtonian theory.
If you have a rough grasp of basic physics, I don't think you need to formally learn Newtonian theory first. I had to as an undergraduate and it was a horrible mess. I've never needed to do anything using purely Newtonian theory since.
You might need to know how to solve differential equations, both ordinary and partial, but it's possible to pick this up as you go along. There's almost no linear algebra needed, so don't worry about that.
If you're looking for a book, the best one is Landau and Lifschitz, Volume I. Their exposition is very clear and concise, ideal in a textbook! Good luck!
Definite vote for Landau and Lifschitz! – Dylan Sabulsky Nov 20 '12 at 22:40
But one definitely needs some linear algebra once one has more than a few degrees of freedom. – Arnold Neumaier Nov 21 '12 at 10:05
In university, here is the way the material was presented to me:
1. Newtonian Mechanics
2. Learn solutions to ODE and PDE
3. Lagrangian Mechanics
4. Hamiltonian Mechanics
This was over two courses, back to back. Hamiltonian and Lagrangian mechanics provide a formalism for looking at problems using a generalized coordinate system with generalized momenta. Hamiltonians and Lagrangians are written in terms of energy, somewhat of a departure from Newtonian mechanics, if I recall properly.
Hamiltonian Mechanics is suitable for quantum mechanics in that one can describe a system's energy in terms of generalized position and momentum. Newtonian mechanics is for macro scale systems, like throwing a baseball. Quantum mechanics is on a much smaller scale. It is the only way I have been taught QM and it is the only way in which I've seen it taught.
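To make that concrete, here is a minimal sketch (my own illustration, not from any of the books mentioned below) that integrates Hamilton's equations $\dot q = \partial H/\partial p$, $\dot p = -\partial H/\partial q$ for a one-dimensional harmonic oscillator:

```python
# Minimal sketch: Hamilton's equations for H(q, p) = p**2/(2*m) + k*q**2/2.
# The oscillator and the semi-implicit Euler stepping are illustrative choices.
m, k, dt = 1.0, 1.0, 1e-3
q, p = 1.0, 0.0  # initial generalized position and momentum

for _ in range(10_000):
    p -= k * q * dt    # dp/dt = -dH/dq
    q += p / m * dt    # dq/dt = +dH/dp

energy = p * p / (2 * m) + 0.5 * k * q * q
print(q, p, energy)  # the energy stays close to its initial value of 0.5
```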
Prereqs for Hamiltonian mechanics would be solving ODEs and PDEs, familiarity with matrix operations, and some linear algebra. This would do well for Hamiltonian mechanics through beginner's quantum mechanics. Books for starting: I would recommend Boas (Mathematical Methods in the Physical Sciences) and Arfken (another math methods book). For classical mechanics, Taylor. For intro QM, Griffiths.
Best of luck!
Please, anyone correct me if I'm wrong or misleading! – Dylan Sabulsky Nov 20 '12 at 21:42
What topics in Linear Algebra are necessary for Hamiltonian Mechanics? I'm currently self-studying LA, so I'd like to know what I actually need. – user14445 Nov 20 '12 at 21:48
You will need LA for QM, but I was introduced to LA before I took CM and it helped. I think matrix operations and such are important. Learning about vector spaces is key. Perhaps I did not say this right, because I think it is worth studying specifically for QM. If you go deep into CM you will use LA, but you can get through Hamiltonian and Lagrangian Mech without it. – Dylan Sabulsky Nov 20 '12 at 22:39
I always felt like I learned Hamiltonian and Lagrangian mechanics without meaning to. The moment you can comprehend the implications of a differential equation (and by that I don't mean just being able to solve it), you kind of grasp Hamiltonian mechanics if you have a solid background in physics. As for that background, you definitely need to know Newtonian mechanics and calculus (you should be able to solve ordinary and partial differential equations), obviously. You can start to self-study QM by the way. I learned quite a bit of QM way before I even heard of Hamiltonian mechanics. However, it is always better to know HM and LM before QM to have the intuition and the math behind most of the basic concepts of QM.
Hamiltonian and Lagrangian mechanics are generally regarded as very similar to one another, if not synonymous. They are both reformulations of classical mechanics; Hamiltonian mechanics is obtained from Lagrangian mechanics by a Legendre transform of the Lagrangian with respect to the generalized velocities.
As learning material I would recommend Classical Mechanics: Hamiltonian and Lagrangian Formalism by A. Deriglazov. It isn't a very good source if you do not know much physics and differential equations; however, if you do, it's a pretty good textbook.
http://mathoverflow.net/questions/35382?sort=votes
## Untrustworthy people picking a random number
Inspired by the party game Mafia, in particular those situations where nobody is clearly innocent or guilty and the group wants to decide on someone random to eliminate.
Suppose n people each have their own personal random number generator (a machine which generates a 0 or 1 at the push of a button, each with equal likelihood), that these random number generators (henceforth RNGs) operate independently from each other, and, most critically, that each RNG can only be read by its owner. They would like to, as a group, decide on one of two courses of action, and they'd like to do so randomly with 50% probability for each choice.
But, a complication: some members of this group are secretly saboteurs, and they have their own preferences for which of the two options to pick. They'll do anything in their power to sway the decision in a particular direction. On the other hand, the non-saboteurs all have one goal in mind: to make this decision process truly unbiased, to wrench the control from those saboteurs. Nobody knows who the saboteurs are (but let's say there aren't very many of them), and nobody knows which of the two options they're trying to sway things towards.
Is there a strategy the group can employ to remove all bias from the selection process? All anyone can do is talk, push the button on his or her own RNG, and tell the results to the others (though they might not believe it).
EDIT: As a further clarification, the players can't all talk at once. So it's not enough for everyone to pick a number and sum them mod 2, since the last person to give their number might be a saboteur.
Clearly it doesn't matter that I made the RNGs binary and had two options for the decision; if your answer is easier to explain with some different number of choices, by all means change it up. – Jonah Ostroff Aug 12 2010 at 18:58
If you want a uniform random integer in [0,K), ask everyone to provide an integer. Add them up and take modulus K. This result will be uniformly random as long as one of the inputs is uniformly random. – Tom Sirgedas Aug 12 2010 at 19:06
I think this (everyone chooses an integer and sums mod K) does not work, since the last person to tell the number could be the saboteur... – Daniel Krenn Aug 12 2010 at 19:10
See Bruce Schneier's Applied Cryptography, which discusses this very problem, and describes the solution mentioned by others below. – Nate Eldredge Aug 12 2010 at 19:32
One thing I find interesting is that the traditional "I flip a coin, you call it in the air" implements this solution for 2 players: if I use a fair coin, you can't cheat me with your choice of call; and if you choose your call randomly, I can't cheat you by using a biased coin. Interestingly, the NFL's coin flip procedure fails in this regard, because the visiting captain calls as the referee flips; the home captain has no way to ensure that the others are not colluding to cheat him. – Nate Eldredge Aug 12 2010 at 19:54
## 6 Answers
The group agrees on a strong hash function H. Each person in sequence generates a number x and reveals H(x). Then each person in sequence reveals x. If all hashes can be verified, then the low bit of the sum of the x's (that is, sum(x) mod 2) is used.
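Just to illustrate the mechanics, here is a minimal Python sketch of this commit-reveal protocol, with SHA-256 standing in for the strong hash H and random 256-bit values as the x's (both choices are mine, for illustration only):

```python
import hashlib
import secrets

def commit(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()

n = 5  # number of players
# Phase 1: each player generates x and reveals only H(x).
xs = [secrets.token_bytes(32) for _ in range(n)]
commitments = [commit(x) for x in xs]

# Phase 2: each player reveals x; everyone checks it against the commitment.
assert all(commit(x) == c for x, c in zip(xs, commitments))

# The collective bit is the parity of the sum of the low bits.
bit = sum(x[-1] & 1 for x in xs) % 2
print(bit)
```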
You beat me to posting this. Note that this implements Aaron Roth's "sealed envelope", and it also addresses Charles' objection: the last player to reveal $H(x)$ certainly determines the outcome by doing so. However, she doesn't know which way she is determining it! – Nate Eldredge Aug 12 2010 at 19:29
I'm still concerned because asking them to agree on H may be just as hard as asking them to choose a random bit. – Dan Brumleve Aug 12 2010 at 19:32
Since there are a small number of saboteurs, perhaps they can put it to a majority vote? – Dan Brumleve Aug 12 2010 at 19:34
Seems like some of the answers here were really about different questions, but I like this one best. – Jonah Ostroff Aug 12 2010 at 20:25
Aw shoot, you guys! The last player to reveal x has heard all the other values of x and knows if the hashes for them can be verified. If he doesn't like the resulting answer that would result if he were to reveal his x, he can just give a bad value of x and make them start the process over (since it won't match his H(x)). Sure, they'll know he's a saboteur and exclude him the second time around, but it still gives the last player a one-time-only veto power over the decision, which is enough bias to invalidate this strategy. So, now what? – Jonah Ostroff Aug 14 2010 at 17:41
What Tom said. Have everyone place their bit into a sealed envelope, open them all up, and take the parity of the bits. (Determine if the sum of the bits is even or odd). So long as at least one of the bits was random, the resulting parity will be random. To see this, imagine it is known that person 1 is honest, but it is unknown whether everyone else is. Imagine that everyone else flips their bits first. This fixes a parity on the other bits, and person 1's bit now (at random) uniquely determines the parity of their sum.
Note: It's important that everyone first place their bits into sealed envelopes before any of them are read, to prevent a dishonest party from specifically choosing their bit at the end to manipulate the sum.
If you don't have an envelope and you believe in one-way functions, you can use a bit commitment scheme: http://en.wikipedia.org/wiki/Commitment_scheme
Edit: Check out this classic paper, "Coin Flipping by Telephone" http://portal.acm.org/citation.cfm?id=1008911
Fine, but I never gave you an envelope. – Jonah Ostroff Aug 12 2010 at 19:13
And how do they reveal their keys, if not in turn? Again the last person can be sneaky. – Jonah Ostroff Aug 12 2010 at 19:16
Edit: What is needed is a "bit commitment" mechanism, which exist contingent on one-way permutations existing – Aaron Aug 12 2010 at 19:22
I don't have enough reputation points to comment on other answers, but a key distinction to make is whether the parties participating in the protocol can perform unbounded computations or not. If they can only perform polynomial time computations, then bit commitment mechanisms exist (probably), and you can do what you want. If they can perform arbitrary computations, then it shouldn't be possible, as observed below. – Aaron Aug 12 2010 at 19:35
If the final outcome doesn't need to be perfectly unbiased, but instead may have a bias of up to say 10 percent, then there are good algorithms, even if the people have unlimited computational power (which renders cryptographic methods ineffective). For example, the following baton-passing algorithm works well. At the start, give the baton to an arbitrary person (say the first person). At each step, the current baton-holder gives the baton to a random person who has not yet held the baton. Whoever receives the baton last gets to make the collective coin flip.
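To make the mechanics concrete, here is a toy Python simulation of the baton-passing protocol (it only simulates honest, uniformly random passing; modeling adversarial strategies is exactly what the analyses cited below address):

```python
import random

def baton_election(n: int) -> int:
    holder = 0                    # give the baton to an arbitrary person
    remaining = set(range(1, n))  # people who have not yet held the baton
    while remaining:
        holder = random.choice(sorted(remaining))
        remaining.remove(holder)
    return holder                 # the last receiver makes the coin flip

print(baton_election(10))
```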
If the number of saboteurs is fewer than $n / \log n$, then this algorithm is known to produce a low-biased coin. Mike Saks proposed and analyzed this algorithm, and Miki Ajtai and Nati Linial refined the analysis. Here are the references:
M. Saks (1989), A robust non-cryptographic protocol for collective coin flipping, SIAM Journal on Discrete Mathematics 2, pages 240-244.
M. Ajtai and N. Linial (1993), The influence of large coalitions, Combinatorica 13, pages 129-145.
You can find a scanned copy of the Ajtai-Linial paper at Linial's homepage: http://www.cs.huji.ac.il/~nati/.
Babu Narayanan and I showed that there exists an algorithm that can handle up to 49 percent of the players being saboteurs and yet still produces a low-biased coin. Here is the reference:
R. Boppana and B. Narayanan (2000), Perfect-Information Leader Election with Optimal Resilience, SIAM Journal on Computing 29:4, pages 1304-1320.
Suppose that there is a process for solving the problem that involves each of $k$ players (possibly including repetition) showing a number, these numbers being combined in a predetermined deterministic fashion. Either the last player can modify the outcome, or the last player can't. In the latter case, look at player $k-1$ and so on; either you eventually get to a player who can alter the outcome or there is no such player. If there is no such player, the outcome is constant. If there is such a player, and that player is a saboteur, then the outcome can be rigged.
So unless the players can determine who the saboteurs are, there is no such process.
I like where this proof is headed. Not sure it's there yet, though: what if the number of players who show a number depends on what those numbers are, e.g. the process stops after some particular pattern is revealed. Then the last player can only "modify the outcome" in that he gives more players (who perhaps aren't saboteurs) the opportunity to do so. – Jonah Ostroff Aug 12 2010 at 19:28
Ah, but that's not such a big deal: even if a saboteur can't certainly change the outcome in that case, he or she still gets to veto an undesired outcome, so the bias is there. Great. – Jonah Ostroff Aug 12 2010 at 19:30
(Someone else back me up on this before I give the check, huh? I'm shaky on these sorts of proofs.) – Jonah Ostroff Aug 12 2010 at 19:30
Of course this only shows the boundaries of what's possible in terms of information-theoretic security. If the parties have only limited computational power and one-way functions exist then there is a solution, as shown by jdb19937; if players can reveal information to each other and there are enough truthful players there's also probably a solution. – Charles Aug 13 2010 at 3:39
In this article it is shown how two parties can select a bit string of length $n$ with entropy close to $n$. One party could be an adversary. More precisely, it is possible to design a protocol that produces a string of entropy $n - O(1)$ in $4 \log^* n$ rounds.
Go to each player in turn and learn what their number is. It is OK if the other players hear. Use your own RNG to decide whether you 'AND' or 'OR' the player's bit with the 'group bit' you have thus far. Because no player knows how their bit will be interpreted, they lose any power over the outcome.
What if the person collecting all these bits is the saboteur? – Jonah Ostroff Aug 15 2010 at 20:13
http://mathoverflow.net/revisions/43581/list
# Are schematic fixed-points of a Cohen-Macaulay scheme Cohen-Macaulay?
I'm not sure how long these iterative questions can go on, but let me try again. Let's say $X$ is a Cohen-Macaulay scheme with an action of $\mathbb{G}_m$ (i.e. if $X$ is affine, a grading on the coordinate ring). Are the schematic fixed points $X^{\mathbb{G}_m}$ of $X$ Cohen-Macaulay?
Of course, by the usual business, we can assume that $X$ is spec of a local ring with homogeneous maximal ideal.
http://physics.stackexchange.com/questions/24746/proving-the-time-evolution-of-momentum-operator
# Proving the time-evolution of momentum operator
In QFT the evolution of the field and momentum operators is given by $\partial_0\varphi=i[H,\varphi]$ and $\partial_0\pi=i[H,\pi]$.
Is it possible to derive these equations from the basic operator commutation relations or are they postulated?
Note: this is a follow-up to Canonical quantization of quantum field
See the "Heisenberg picture". This is just the Schrodinger equation in a different point of view. – Ron Maimon May 3 '12 at 5:33
## 1 Answer
The basic canonical commutation relations are equal time relations, which carry no information about evolution. In the operator formalism, the Heisenberg equations of motion are postulated as evolution equations.
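For completeness, here is the standard one-line computation (my addition, assuming the postulated Heisenberg-picture evolution $\varphi(t)=e^{iHt}\varphi(0)e^{-iHt}$):
$$\partial_0\varphi(t)=iH\,e^{iHt}\varphi(0)e^{-iHt}-e^{iHt}\varphi(0)e^{-iHt}\,iH=i[H,\varphi(t)],$$
and identically for $\pi$; the equal-time commutation relations play no role in this step.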
http://math.stackexchange.com/questions/326511/graphing-x2-3-a-question-of-domain
# Graphing $x^{2/3}$: a question of domain
I'm trying to graph $x^{2/3}$. If I enter $y=x^{2/3}$, my graphing program excludes negatives from the domain; however, if I enter it as either $y=\sqrt[3]{x^2}$ or $y=(x^{1/3})^2$, it includes the negative values of $x$ (plots omitted).
I'd like to understand what's going on. My best guess is that while $x^{2/3}$ is well defined for negative values of $x$ (since the fractional exponent's denominator is an odd integer), the graphing program is being overly cautious in interpreting the expression. But that's just a guess. Any ideas what's happening here?
Thanks for the responses. As it seems this is a duplicate of a previously posted question, I'll vote to close this.
What about $x^{4/6}$? – Hagen von Eitzen Mar 10 at 15:12
I can't find it, but there was an earlier post with this exact problem: graphing $x^{2/3}$, with the same outcomes, depending on the way the function was keyed in, as in your case. So it's a hardware/software "thing" with the calculator. I think it was a TI-83+/84+/titanium (can't remember that). The post showed the same graphs for the same input. I can picture the post now! – amWhy Mar 10 at 16:16
This boils down to: Never trust your pocket calculator (or other software) to use exactly the definition of a function that you have in mind. – Hagen von Eitzen Mar 10 at 16:19
@HagenvonEitzen You're right about that :) – ivan Mar 10 at 16:30
## 4 Answers
The issue is that there are three sorts of things one could mean when one writes down an exponentiation, and the differences between them become quite significant when you consider negative bases.
Here is an answer I've written to another question that addresses a similar issue.
It likely comes down to the fact that your graphing program is using an approximation for $2/3,$ and said approximation has an even denominator.
I would think if that were the case it would do the same for 1/3, but it doesn't ($x^{1/3}$ handles the negatives no problem). You could be right though. – ivan Mar 10 at 16:09
Can't post this as a comment, but note:
WolframAlpha: $y = x^{2/3}$:
WolframAlpha: $y = \sqrt[3]{x^2}$:
Note: the scales for graphing are different, but there is clearly a difference with respect to how Wolfram interprets each of the inputs. And the first plot adds an additional anomaly.
Here is the reason I think software interprets the two expressions differently:
$$x^{\frac{2}{3}} = \exp\left ( \frac{2}{3} \ln \left ( x \right ) \right )$$ works only if $x>0$, while
$$\sqrt[3]{x^{2}} = \exp\left ( \frac{1}{3} \ln \left ( x^{2} \right ) \right )$$ works for any $x \neq 0$.
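The same split shows up in most languages; here is a quick Python/NumPy illustration (my own example, not from the thread):

```python
import numpy as np

x = -8.0
print(x ** (2 / 3))     # exp((2/3) log x): Python takes the principal
                        # complex branch, giving about -2 + 3.46j, not 4
print(np.cbrt(x ** 2))  # cbrt(x**2): stays real and gives 4.0
```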
http://physics.stackexchange.com/questions/16998/duality-and-fourier-transforms
# Duality and Fourier Transforms [closed]
I read that
$(FF(f))(x)=2\pi f(-x)$, where $F$ is the Fourier transform
and $F(f(x-a))(k)=\exp(-ika) X(k)$ where $X(k)=F(f(x))$
implies $F(\exp(iax)f(x))(k)=X(k-a)$.
But I don't see how that is done... I am quite happy with getting $F^{-1}X(k-a)=\exp(iax)f(x)$ by brute force calculation. I would like to see how to use duality though.
math.stackexchange.com may be! – MBN Nov 15 '11 at 11:58
@MBN: Okay, thanks. – paul Nov 15 '11 at 12:10
## closed as off topic by Colin K, Qmechanic♦, David Zaslavsky♦ Nov 15 '11 at 21:02
## 2 Answers
You need to know the basic Fourier transform delta-function identity
$$\int_{-\infty}^{\infty} e^{ikx} {dk\over 2\pi} = \delta(x)$$
Which implies Fourier inversion. Proving this identity is slightly subtle, because the right-hand side is a distribution, but you can do the integral explicitly over a long interval from $-M$ to $M$ to get an object which has unit integral and whose width shrinks with $M$ as $1/M$, so it must be a delta function in any reasonable sense of limits.
The double Fourier transform is
$$FF(f)(x') = \int e^{ix'k} \left( \int e^{ikx} f(x)\, dx \right) dk$$
And you can do the k integral using the identity to get the result.
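A discrete sanity check of the $FF(f)(x)=2\pi f(-x)$ identity (my own NumPy example; for the DFT the $2\pi$ becomes a factor of $N$ and $-x$ becomes index reversal mod $N$):

```python
import numpy as np

a = np.random.rand(8)
n = len(a)
b = np.fft.fft(np.fft.fft(a)) / n            # apply the forward DFT twice
assert np.allclose(b, a[-np.arange(n) % n])  # b[m] == a[(-m) mod n]
```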
There are quite a few good dualities involving Fourier transforms and they are all really good to know. The one you are dealing with is quite descriptive. First consider an wavefunction in the time domain. Multiplying it by an exponential is like modulating an AM radio signal: it adds two sidebands to the frequency spectrum which are images of the original signal, displaced by plus/minus the multiplying frequency. This is how it works for sine wave modulation: for exponential modulation, its only one sideband, so you get simple displacement.
This is one half of the duality. To bring it full circle, you now have to do the complementary problem: take a signal in the time domain, delay it by a fixed time T, and ask what this does to the spectrum in the frequency domain. Obviously, it has to contain all the same frequencies, at exactly the same amplitude. Multiplication by an exponential...any exponential...obviously does this. So far so good. But that is far from proving the duality. An argument of phase delays can actually tie things together in a fairly convincing way. That's pretty much how I understand it.
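The sideband picture can also be checked discretely (again my own example): multiplying by a single complex exponential circularly shifts the spectrum.

```python
import numpy as np

n, a0 = 64, 5  # signal length and integer frequency shift (arbitrary)
t = np.arange(n)
f = np.random.rand(n)
g = f * np.exp(2j * np.pi * a0 * t / n)  # modulate by one exponential
assert np.allclose(np.fft.fft(g), np.roll(np.fft.fft(f), a0))
```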