http://physics.stackexchange.com/questions/52699/how-can-one-determine-at-which-distance-the-lennard-jones-potential-reaches-a-gi/52704
# How can one determine at which distance the Lennard-Jones potential reaches a given value?
My question is fairly simple, but I do need clarification on how to get the inverse of the Lennard-Jones potential V(x).
I am working with the following expression: $$V(x) = e\times[(R/x)^{12} -2\times(R/x)^6]$$
So given a value $V$, how can I find $x(V)$ ?
-
The rather arbitrary choice of the exponent 12 gives you a hint. Define $y = 1/x^6$ and you got yourself a simple quadratic equation. – Lagerbaer Feb 1 at 0:31
@CrazyBuddy I forgot the definition of the homework tag; I should give one less step in my answer ;). @MeloMCR The homework tag does not apply only to homework; it is there so that people do not give a full answer, so please don't be upset about it. – hwlau Feb 1 at 1:13
@CrazyBuddy I'm not entirely sure I agree with the homework tag designation in this case even given the "definition". Surely you agree that "any question where it is preferable to guide the asker to the answer rather than giving it away outright" is quite subjective? It sounds like MeloMCR's use for the answer is rather research/industry-oriented, does the phrase "question of primarily educational value" really apply here? Isn't that a rather vague phrase anyway? Couldn't every question on here be considered of primarily educational value? – joshphysics Feb 1 at 1:20
## 2 Answers
What you are asking for is the inverse of the function $V(x)$. Using the substitution $z=(R/x)^6$ as suggested, you get the quadratic equation:
$$z^2-2z-V/e=0$$
and the solutions are
$$z_{\pm} = 1 \pm \sqrt{1+V/e}$$
and so
$$x_{\pm}(V) = \frac{R}{(1\pm\sqrt{1+V/e})^{1/6}} \tag{1}$$
Note that there are at most two real solutions (not twelve). Equation (1) requires the condition $z_{\pm}>0$ to hold, which means that $x_{+}$ exists for $V>-e$ and $x_{-}$ exists for $-e<V<0$.
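As an editorial aside, here is a minimal Python sketch (not from the original answer; the parameter values $R = 1$ and $e = 1$ are arbitrary) that implements equation (1) and checks it by plugging the recovered $x$ back into $V(x)$.

```python
import math

def lj_potential(x, R, e):
    """Lennard-Jones potential V(x) = e * ((R/x)**12 - 2*(R/x)**6)."""
    return e * ((R / x) ** 12 - 2 * (R / x) ** 6)

def lj_inverse(V, R, e):
    """Real solutions x of V(x) = V, via the substitution z = (R/x)**6."""
    disc = 1 + V / e
    if disc < 0:
        return []  # V below the well depth -e: no real solution
    roots = []
    for z in (1 + math.sqrt(disc), 1 - math.sqrt(disc)):
        if z > 0:  # z = (R/x)**6 must be positive
            roots.append(R / z ** (1 / 6))
    return roots

R, e = 1.0, 1.0  # arbitrary parameters for the check
for V in (-0.5, 0.3):
    for x in lj_inverse(V, R, e):
        print(V, x, lj_potential(x, R, e))  # last column should reproduce V
```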
-
Hey hwlau. I still suspect that the question will probably get closed, because at a glance it simply asks "How can I find x?" - which makes me suspicious ;-) – Ϛѓăʑɏ βµԂԃϔ Feb 1 at 1:18
@CrazyBuddy Probably; you may flag it if you want. It is just a mathematical question, so it might also be off topic here. After writing a few lines to get the solution, I couldn't resist typing the outline here :) – hwlau Feb 1 at 1:22
Substitute $u=\left(\frac{R}{x}\right)^6$:
$V=e(u^2-2u)$
$u^2-2u-\frac V e=0$
Solve the quadratic equation:
$u_{1,2}=1\pm \sqrt{1+\frac V e}$
Insert into the substitution:
$x_{1,2}=R\, u_{1,2}^{-\frac 1 6}=R \left( 1\pm \sqrt{1+\frac V e}\right)^{-\frac 1 6}$
-
http://mathoverflow.net/questions/31035?sort=votes
## Deeper meanings of barycentric subdivision
I just want to ask if there is any deeper motivation or clear geometric "sense" behind the barycentric subdivision. A friend asked me about this a few months ago; looking back at the section in Hatcher, I still feel quite confused. I remember one friend told me that combinatorially one can do this from posets back to posets, but this does not give me any way to "understand" it properly. In some books (Bredon, for example), the author uses the excision property as one of the axioms, and I'm wondering where it comes from and why it makes any sense.
-
Poincare duality in terms of dual cell subdivision might be interesting for you. – Anweshi Jul 8 2010 at 12:38
There is more to life (as an algebraic topologist) than simplicial homology. In particular fractals probably (depending on your understanding of this word) have homology groups. The keyword is singular homology which generalises simplicial homology (and other types of homology) to arbitrary topological spaces. – Johannes Hahn Jul 8 2010 at 13:12
I think the OP has seen singular homology, and has seen the proof of the excision property in Hatcher's book. – Tom Goodwillie Jul 8 2010 at 13:53
While we are at it, the cubical definition and subdivision in Massey's "Singular homology theory" might be of interest, just to see that you don't need to use simplexes. – Anweshi Jul 8 2010 at 14:53
Hi. I know Poincare duality. But I don't believe one can define singular homology successfully for fractals, for the simple reason that they are not continuous objects. So maybe I should search on this before raising the question. The second part got deleted at Daniel's request. I don't know enough about this to "edit" it, so I deleted it. – Changwei Zhou Jul 9 2010 at 7:10
## 7 Answers
There can be many reasons for subdividing simplices, barycentrically or otherwise.
For a simplicial complex (triangulated space) there are the simplicial homology groups. These are known to be isomorphic to the singular homology groups, therefore (1) invariant under homeomorphism, and in particular (2) invariant under (not necessarily barycentric) subdivision. Before the invention of singular homology, I believe that (1) was unknown. Fact (2) was a key part of the theory. Subdivision is important simply because even if your space is made out of simplices you will sometimes care about subsets which are only unions of simplices after you cut the space up finer. In simplicial homology, excision is an easy algebraic fact, stemming from the fact that when a complex is a union of two subcomplexes then every simplex is in one or the other (or both).
In singular theory, as you know, invariance under homeomorphism is a triviality but excision requires some work. The point is that when a space is a union of two open sets then (bad news) not every singular simplex is in one or the other but (good news) simplices can be systematically replaced by combinations of smaller simplices to show that this does not matter. This is where subdivision is used, and there is no reason it has to be barycentric. It's like with the fundamental group: you might explore a space by using maps of a standard unit interval into it, but in proving the Seifert-Van-Kampen Theorem you might want to subdivide that interval into little pieces.
Barycentric subdivision also arises in PL (piecewise linear) topology in one other specific technical way that has nothing much to do with homology: regular neighborhoods. In a finite simplicial complex $K$, the smallest neighborhood of a given subcomplex $L$ that is itself a subcomplex does not in general have $L$ as a deformation retract, but this becomes true if you first barycentrically subdivide twice.
And in the interplay between categories and simplicial constructions barycentric subdivision turns up in various ways.
ADDED in response to Hatcher's answer and its comment thread:
Yes, there is a way of extending to all $n$ the pattern that begins: cut a segment in half, cut a triangle into four equal pieces using midpoints of edges ... It is sometimes called "edgewise subdivision", I believe. It may be realized for simplicial sets as follows: A simplicial set is a functor $\Delta^{op}\to Set$ where $\Delta$ is the category of standard nonempty ordered finite sets; its subdivision is obtained by composing with (the opposite of) the functor $\Delta\to\Delta$ which takes an ordered set to two copies of that set laid end to end. This leaves the realization unchanged. Applied to a standard $n$-simplex, it gives a certain subdivision with $2^n$ pieces. If $n>2$ then the pieces are not all the same shape. If $n=3$ you get a tetrahedron cut into four scaled-down models of itself sitting in the corners and four more whose union is an octahedron; these four all share an edge, the only internal edge that there is. It's not immediately clear to me what diameter estimate is available for the pieces.
This can be generalized so that you now cut an edge into $k$ equal pieces and a triangle into $k^2$ congruent pieces (almost half of which are upside down) and in general cut an $n$-simplex into $k^n$ pieces. This $k$-fold edgewise subdivision plays a role in the area of cyclic homology and related things: when a simplicial set $X$ has the kind of extra structure that makes it a cyclic set (a suitable action of a cyclic group of order $m$ on the set $X_{m-1}$ for all $m>0$) then its realization has an action of the circle group, and to make the action of the subgroup of order $k$ appear as a simplicial action you can do the $k$-fold edgewise subdivision described above.
There is also another edgewise subdivision. In this one the $1$-simplex is cut in half as before and the $2$-simplex is cut into four pieces in the following way: join the middle vertex to the midpoint of the opposite side, and join that midpoint to the midpoints of both of the other sides. This construction corresponds to the functor $\Delta\to\Delta$ that takes an ordered set to two copies of the same laid end to end but with the order reversed in one copy.
See also my answer to the recent MO question "Endofunctors of the Simplex Category".
The second edgewise subdivision that I described can be used to analyze the relationship between two definitions of algebraic $K$-theory: Quillen's $Q$-construction is essentially a subdivision of Waldhausen's $S$-construction.
-
Thanks! This really helps. I need some time to finish reading this. – Changwei Zhou Jul 9 2010 at 7:17
In the other replies there has been some mention of alternative methods for subdivision besides barycentric subdivision, but these are rarely encountered in algebraic topology. What are some of these other methods, in fact? Preferably they should be natural and canonical, not based on random choices. I dimly recall seeing somewhere (in a paper of Quillen or Segal?) a subdivision method generalizing the simple idea of subdividing a triangle into four triangles by adding new vertices at the midpoints of the three edges, but the generalization to higher dimensions isn't obvious. Does anyone know a reference for this? Another approach might be to use the canonical subdivision of an n-simplex into n+1 cubes, one at each vertex of the simplex, then subdivide each cube into small cubes in the obvious way, then subdivide the small cubes into simplices in some natural way. This seems a bit cumbersome, however.
A drawback of barycentric subdivision is that it takes some work to show that sufficiently many iterations of barycentric subdivision produce arbitrarily small simplices. It would be nice to have a subdivision method for which this was obvious.
-
This could be a very interesting question! Perhaps post it separately though? It's probably hard to answer in comments. – Daniel Litt Jul 8 2010 at 15:38
By the way, I don't know about the Quillen-Segal paper, but I think the generalization of the $2$-simplex subdivision you sketch is not bad. For a $1$-simplex, subdivision is just bisection; for an $n$-simplex, perform the $n-1$-subdivision on all the boundary simplices. This gives the $n-2$-skeleton of the subdivision exactly; you don't need to add anything else in codimension $2$. There's still the matter of signs and the $n, n-1$-skeletons, but I can't imagine it's too hard. Also, I feel a little odd having just analyzed your book's motivation in my answer above. Great text! – Daniel Litt Jul 8 2010 at 16:00
That's one of the ways in which singular homology is easier with cubes than with simplices: there is a natural way to subdivide a cube and it is obvious that sufficiently many iterations produce arbitrarily small cubes. – Michael Hutchings Jul 8 2010 at 16:24
I'm pretty sure this discussion is about what's usually called (Segal's) edge-wise subdivision, and it seems to come up quite a bit actually. A google search turns up a bunch of references. – Dan Ramras Jul 8 2010 at 17:10
There are two kinds of edgewise subdivision. See my edited answer to this question. I think that Segal's is the second one that I mention there. – Tom Goodwillie Jul 8 2010 at 17:59
You can think of a simplex as a finite ordered list (i.e., the vertices). The simplices of its barycentric subdivision are the lists of subsets of the first list, ordered by inclusion.
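To illustrate this description concretely (my own sketch, not part of the answer): the top-dimensional simplices of the barycentric subdivision of an $n$-simplex correspond to maximal chains of nonempty subsets of its vertex set, and a few lines of Python enumerate them and confirm that there are $(n+1)!$ of them.

```python
from itertools import permutations
from math import factorial

def barycentric_top_simplices(n):
    """Top-dimensional simplices of the barycentric subdivision of the n-simplex.

    Each is a maximal chain S_0 < S_1 < ... < S_n of nonempty subsets of the
    vertex set {0, ..., n}, ordered by inclusion.  Such chains correspond to
    orderings of the vertices: S_k is the set of the first k+1 vertices.
    """
    vertices = range(n + 1)
    chains = []
    for order in permutations(vertices):
        chains.append([frozenset(order[: k + 1]) for k in range(n + 1)])
    return chains

for n in range(4):
    chains = barycentric_top_simplices(n)
    assert len(chains) == factorial(n + 1)
    print(n, len(chains))  # 1, 2, 6, 24
```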
-
This is really just a comment/question for Tom (or for any other knowledgeable topologist), but it has got far too long. It's also an attempt, made from a position of almost complete ignorance, to (re)construct an alternative foundation for singular homology :-)
Is the following a correct description of his first "edgewise decomposition" of the $3$-simplex?
Let our standard tetrahedron have vertices (0,0,0), (1,0,0), (1,1,0) and (1,1,1) in $\mathbb{R}^3$. So it consists of the $(x,y,z)$ with $1\ge x\ge y\ge z\ge0$. We split this into eight small tetrahedra. To avoid fractions, let's first double the size so we split the tetrahedron with vertices (0,0,0), (2,0,0), (2,2,0) and (2,2,2) into tetrahedra with vertices:
• (0,0,0), (1,0,0), (1,1,0) and (1,1,1), (*)
• (1,0,0), (2,0,0), (2,1,0) and (2,1,1), (*)
• (1,0,0), (1,1,0), (2,1,0) and (2,1,1),
• (1,0,0), (1,1,0), (1,1,1) and (2,1,1),
• (1,1,0), (2,1,0), (2,2,0) and (2,2,1), (*)
• (1,1,0), (2,1,0), (2,1,1) and (2,2,1),
• (1,1,0), (1,1,1), (2,1,1) and (2,2,1),
• (1,1,1), (2,1,1), (2,2,1) and (2,2,2). (*)
Then the starred tetrahedra are those in the "corners" of the large tetrahedron while the other four share the "internal" edge from (1,1,0) to (2,1,1).
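As a quick numerical check of this list (an editorial addition, not part of the original post): each of the eight small tetrahedra should have exactly one eighth of the volume of the large tetrahedron, which the following Python snippet verifies.

```python
import numpy as np

def tet_volume(v0, v1, v2, v3):
    """Volume of the tetrahedron with the given vertices."""
    m = np.array([np.subtract(v1, v0), np.subtract(v2, v0), np.subtract(v3, v0)], dtype=float)
    return abs(np.linalg.det(m)) / 6.0

big = tet_volume((0, 0, 0), (2, 0, 0), (2, 2, 0), (2, 2, 2))  # = 4/3

pieces = [
    ((0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)),
    ((1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 1, 1)),
    ((1, 0, 0), (1, 1, 0), (2, 1, 0), (2, 1, 1)),
    ((1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 1, 1)),
    ((1, 1, 0), (2, 1, 0), (2, 2, 0), (2, 2, 1)),
    ((1, 1, 0), (2, 1, 0), (2, 1, 1), (2, 2, 1)),
    ((1, 1, 0), (1, 1, 1), (2, 1, 1), (2, 2, 1)),
    ((1, 1, 1), (2, 1, 1), (2, 2, 1), (2, 2, 2)),
]
total = sum(tet_volume(*p) for p in pieces)
assert all(abs(tet_volume(*p) - big / 8) < 1e-12 for p in pieces)
assert abs(total - big) < 1e-12
print(big, total)  # both 1.3333...
```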
How can this be generalized? Take as the standard $n$-simplex $\Delta_n$ the one defined in $\mathbb{R}^n$ by the inequalities $1\ge x_1\ge x_2\ge\cdots\ge x_n\ge0$. Then the unit cube can be decomposed into $n!$ copies of $\Delta_n$ obtained by coordinate permutations. Then $\mathbb{R}^n$ itself can be decomposed into copies of $\Delta_n$ by translating the decomposition of the unit cube by vectors in the integer lattice. Call this our standard decomposition of $\mathbb{R}^n$.
We can now decompose our standard simplex $\Delta_n$ into $k^n$ congruent simplices each similar to $\Delta_n$. Again it's more convenient to scale $\Delta_n$ by a factor of $k$ and then decompose into simplices congruent to $\Delta_n$. But $k\Delta_n$ is a union of $k^n$ simplices in the standard decomposition of $\mathbb{R}^n$, and this does it.
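To check this count in small cases (again an editorial sketch, not part of the original post), the following Python snippet enumerates the cells of the standard decomposition that lie inside $k\Delta_n$ and confirms that there are $k^n$ of them.

```python
from itertools import permutations, product

def cells_in_k_delta(n, k):
    """Count the simplices of the standard decomposition of R^n lying inside k*Delta_n,
    where Delta_n = {1 >= x_1 >= ... >= x_n >= 0}.  Each cell is determined by an
    integer translation a and a permutation sigma; its vertices are a, a + e_sigma(1),
    a + e_sigma(1) + e_sigma(2), and so on."""
    def inside(v):
        return k >= v[0] >= 0 and all(v[i] >= v[i + 1] for i in range(n - 1)) and v[-1] >= 0

    count = 0
    for a in product(range(k), repeat=n):
        for sigma in permutations(range(n)):
            verts = [list(a)]
            v = list(a)
            for i in sigma:
                v = v.copy()
                v[i] += 1
                verts.append(v)
            if all(inside(v) for v in verts):
                count += 1
    return count

for n, k in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    assert cells_in_k_delta(n, k) == k ** n
    print(n, k, cells_in_k_delta(n, k))
```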
What I haven't worked out yet is whether there is an analogue of the barycentric chain maps $S$ and $T$ (in Hatcher's notation), and what those should be. If so, then this decomposition would provide an alternative approach to the excision theorem in singular homology. There would be a couple of advantages over the classical approach.
• The standard simplex $\Delta_n$ lives in $\mathbb{R}^n$ rather than $\mathbb{R}^{n+1}$.
• Repeated subdivision of the standard simplex results in simplices similar to the original, as opposed to repeated barycentric subdivision, which produces a plethora of different-shaped simplices. The technical advantage here is that we immediately see what the diameters of the simplices in our subdivision are, rather than having to prove the $n/(n+1)$ diameter bound that occurs in barycentric subdivision, which takes some effort.
On the other hand, there are disadvantages: for instance, most of the faces of $\Delta_n$ are not congruent to $\Delta_{n-1}$.
Is there some reference where these ideas are fully worked out?
-
The following paper includes a very detailed and elegant description of how to construct edgewise subdivisions that subdivide a $d$-simplex into $k^d$ $d$-simplices, all of the same volume and shape characteristics, for every integer $k\geq 1$:
H. Edelsbrunner and D. R. Grayson, "Edgewise Subdivision of a Simplex", Discrete and Computational Geometry, Volume 24, Number 4, 707-719.
-
I won't address the second part of your question, since I'm not sure it's well-defined. But the first (as I understand it, "where does barycentric subdivision come from?") is a good question.
If you've read Hatcher's proof of the excision theorem, you'll remember he defines, for an open cover $\mathcal{U}$ of $X$, the chain complex $C^\mathcal{U}(X)$ to be the subcomplex of $C(X)$ given by singular simplices whose images are contained in an element of $\mathcal{U}$. He shows that the inclusion $C^\mathcal{U}(X)\to C(X)$ is a homotopy equivalence using barycentric subdivision---and excision, in the second form he states it, is obvious enough for the homology of $C^\mathcal{U}(X)$.
So there are two questions:
(1) What is the motivation for introducing the complex $C^\mathcal{U}(X)$?
(2) Why do we want barycentric subdivision to prove the homotopy equivalence?
Question (2) is easy enough -- we don't actually need barycentric subdivision, we just need something like it. We want to be able to send an arbitrary simplex $\sigma$ to a sum of simplices contained within elements of $\mathcal{U}$, such that the sum in question is homologous to $\sigma$ (i.e. the boundaries cancel out). The obvious thing to do is to apply the Lebesgue number lemma, so we need some way of making simplices smaller by some definite factor. Furthermore, we need the map in question to be a chain map (to commute with the boundary map), which means it has to be built up inductively -- $\partial S=S\partial$ means that restricting the subdivision of an $(n+1)$-simplex to its faces must give the same subdivision as simply subdividing the faces. Barycentric subdivision is an obvious way to do this. You can motivate it yourself by trying to come up with a subdivision satisfying these two criteria for $1$-simplices and $2$-simplices; I bet you'll come up with barycentric subdivision. But it's by no means necessary -- there are any number of similar subdivisions. (You might for example define singular homology via cubes; then there's an extremely obvious subdivision!)
Question (1) is much deeper. Of course, excision in the second form Hatcher gives it suggests an approach like this -- but the intuition is a bit more exciting than that. What this approach means is that homology can be computed "locally"! Introducing singular cohomology, this idea leads directly to sheaf cohomology -- motivationally, if not historically.
-
Hi, to be honest, I don't think your answer is really related to what I'm asking about. These thoughts are what I already had in mind when I wrote down the question here, not the "deeper thoughts" I'm looking for. But thanks for commenting and answering. – Changwei Zhou Jul 9 2010 at 7:03
Could you give an explicit description of "Introducing singular cohomology, this idea leads directly to sheaf cohomology -- motivationally, if not historically."? Thanks. – Arkus Dec 26 at 0:28
I'm not sure whether this is an answer to the question, but I thought I'd share what I feel makes the idea of barycentric subdivision very natural.
On metric spaces one has the crutch of the notion of distance to make sense of what is "small", and then things like the Lebesgue number lemma help one get covers by open sets that are as small as needed. This notion of smallness is hard to emulate on general topological spaces, and here barycentric subdivision bridges the gap by combining the notion of smallness available in Euclidean space (on the simplex) with continuity: if something is small in the Euclidean norm on the simplex, it will map to something small in an arbitrary topological space under a continuous map.
Further, barycentric subdivision exploits the notion of convexity in a nearly optimal way, unlike, say, thinking in terms of cubes. I am not sure how to make this precise, but using cubes instead of tetrahedra in 3D brings in many more maps than necessary.
Homology in some sense depends on less information than it a priori seems to need. For instance, instead of all continuous maps from the simplex to the space, if one takes only those maps which are non-degenerate on some face of the simplex, one still gets the same homology theory. (This is what Massey does.) This kind of thing might be harder to see without barycentric subdivision.
I am not sure what sense it would make if one needed to do infinitely many barycentric subdivisions, since in that "limit" one would end up looking at maps from zero-volume subsets of Euclidean space to the topological space, and wouldn't these simply not admit non-trivial continuous maps? I am not sure how to make this precise.
In "reasonable" topological spaces, for any open cover chosen on the space, the corresponding cover obtained on the simplex has a positive Lebesgue number. Hence one can do finitely many barycentric subdivisions until each piece has diameter less than this number, and so a finite number of barycentric subdivisions should be enough.
It would be illuminating if someone could explain why this might fail for things like fractals, and if it does, how one gets around it.
Also, the notion of barycenter coincides with the idea of "center of mass" if one attaches equal masses to all the vertices of the simplex. This gives quite an intuitive handle. I guess there should be cases where this identification is more tangibly fruitful.
-
http://mathhelpforum.com/number-theory/33404-proof-26-only-number-between-cubed-squared-number.html
1. ## proof that 26 is the only number between a cubed and a squared number
I was reading Singh's Fermat's Last Theorem, and it mentioned that Fermat was the first person to prove that 26 is the only number between a cubed and a squared number. I have been unable to prove this, so I thought I'd ask; apologies if it's been asked before.... Cheers
2. It comes down to proving that $x^2 + 2 = y^3$ has only $x=5$ and $y=3$ as its solutions. By far the best way to prove this is over the unique factorization domain $\mathbb{Z}[\sqrt{-2}] = \{ a+bi\sqrt{2}\mid a,b\in\mathbb{Z} \}$, because you can factor the left-hand side as $(x+i\sqrt{2})(x-i\sqrt{2}) = y^3$.
Of course, Fermat did not in any way prove this using the idea above. Unique factorization in such rings was only developed in the 19th century by Kummer and Kronecker. I have no idea how Fermat, or Euler (who later also solved it), did it using elementary results.
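As an editorial aside, a brute-force search (a sanity check over a finite range, not a proof) finds no other solutions of $x^2 + 2 = y^3$ in positive integers:

```python
from math import isqrt

# Sanity check (not a proof): search x^2 + 2 = y^3 for y up to a bound.
solutions = []
for y in range(1, 10**4):
    n = y ** 3 - 2
    if n < 0:
        continue
    x = isqrt(n)
    if x * x == n:
        solutions.append((x, y))

print(solutions)  # expected: [(5, 3)]
```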
3. Here's a link on the matter. Interesting little algebraic number theory problem.
Math Forum - Ask Dr. Math
4. That maths is beyond me; I think I was deceived by the seemingly simple nature of the problem. But I will look into some of the areas I don't understand. Thanks.
http://unapologetic.wordpress.com/2008/03/03/the-riemann-stieltjes-integral-iii/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician
## The Riemann-Stieltjes Integral III
Last Friday we explained the change of variables formula for Riemann integrals by using Riemann-Stieltjes integrals. Today let’s push it a little further and prove a change of variables formula for Riemann-Stieltjes integrals.
We start with a function $f:\left[a,b\right]\rightarrow\mathbb{R}$ which we assume to be Riemann-Stieltjes integrable by the function $\alpha$. Now, instead of the full generality we used before, let’s just let $g:\left[c,d\right]\rightarrow\left[a,b\right]$ be a strictly increasing continuous function with $g(c)=a$ and $g(d)=b$. Define $h$ and $\beta$ to be the composite functions $h(x)=f(g(x))$ and $\beta(x)=\alpha(g(x))$. Then $h$ is Riemann-Stieltjes integrable by $\beta$ on $\left[c,d\right]$, and we have the equality
$\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=\int\limits_{\left[c,d\right]}hd\beta$
For decreasing functions we get almost the exact same thing, so you should figure out the parallel statement and proof yourself.
Since $g$ is strictly increasing, it must be one-to-one, and it’s onto by assumption. In fact, $g$ is an explicit homeomorphism of the intervals $\left[a,b\right]$ and $\left[c,d\right]$, and its inverse $g^{-1}:\left[a,b\right]\rightarrow\left[c,d\right]$ is also a strictly increasing continuous function. We can now use $g$ and its inverse to set up a bijection between partitions of $\left[a,b\right]$ and $\left[c,d\right]$: if $a=x_0<x_1<...<x_n=b$ is a partition, then $c=g^{-1}(x_0)<g^{-1}(x_1)<...<g^{-1}(x_n)=d$ is a partition, and vice versa. Further, refinements of partitions of one side correspond to refinements of partitions on the other side.
So if we’re given an $\epsilon>0$ then there’s some partition $y_\epsilon$ of $\left[a,b\right]$ so that for any finer partition $y$ we have $|f_{\alpha,y}-\int_{\left[a,b\right]}f\,d\alpha|<\epsilon$. Let $x_\epsilon=g^{-1}(y_\epsilon)$ be the corresponding partition of $\left[c,d\right]$, and let $x$ be a partition of $\left[c,d\right]$ finer than it. Then it’s easily verified that the Riemann-Stieltjes sum $h_{x,\beta}$ is equal to the Riemann-Stieltjes sum $f_{g(x),\alpha}$. Everything else follows quickly from here.
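As a concrete illustration (my own addition, not part of the original post), the following Python sketch approximates both sides of the change of variables formula with Riemann-Stieltjes sums, using the arbitrary choices $f(x)=x^2$, $\alpha(x)=x^3$ on $\left[1,4\right]$ and $g(t)=t^2$ on $\left[1,2\right]$; corresponding partitions give exactly equal sums, which is the key point of the proof above.

```python
import numpy as np

def rs_sum(f, alpha, grid):
    """Riemann-Stieltjes sum of f with respect to alpha over the given partition,
    evaluating f at the left endpoint of each subinterval."""
    grid = np.asarray(grid, dtype=float)
    return float(np.sum(f(grid[:-1]) * (alpha(grid[1:]) - alpha(grid[:-1]))))

# Assumed example data: f(x) = x^2, alpha(x) = x^3 on [1, 4], g(t) = t^2 on [1, 2].
f = lambda x: x ** 2
alpha = lambda x: x ** 3
g = lambda t: t ** 2

h = lambda t: f(g(t))            # composite integrand
beta = lambda t: alpha(g(t))     # composite integrator

t = np.linspace(1.0, 2.0, 100001)   # fine partition of [c, d] = [1, 2]
x = g(t)                             # corresponding partition of [a, b] = [1, 4]

lhs = rs_sum(f, alpha, x)   # approximates the integral of f d(alpha) over [1, 4]
rhs = rs_sum(h, beta, t)    # approximates the integral of h d(beta)  over [1, 2]
print(lhs, rhs)             # equal up to floating point; both approach 613.8
```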
Posted by John Armstrong | Analysis, Calculus
## 12 Comments »
1. I can’t resist making a few comments on Riemann-Stieltjes from the point of view of functional analysis:
(1) The Riemann-Stieltjes integral $\int_{a}^{b} f d\alpha$ is defined whenever $f$ is continuous on $\left[a, b\right]$ and $\alpha$ is of bounded variation on $\left[a, b\right]$.
(2) The Riemann-Stieltjes integral $\int_{a}^{b} f d\alpha$ is defined for all $f$ continuous on $\left[a, b\right]$ only if $\alpha$ is of bounded variation on $\left[a, b\right]$, and is defined for all $\alpha$ of bounded variation on $\left[a, b\right]$ only if $f$ is continuous on $\left[a, b\right]$.
(3) The Riemann-Stieltjes integral satisfies the inequality $|\int_{a}^{b}f d\alpha| \leq ||f|| \cdot V(\alpha)$ where $||f||$ is the sup norm and $V(\alpha)$ is the total variation. It is therefore a continuous or bounded linear map in each of its separate variables $f$ and $\alpha$.
(4) The pairing $(f, \alpha) \mapsto \int_{a}^{b} f d\alpha$ is “perfect” in the sense that it expresses the Banach space of functions of bounded variation (modulo constants) as the strong dual of the Banach space of continuous functions on $\left[a, b\right]$. (However, it doesn’t work the other way: the dual of $BV\left[a, b\right]/$constants is not $C\left[a, b\right]$.)
Normally the space dual to $C\left[a, b\right]$ is expressed in terms of measures, so this gives a nice “concrete” way of thinking about measures (for example, Dirac measures are of the form $d\alpha$ where $\alpha$ is a Heaviside function, which is locally constant but for a single jump discontinuity — the size of the jump gives the weight of the measure).
It also sheds some light on things like the Radon-Nikodym theorem: a measure on $\left[a, b\right]$ which is absolutely continuous with respect to Lebesgue measure $d x$ is of the form $d\alpha$ where $\alpha$ is an absolutely continuous function, and the derivative of $\alpha$ (as in $d\alpha = g(x) d x$) is an $L^1$ function $g(x)$ with respect to Lebesgue measure $d x$. Indeed, differentiation gives a Banach space isomorphism $AC\left[a, b\right]/\mbox{constants} \to L^1\left[a, b\right]$.
I find this approach much gentler and more intuitive than that taken in, e.g., Rudin’s Real and Complex Analysis (which is undeniably efficient and frequently very clever, but brutally abstract in the discussion of things like Radon-Nikodym).
Comment by Todd Trimble | March 4, 2008 | Reply
2. Hmm… all those formulas in my comment above which did not parse are the same thing, where I tried to write the interval [a, b] in LaTeX. With that, it should be readable.
Comment by Todd Trimble | March 4, 2008 | Reply
3. Yeah, the parser here started choking on brackets a while back. You have to use \left[ and \right].
Comment by | March 4, 2008 | Reply
4. Anyhow, these are good points, most of which I’m going to get to eventually. In fact, after I rework integration by parts today I’m going to run through the basics of bounded variation tomorrow through Friday before coming back to more about Riemann-Stieltjes integration once I have that language down.
Comment by | March 4, 2008 | Reply
5. Oh, and the Radon-Nikodym observation is key. That’s one of the reasons I think that the Riemann-Stieltjes theory (and later the Lebesgue-Stieltjes theory) are really useful for understanding what’s really going on.
Comment by | March 4, 2008 | Reply
6. [...] unknown wrote an interesting post today onHere’s a quick excerptLast Friday we explained the change of variables formula for Riemann integrals by using Riemann-Stieltjes integrals. Today let’s push it a little further and prove a change of variables formula for Riemann-Stieltjes integrals. … [...]
Pingback by | March 5, 2008 | Reply
Todd Trimble, could you give me a proof of claim no. 3 (with the total variation)? Or tell me where I can read one?
Many thanks
Comment by D.W. | June 16, 2008 | Reply
8. D.W., it’s covered in lots of places (e.g., Measure and Integral by Wheeden and Zygmund). It has likely been covered here somewhere on this blog, too. To prove it, just note that any Riemann-Stieltjes sum approximating the Riemann-Stieltjes integral,
$\sum_{i=1}^n f(\xi_i)(\alpha(x_i) - \alpha(x_{i-1})),$
has its absolute value bounded above, by the triangle inequality, by
$\sum_{i=1}^n |f(\xi_i)|\,|\alpha(x_i) - \alpha(x_{i-1})|$
where each $|f(\xi_i)|$ is at most $||f||$, the supremum of $|f(x)|$ over the given interval $\left[a, b\right]$. Hence the sum is bounded above by
$||f|| \cdot \sum_{i=1}^n |\alpha(x_i) - \alpha(x_{i-1})|$
and this in turn is bounded above by $||f|| \cdot V(\alpha)$, by definition of the variation $V(\alpha)$.
Comment by Todd Trimble | June 17, 2008 | Reply
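As an editorial aside (not part of the comment thread), here is a quick numerical illustration of the inequality $|\int_a^b f\,d\alpha| \leq ||f|| \cdot V(\alpha)$, with arbitrary example functions of my own choosing: a continuous $f$ and an increasing $\alpha$ with one jump.

```python
import numpy as np

# Illustration of |∫ f dα| <= sup|f| * V(α) with arbitrary example functions
# (my choice, not from the comment): f continuous, α = smooth part + a unit jump.
f = lambda x: np.sin(5 * x)
alpha = lambda x: 0.3 * x ** 2 + np.where(x >= 0.5, 1.0, 0.0)  # jump at x = 0.5

x = np.linspace(0.0, 1.0, 200001)
rs_integral = np.sum(f(x[:-1]) * np.diff(alpha(x)))        # Riemann-Stieltjes sum
sup_f = np.max(np.abs(f(x)))                               # sup norm of f on [0, 1]
total_variation = np.sum(np.abs(np.diff(alpha(x))))        # ≈ V(α) = 0.3 + 1

print(abs(rs_integral), sup_f * total_variation)           # left side <= right side
assert abs(rs_integral) <= sup_f * total_variation + 1e-9
```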
9. I have the difference of two integrals over the whole real axis,
$\int K\,dF_n - \int K\,dF$.
$F_n$ and $F$ are both nonnegative increasing functions, and $K$ is of bounded variation. $F_n$ is discontinuous, $F$ is continuous. For $K$, the limit as $|x|\to\infty$ is 0.
I would like to integrate by parts, so that the term becomes
$\int (F_n-F)\,dK$.
Under which assumptions does this work? I think that the integral is not well defined if $K$ is discontinuous. But can I be sure that it works if $K$ is continuous?
Comment by D.W. | June 18, 2008 | Reply
10. If I understand the question, one will have to pay attention to growth rates as well. For example, if F_n – F grows very rapidly but K(x) tends to 0 slowly as |x| -> \infty, then there is no reason to suppose the integral over the real line will converge. Other than that, I think the main obstruction to the existence of the Riemann-Stieltjes integral will be when the functions F_n – F and K have a common point of discontinuity. In particular, if K is continuous, then under your assumptions we know that F_n – F is of bounded variation on any finite interval [a, b], and therefore the Riemann-Stieltjes integral over that interval exists, and one can perform an integration by parts over that interval without worry. Under further assumptions about growth rates at infinity (e.g. if the product of (F_n – F)(x) and K(x) tends to 0 as |x| -> \infty), the integral over (-\infty, \infty) also exists and again one can perform the integration by parts.
It might be wise to check with your local analyst. I’d have to think a little further to make sure I’m being 100% accurate here, but I think I’m okay.
Comment by Todd Trimble | June 18, 2008 | Reply
11. It might be wise to check with your local analyst.
This is popular (and sound!) advice. People are always telling me that I need an analyst.
Comment by | June 18, 2008 | Reply
12. [...] of Variables in Multiple Integrals I In the one-variable Riemann and Riemann-Stieltjes integrals, we had a “change of variables” formula. This let us replace our variable of [...]
Pingback by | January 5, 2010 | Reply
http://www.physicsforums.com/showthread.php?t=568686
Physics Forums
## What is a photon in respect to electromagnetic waves?
I've always thought that photons and electromagnetic waves are one and the same. And I still do, but I'm trying to get a better grasp on the idea and am finding it difficult.
1) As I understand it, they are the same. But an electromagnetic wave with a definite frequency is a perfect sine wave and so goes on forever. I previously assumed that a single photon corresponded to an electromagnetic wave of definite frequency, but that can't be, or else the photon would extend over an infinite length... so then is a photon an electromagnetic packet? If so, then a single photon must be multiple electromagnetic waves. So then what does a single electromagnetic wave of definite frequency represent?
2) I also thought that the electric field of an electromagnetic wave was related to the probability of a photon's position. If this is so, then the probability of the photon existing in a location is highest at the wave's crests and troughs and zero at the nodes. I'm positive that what I just said is wrong, but it's currently how I visualize it so please someone steer me in the correct direction.
3) Also if the electric field portion of the wave describes the photon's probability, then what does the magnetic field represent?
4) If the EM wave is just an oscillating charge, would I be able to produce an EM wave by moving an electron up and down over and over again? If so would I be producing a photon then?
What I said probably doesn't make any sense... so please if anyone could help correct me...
Quote by khkwang I've always thought that photons and electromagnetic waves are one and the same.
They are not the same so much as different interpretations of the same phenomenon - light. Electromagnetic waves represent the classical physics view of light, and photons are a part of quantum theory's description of light.
Quote by khkwang I previously assumed that a single photon corresponded to an electromagnetic wave of definite frequency, but that can't be, or else the photon would extend over an infinite length... so then is a photon an electromagnetic packet? If so, then a single photon must be multiple electromagnetic waves. So then what does a single electromagnetic wave of definite frequency represent?
A common mistake is to try to understand photons and EM waves as a single entity, part of the same theory. They are not. There are no continuous EM waves in quantum theory, and there are no photons in classical physics.
Quote by khkwang 3) Also if the electric field portion of the wave describes the photon's probability, then what does the magnetic field represent?
In quantum theory, the probability of finding a photon at a certain position is described by its wavefunction, which propagates through space as a wave. The probability density vector of this wave behaves the same way as an EM wave's electric field vector, but they are not the same thing.
Quote by khkwang 4) If the EM wave is just an oscillating charge, would I be able to produce an EM wave by moving an electron up and down over and over again?
You would, yes. This is what happens in a radio antenna for example.
Quote by khkwang If so would I be producing a photon then?
According to classical physics, you will produce a continuous EM wave. According to quantum theory, you will produce lots of photons.
Wow, okay, so I guess that means we're still at a standstill in consolidating the classical and quantum mechanical interpretations? Thank you Hydr0matic, you've helped clear up something very fundamental for me. I do have a couple more questions though, if you're willing to stick around for a moment...
1) In interference experiments such as those involving interferometers, which wave is it that is being measured? The EM wave or the wave function (as you say they are not the same thing)? If it is the EM wave, then it kind of confuses me because in a way it is determining the intensity of the light, which I'm assuming is related to the probability of a photon's arrival. So then it seems like the EM wave is related to the wave function.
2) About the uncertainty principle: if we shoot a photon through an extremely tiny hole, it's my understanding that the possible final locations smear out. But if we were to measure backwards from where the final position was to the position of the hole, we can determine the velocity after leaving the hole AND the position to extreme accuracy. So we are allowed to know both the momentum and position fully at the same time, but only after the fact. Is this allowed?
Quote by khkwang I've always thought that photons and electromagnetic waves are one and the same.
They are not. A photon is a state in the Hilbert space on which the electromagnetic-field operator acts. A photon is NOT an eigenstate of the electromagnetic-field operator, i.e., the value of the electromagnetic field is UNCERTAIN for a photon. Likewise, a state with a certain value of the electromagnetic field contains an uncertain number of photons.
Quote by khkwang Wow, okay, so I guess that means we're still at a standstill in consolidating the classical and quantum mechanical interpretations?
AFAIK, there is no effort to consolidate these two. Quantum theory came about due to the failures of classical physics to explain certain phenomena, most importantly the hydrogen spectra. The hydrogen series and the Rydberg formula were discovered in the 1880s, but it wasn't until 1913 that Bohr proposed his quantized atomic model. You really need to read physics from a historical perspective to understand the current state of theories.
Quote by khkwang 1) In interference experiments such as those involving interferometers, which wave is it that is being measured? The EM wave or the wave function (as you say they are not the same thing)?
What you measure is light. The result of the experiment can then be interpreted or explained with whatever theory you like - classical, quantum theory, etc., or if you're an ancient Greek - "Zeus did it". Of course, some theories are more successful than others, which is why quantum theory is used. The wave function is not a "real" physical entity like an EM wave - see this, and compare with this for example.
Quote by khkwang 2) ... So we are allowed to know both the momentum and position fully at the same time, but only after the fact. Is this allowed?
I would say yes.
Quote by khkwang 1) In interference experiments such as those involving interferometers, which wave is it that is being measured? The EM wave or the wave function (as you say they are not the same thing)? If it is the EM wave, then it kind of confuses me because in a way it is determining the intensity of the light, which I'm assuming is related to the probability of a photon's arrival. So then it seems like the EM wave is related to the wave function. 2) About the uncertainty principle: if we shoot a photon through an extremely tiny hole, it's my understanding that the possible final locations smear out. But if we were to measure backwards from where the final position was to the position of the hole, we can determine the velocity after leaving the hole AND the position to extreme accuracy. So we are allowed to know both the momentum and position fully at the same time, but only after the fact. Is this allowed?
1) An interferometer experiment with waves is different than one with photons. The wave will split and travel both paths simultaneously. Likewise, both detectors will be triggered at the same time by a wave. But with photons the detectors are triggered one at a time, never simultaneously.
The electromagnetic wave is not related to the quantum probability. The electric and magnetic waves satisfy the classical wave equation, which is second order in time. They are real waves that transport energy and momentum through space-time. On the other hand, the quantum wavefunction satisfies Schrodinger’s time dependent equation, which is first order in time and which does not have real solutions. The quantum wavefunctions are necessarily complex, not real. They are defined as complex scalar products in a Hilbert space. As far as we know, they do not have energy and momentum for us to measure.
2) Once the photon hits the detection screen, we have a position measurement result and the experiment is over. Bohr called this “closure”. You can then repeat the experiment, if you like, but there is no “after the (measurement) fact”. Or, you can do a different experiment that determines the position and momentum at the tiny hole, as you suggest. That is, of course, also possible.
The uncertainty principle does not prohibit knowing both momentum and position at the same time. But it does prohibit repeating the same experiment a great many times and always getting the same value for momentum and always getting the same value for position. Then the uncertainty would be zero for both momentum and position in the same experiment. And that is impossible.
Quote by eaglelake 1) An interferometer experiment with waves is different than one with photons. The wave will split and travel both paths simultaneously. Likewise, both detectors will be triggered at the same time by a wave. But with photons the detectors are triggered one at a time, never simultaneously.
This would only be the case if you had a perfect source, perfect identical beamsplitters and perfect identical detectors. In reality, you do not. If you performed this experiment and both detectors were triggered simultaneously, that would not imply that light is a wave. One could simply argue that they were hit by two separate photons. Likewise, if only one detector is triggered, this does not imply that light is quantized. The two split waves will likely never have identical intensity, and if their intensity borders on what is detectable, only one of them will be detected.
Quote by eaglelake The uncertainty principle does not prohibit knowing both momentum and position at the same time. But it does prohibit repeating the same experiment a great many times and always getting the same value for momentum and always getting the same value for position. Then the uncertainty would be zero for both momentum and position in the same experiment. And that is impossible.
I've never heard this interpretation. From wiki:
In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the accuracy with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known. In layman's terms, the more precisely one property is measured, the less precisely the other can be controlled, determined, or known
This says the principle is applicable on a single particle at any given time.
Sorry. I want to know how the following experiment (a faked violation of a Bell inequality) has been regarded since it was published. Faked violations of Bell tests reinforce the importance of closing loopholes.
In the current study, the scientists showed that Eve can send strong, classical light pulses, with a polarization of her choice, into both Alice and Bob’s photon detectors at the same time. This classical light produces photocurrents that are interpreted as photons. Therefore, Alice and Bob are unknowingly measuring classical light pulses, which means that some of the coincidences that they count are not due to quantum entanglement but to Eve’s manipulation.
---
This very bright pulse causes sufficient photocurrent to cross the detection threshold only in one of the four polarizer settings. In the other three polarizer settings, the polarizer partially blocks light, the photocurrent stays below the threshold and the detector remains blind. Thus, Eve classically controls exactly which of the four polarizations Alice and Bob register.
So did this experiment show that classical light can manipulate photon detection in a photodetector (driving it above or below the detection threshold)?
When discussing the fundamental principles of quantum mechanics we always assume ideal experiments to purposely exclude effects caused by any limitations in the devices used in the experiments. It is understood, then, that any non-classical behavior is due to the quantum nature of the experiment and not due to any human error or experimental imperfections.
We are currently able to do real experiments with only one photon in the experimental apparatus at any time. Only one detector is ever triggered. This is an experimental fact. We never see the two detectors in a Mach-Zehnder interferometer, for example, triggered simultaneously when only one photon is present. This seems obvious if there is only one photon available to do anything. A wave, being associated with a continuum, should trigger both detectors at least some of the time, even with imperfect devices.
The effect is more dramatic with photons hitting a detection screen, which is a continuum of detectors. In a one-photon interference experiment, the one photon produces one dot on the screen. We do not observe the total distribution of dots all appearing simultaneously. (As an aside, how would we get a zillion photons from the original one??) The interference pattern is built up one dot at a time, not continuously as a wave would do. The result obtained in a single measurement is always a single dot on the detection screen.
The point is this – Some experiments with light exhibit particle properties. This has been demonstrated in many experiments done over many years. That is what is being discussed here. That is why quantum mechanics was invented. The wave nature of light cannot explain the results of all these experiments. Nor can we explain all such experiments as being due to imperfections in our instruments or due to human ignorance, as you suggest.
Quantum mechanics is indeterminate. There are many different possible results of a quantum measurement and, generally, quantum mechanics does not predict which result will happen. It only predicts the probability of getting each possible result. Each result is an eigenvalue of the observable being measured. For example, if we perform a position measurement, there are very many locations where the particle can be found. If we repeat the same experiment many times we generate a statistical distribution of all the dots that contains the entire eigenvalue spectrum of the position operator. The position does not have a unique value as it does in classical physics, where the same experiment always yields the same result. This is what we mean when we say that the position is uncertain. (Bohr, among others, preferred to say the position is “indeterminate”, which I believe is more descriptive and less confusing.)
The uncertainty principle is a consequence of the purely statistical nature of quantum events. Theoretically, the uncertainty in position is defined to be the root-mean-square deviation from the mean value of all the measurement results, called the standard deviation in ordinary statistics:
$$\Delta x = \sqrt{\langle \psi | (\hat x - \langle \hat x \rangle)^2 | \psi \rangle}$$
Likewise, the uncertainty in momentum is
$$\Delta p_x = \sqrt{\langle \psi | (\hat p_x - \langle \hat p_x \rangle)^2 | \psi \rangle}$$
Notice that the uncertainties depend on the wavefunction $\psi(x)$: every wavefunction gives uncertainties that satisfy $\Delta x\,\Delta p_x \ge \hbar/2$. Notice, also, that the uncertainties depend on the operators $\hat x$ and $\hat p_x$. This is how we calculate uncertainties. It is these definitions, along with the commutation relation $\left[ \hat x,\hat p_x \right] = i\hbar$, that give the uncertainty relation $\Delta x\,\Delta p_x \ge \hbar/2$. I apologize for having to resort to the mathematical formalism, but, in the hope of minimizing misconceptions, the exact definitions should be used in any discussion of the uncertainty principle.
Of course, it is $\left| \psi \right|^2$ that predicts the statistical distribution of all the measurement results. When we see the statistical distribution of many repeated position measurements, we know there is an uncertainty in position. If we always get the same position in repeated measurements, then, when we calculate the position uncertainty, we get $\Delta x = 0$, and we say the position is absolutely certain, as in classical physics. If the statistical spread is clustered tightly around one location, then the position is less uncertain. If the dots are scattered over a wide range of positions, then the position is more uncertain. In any case, you must repeat the experiment many times to determine whether the position is certain ($\Delta x = 0$) or uncertain ($\Delta x \ne 0$).
We intentionally use “certain” in place of words like “accurate” and “precise”, which refer to a single measurement and which can be misleading in this discussion. A single measurement tells us nothing about uncertainties. Uncertainties are not measured values. When a particle hits a detection screen we see a dot on the screen that gives the particle’s position at the instant it hit. We have a value for the position. Yet, there is no way to obtain the uncertainty in position from that number. We repeat, for emphasis, THE UNCERTAINTY PRINCIPLE IS ABOUT UNCERTAINTIES. (I apologize for yelling, but too many discussions ignore this essential fact.) It is not about the actual values of position and momentum obtained in single measurements. The uncertainty principle tells us that there is no experiment, and no wavefunction, for which both position and momentum are certain. The uncertainty principle also states that an uncertainty in position is accompanied by an uncertainty in momentum, but in no case will the product of the uncertainties be less than $\hbar/2$.
The part of the Wiki article you cite is: “In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the accuracy with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known. In layman's terms, the more precisely one property is measured, the less precisely the other can be controlled, determined, or known.” IMHO, a more meaningful statement is: in quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the product of certain pairs of uncertainties, such as the position uncertainty and the momentum uncertainty. There is no experiment in which the product of those uncertainties can be less than $\hbar/2$. The more certain we are of one property, the more uncertain is the other.
Best wishes
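To connect these definitions to something computable, here is a small numerical sketch added as an editorial illustration (not part of the post above), using an arbitrary Gaussian test wavefunction and units where $\hbar = 1$: it evaluates $\Delta x$ and $\Delta p_x$ on a grid and checks that their product is not below $1/2$.

```python
import numpy as np

# Evaluate Δx and Δp_x for a Gaussian wave packet on a grid (ħ = 1).
# The Gaussian saturates the bound, so Δx·Δp_x should come out ≈ 0.5.
hbar = 1.0
sigma = 0.7                          # arbitrary width
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

def expval(op_psi):
    """<psi| op |psi> approximated on the grid."""
    return float(np.real(np.sum(np.conj(psi) * op_psi) * dx))

x_mean = expval(x * psi)
dx_unc = np.sqrt(expval((x - x_mean)**2 * psi))

p_psi = -1j * hbar * np.gradient(psi, dx)                        # p acting on psi
p_mean = expval(p_psi)
p2 = expval(-hbar**2 * np.gradient(np.gradient(psi, dx), dx))    # <p^2>
dp_unc = np.sqrt(p2 - p_mean**2)

print(dx_unc * dp_unc)   # ≈ 0.5, i.e. never below ħ/2 for any wavefunction
```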
Quote by eaglelake When discussing the fundamental principles of quantum mechanics we always assume ideal experiments to purposely exclude effects caused by any limitations in the devices used in the experiments. It is understood, then, that any non-classical behavior is due to the quantum nature of the experiment and not due to any human error or experimental imperfections.
When discussing classical physics and waves, the single detections are caused by the limitations in the devices, so we can't assume ideal experiments.
Quote by eaglelake We are currently able to do real experiments with only one photon in the experimental apparatus at any time. Only one detector is ever triggered. This is an experimental fact.
See section 2.1 in Generation of single photons and correlated photon pairs using InAs quantum dots.
Counts at τ = 0 correspond to events where two photons were detected from the same pulse. Few such events should occur for a good single-photon source. The peaks at τ = nTrep , where n is a nonzero integer, and Trep = 13 ns is the laser repetition period, correspond to events where one photon was detected from each of two different pulses. These peaks can provide information about long-timescale memory effects in the quantum dot, and are useful for normalization. The results in Fig. 2 indicate that the two-photon probability for this device was a factor of 40 smaller than for an equivalent Poisson-distributed source [10].
If what you're saying is in fact "an experimental fact", a concept called "suppression of the two-photon probability" would not exist.
Also see this article (published three days ago) about a new innovative single photon light source. If creating single photons is as easy as you believe, why are physicists still working on improving these sources?
Trick number two: The physicists provided for a proper distribution of the color molecules within the matrix. Too densely packed molecules would have interacted, no longer emitting single independent photons.
Quote by eaglelake The effect is more dramatic with photons hitting a detection screen, which is a continuum of detectors.
A "continuum of detectors" cannot exist. There will always be a limit on resolution, as well as a limit on minimum energy required to excite cells.
Quote by eaglelake In a one-photon interference experiment, the one photon produces one dot on the screen. We do not observe the total distribution of dots all appearing simultaneously. (As an aside, how would we get a zillion photons from the original one??) The interference pattern is built up one dot at a time, not continuously as a wave would do. The result obtained in a single measurement is always a single dot on the detection screen.
You seem to be defining a photon by what a single photon source emits and a single cell in a detector detects. You are aware that light can hit a detector and not set it off? See Quantum efficiency. Just because light waves diffracted through a slit will spread over an area of the detector doesn't mean multiple cells have to go off simultaneously. If a "single photon" source is used, the initial intensity borders on what can be detected even without the diffraction. So if you add diffraction, it's only the peak in the distribution that's intense enough to trigger a detector cell.
Quote by eaglelake The point is this – Some experiments with light exhibit particle properties. This has been demonstrated in many experiments done over many years. That is what is being discussed here. That is why quantum mechanics was invented. The wave nature of light cannot explain the results of all these experiments. Nor can we explain all such experiments as being due to imperfections in our instruments or due to human ignorance, as you suggest.
Where did I suggest that? The OP question was "What is a photon in respect to electromagnetic waves?", not "is light a particle or a wave?". As you can see in a previous post, I already explained that "Quantum theory came about due to the failures of classical physics to explain certain phenomena". But in order to understand the difference between photons and electromagnetic waves you have to explain the concepts and predictions of both theories.
Quote by eaglelake When a particle hits a detection screen we see a dot on the screen that gives the particle’s position at the instant it hit. We have a value for the position. Yet, there is no way to obtain the uncertainty in position from that number.
Why not? According to any "individual system" interpretation (e.g. Copenhagen), the uncertainty in position at that instant would be confined to the extent of the detector cell that was hit.
Quote by eaglelake We repeat, for emphasis, THE UNCERTAINTY PRINCIPLE IS ABOUT UNCERTAINTIES. (I apologize for yelling, but too many discussions ignore this essential fact.) It is not about the actual values of position and momentum obtained in single measurements.
That's one interpretation, one of many. The interpretation you're describing seems to be the Ensemble interpretation. Which, ironically, is the one closest to classical physics where particles have real properties and no wavefunction collapse is necessary. But this is not the most common view, so you shouldn't state it as if it was fact.
Quote by Hydr0matic That's one interpretation, one of many. The interpretation you're describing seems to be the Ensemble interpretation. Which, ironically, is the one closest to classical physics where particles have real properties and no wavefunction collapse is necessary. But this is not the most common view, so you shouldn't state it as if it was fact.
While it is true that there are many interpretations, the ensemble interpretation of the uncertainty principle (UP) is the only interpretation of UP which can be directly tested experimentally. So from an experimental point of view, the ensemble view of UP is more than a mere interpretation; it is an experimental fact!
Quote by Hydr0matic If creating single photons is as easy as you believe, why are physicists still working on improving these sources?
One reason is that most good single photon sources operate at unusual wavelengths, i.e. wavelengths where you have to use expensive and difficult to use equipment (because e.g. it requires cooling to cryogenic temperatures). What is needed for commercial applications are good sources and detectors that work at standard telecom wavelengths so that normal fibre networks can be used. The equipment also needs to be cheap and reliable.
There is also a need for sources and detectors that work at other frequencies (let's say MW frequencies).
Remember that the field of single photonics is mainly driven by cryptography, and that is very much reflected in the work that is done. The foundations of QM are very much a non-issue for most people.
Also, yes, you will always have dark counts etc.; but it is down to the skill and experience of the people performing the experiments to take that into account. There is no such thing as a perfect experiment, and in any real-world situation there are effects that need to be taken into consideration and corrected for.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9438872933387756, "perplexity_flag": "head"}
|
http://mathhelpforum.com/discrete-math/153369-equality-binomial-sums.html
|
# Thread:
1. ## An Equality of Binomial Sums
If $n\ge 0$, show that
$\displaystyle \sum_{k=0}^{n}\binom{2n}{k}\binom{2n-2k}{n-k} = \sum_{k=0}^{n}\binom{2n+1}{k} = \sum_{k=0}^{2n}\binom{2n}{k}$
2. Originally Posted by Vandermonde
If $n\ge 0$, show that
$\displaystyle \sum_{k=0}^{n}\binom{2n}{k}\binom{2n-2k}{n-k} = \sum_{k=0}^{n}\binom{2n+1}{k} = \sum_{k=0}^{2n}\binom{2n}{k}$
This is false for n=2.
3. The first one seems to be the odd one out. The last two evaluate to the same closed form expression. I suppose there is a typo in the first sum.
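For what it's worth, a quick brute-force evaluation (my addition) of the three sums for small $n$ confirms this: they agree for $n \le 1$, but at $n = 2$ the first sum gives 20 against 16 for the other two.

```python
from math import comb

def s1(n):
    return sum(comb(2*n, k) * comb(2*n - 2*k, n - k) for k in range(n + 1))

def s2(n):
    return sum(comb(2*n + 1, k) for k in range(n + 1))

def s3(n):
    return sum(comb(2*n, k) for k in range(2*n + 1))

for n in range(5):
    print(n, s1(n), s2(n), s3(n))
# n = 0: 1 1 1;  n = 1: 4 4 4;  n = 2: 20 16 16  -> the first sum is the odd one out
```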
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467380046844482, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=1194482
|
## intuitive explanation why frequency is fixed
From treating light as a wave, it is possible using Huygens' theory to deduce that the frequency of the light will not change whether in vacuum or some other material. I have seen a mathematical proof of it and understand it, but is there an intuitive explanation for it? Does it match Maxwell's theory of light?
What about this explanation: light is emitted by an accelerating charge that is also changing direction, so after the light is emitted the frequency is fixed but the wavelength changes depending on the material. This explanation is just like treating light as a mechanical wave. Water waves are generated by a vibrator, and if the medium they travel through changes, the frequency is the same but the wavelength changes. Correct? If so, then it's as if Huygens is treating light as a mechanical wave. Which is wrong, as proved by Maxwell? So frequency is really not fixed according to Maxwell?
The boundary conditions of a wave at an interface can only be satisfied at all times if the incident wave and the reflected and transmitted waves all have the same frequency.
Quote by Meir Achuz The boundary conditions of a wave at an interface can only be satisfied at all times if the incident wave and the reflected and transmitted waves all have the same frequency.
Could you give an example?
So not only mechanical (i.e. sound, water) but electromagnetic waves also obey the law that frequency is fixed in any material?
If you tie two strings of differing mass densities together, the end of each string will oscillate at the frequency of the knot. The same is true for the E and B fields at a change of dielectric constant.
Quote by pivoxa15 Could you give an example?
Using Gauss's Law (one of Maxwell's equations), one can show that the component of the electric field E parallel to a boundary between two media must be continuous across the boundary. That is, it can't "jump" discontinuously as you cross the boundary. One can also show that at a boundary that carries no net surface charge, the perpendicular component of E must also be continuous across the boundary. See for example
http://farside.ph.utexas.edu/teachin...es/node59.html
Using Ampere's Law, one can come to similar (but sort of "opposite") conclusions about the magnetic field B: the perpendicular component must always be continuous, and the parallel component must be continuous across a boundary that carries no net surface current.
If an electromagnetic wave had different frequencies on the two sides of a refracting boundary, the E and B fields would, in general, have to be discontinuous at the boundary.
Quote by Meir Achuz If you tie two strings of differing mass densities together, the end of each string will oscillate at the frequency of the knot. The same is true for the E and B fields at a change of dielectric constant.
For a smooth oscillation of the two strings tied together, the place of the knot must oscillate smoothly and each string must also oscillate smoothly. Smooth oscillation implies constant frequency - correct? Hence the knot frequency must match the frequency of both strings. The knot is the end of one string and the start of the other. Therefore the frequencies of the two strings are equal.
When applied to E and B fields upon entering two different media, consider the electric fields E1 and E2 in media 1 and 2 respectively. Even though they are different, they must be tied together and oscillate smoothly so from the above analogy must oscillate at a single constant frequency. Same applies for B.
Quote by jtbell Using Gauss's Law (one of Maxwell's equations), one can show that the component of the electric field E parallel to a boundary between two media must be continuous across the boundary. That is, it can't "jump" discontinuously as you cross the boundary. One can also show that at a boundary that carries no net surface charge, the perpendicular component of E must also be continuous across the boundary. See for example http://farside.ph.utexas.edu/teachin...es/node59.html Using Ampere's Law, one can come to similar (but sort of "opposite") conclusions about the magnetic field B: the perpendicular component must always be continuous, and the parallel component must be continuous across a boundary that carries no net surface current. If an electromagnetic wave had different frequencies on the two sides of a refracting boundary, the E and B fields would have to be usually discontinuous at the boundary.
Could you explain a bit more about how the frequency ties in with this example? Should I be thinking about the E and B wave equations?
Quote by pivoxa15 Should I be thinking about the E and B wave equations?
$$E = E_{max} \sin (kx - \omega t + \phi_0) = E_{max} \sin \left( \frac {2 \pi x}{\lambda} - 2 \pi f t + \phi_0 \right)$$
You have two waves like this, one on each side of the boundary.
$$E_1 = E_{1,max} \sin \left( \frac {2 \pi x}{\lambda_1} - 2 \pi f_1 t + \phi_{01} \right)$$
$$E_2 = E_{2,max} \sin \left( \frac {2 \pi x}{\lambda_2} - 2 \pi f_2 t + \phi_{02} \right)$$
For simplicity, let x = 0 at the boundary so the terms with x disappear. Now suppose $f_1 \ne f_2$. Can you make $E_1 = E_2$ at all times t, while keeping $E_{1,max}$, $E_{2,max}$, $\phi_{01}$ and $\phi_{02}$ constant?
So you first proved that the electric fields in both media must be equal at the boundary for all time. Then one can see that the wave equation for E at a given location has one variable, t. The constant scaling this variable is f. In order to keep both E equal at the boundary for all t, the frequency must be the same for both E (otherwise, as t changes, the two E will differ). The other constants such as Emax, lambda and phi can be different for each E (note that the term $2\pi f t$ will occur in both equations), provided they 'combine' in the end so that both E are equal.

This would be Maxwell's way of showing that frequency is fixed in any medium? So it agrees with Huygens' method of treating light as a water wave. So no matter what sort of wave, the frequency will be the same in any medium while the velocity and wavelength change.
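A tiny numerical illustration of this boundary-matching argument (my addition; the amplitudes, frequencies and phases below are arbitrary choices): if $f_1 \ne f_2$, no constant amplitudes and phases can keep the two fields equal at the boundary at all times.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)

def E_at_boundary(t, E_max, f, phi0):
    # Field at x = 0:  E(0, t) = E_max * sin(-2*pi*f*t + phi0)
    return E_max * np.sin(-2.0 * np.pi * f * t + phi0)

# Equal frequencies: amplitudes and phases can be matched, so the mismatch vanishes
same_f = np.max(np.abs(E_at_boundary(t, 1.0, 2.0, 0.3) - E_at_boundary(t, 1.0, 2.0, 0.3)))

# Different frequencies: whatever constants you pick, the mismatch cannot vanish for all t
diff_f = np.max(np.abs(E_at_boundary(t, 1.0, 2.0, 0.3) - E_at_boundary(t, 0.8, 2.5, 0.1)))

print(same_f)   # 0.0
print(diff_f)   # of order 1 -- continuity at the boundary fails at most times
```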
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302395582199097, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/69141/atkin-lehner-involution-and-class-number/69294
|
## Atkin-Lehner involution and class number
I was told of a relation between the number of fixed points of the Atkin-Lehner involution and the class number of certain number fields.
Can someone point me to a reference where I could learn about it? (I am not very familiar with CM theory so I don't want a one-line proof without further explanation).
Thanks.
-
This might not be very elegant on my side, but you can take a look at section 2 of: www1.iwr.uni-heidelberg.de/groups/arith-geom/… The idea is to interpret the Atkin-Lehner involution on $X_0(p)$ in terms of moduli of elliptic curves, and then use the relationship between fractional ideals of $Q(\sqrt{-p})$ and certain CM elliptic curves. – Tommaso Centeleghe Jun 29 2011 at 20:43
Hi Tommaso, this might be a silly question but why is $\sum = h(-4p)$? Or why is $\sum = h(-p) + h(-4p)$ (in the other case)? – expmat Jun 30 2011 at 15:02
Dear expmat, This is exactly what I'm referring to in my answer. For $X_0(p)$, we must have $D = 1$, $N=p$, $m=p$ and so if $p\equiv 1\bmod 4$, $\mathbf{Z}[\sqrt{-p}]$ is maximal and the number of fixed points is $h(-4p)$. If $p\equiv 3\bmod 4$, an embedding of $\mathbf{Z}[\sqrt{-p}]$ must be optimal either for $\mathbf{Z}[\sqrt{-p}]$ or for $\mathbf{Z}[\dfrac{1 + \sqrt{-p}}{2}]$ so you have to count optimal embeddings of both. Hence the number of fixed points is $h(-p) + h(-4p)$. – stankewicz Jun 30 2011 at 23:30
Regarding the exposition on definite quaternion algebras, you could take a look at Pizer's articles, and I suggest to look at Gross article (Heights and special values of L-series) where he first talks about the optimal embeddings. The idea is that the number of optimal embeddings is roughly speaking the number of bilateral ideals times the class number of the order you are embedding (in case there is such an ideal). So the formula written below gives you $2^t$, where $t$ is the number of prime divisors of ND (the level) which is exactly the number of bilateral ideals (if ND is square free). – A. Pacetti Jul 1 2011 at 12:02
@expmat. If you are interested in fixed points of AL involution on $X_0(p)$ then you should classify elliptic curves over C, up to isomorphism, that admit an endomorphism whose square is -p. If you think about the correspondence between elliptic curves up to isom. and lattices up to homothety then what you are doing is classifying lattices inside C that are stable under multiplication by $\sqrt{-p}$. This leads to the class number interpretation. You will have to consider the class number of all quadratic imaginary orders containing $\sqrt{-p}$. If $p\equiv 3 \bmod 4$ then there are two of them. – Tommaso Centeleghe Jul 2 2011 at 14:28
## 3 Answers
Given that I don't know exactly which relation you're talking about, I'll give you something old and something new:
A priori, asking for a formula for the number of fixed points of Atkin-Lehner is asking for the trace of the matrix representing the Atkin-Lehner involution. Hence you're asking for a trace formula, in particular the Eichler-Selberg trace formula. The original reference for that, featuring many relations between class numbers is
Eichler, M. Modular correspondences and their representations. J. Indian Math. Soc. (N.S.) 20 (1956), 163-206.
On the other hand a more modern view of fixed points of an Atkin-Lehner involution $w_m$ is that they're in bijection with conjugacy classes of embeddings $\mathbf{Z}[\sqrt{-m}] \hookrightarrow \mathcal{O}_0(N)$, the order used to define the Shimura curve $X^D_0(N)$. You said you wanted me to sweep the CM theory under the rug, so I won't elaborate on Shimura curves.
Anyways, this can by done by counting conjugacy classes of optimal embeddings of either $R = \mathbf{Z}[\sqrt{-m}]$ or $\mathbf{Z}[\dfrac{1 + \sqrt{-m}}{2}]$ into your quaternion order.
For counting these things, probably the book of Vigneras is best, but I like the exposition of Santiago Molina here http://www.crm.es/Publications/10/Pr928.pdf or here http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.5217v4.pdf (same paper in section 2)
In particular let's simplify things and say both that $N$ is squarefree and $\mathbf{Z}[\sqrt{-m}]$ is a maximal order so every embedding is optimal. In this case the number of fixed points of $w_m$ is
$$h(-4m)\prod_{p|D}\left(1 -\left(\dfrac{-4m}{p}\right)\right)\prod_{q|N}\left(1 +\left(\dfrac{-4m}{q}\right)\right)$$
-
This is an interesting answer, which I have learned from, but I worry that it is addressing a more difficult question than the intended one. Let me leave a comment that may help: This answer (at least the second part) is about the Atkin-Lehner involution on Shimura curves. My answer is about the Atkin-Lehner involution on modular curves, which are older objects, and are the ones which are most immediately related to modular forms. – David Speyer Jul 2 2011 at 14:23
I learned from Harald Helfgott's thesis (on the arxiv here) that this connection between class numbers and the Fricke involution goes back to Fricke. See his Appendix A.4.
-
Let me try to explain the CM connection, as I think it really is the most intuitive way to understand this. Fear not, this is not a one line answer!
I'll be working with $\mathbb{C}$ throughout. Every elliptic curve over $\mathbb{C}$ is of the form $\mathbb{C}/\Lambda$, where $\Lambda$ is a discrete rank two sublattice of $\mathbb{C}$. This description is not unique: If $\alpha$ is any nonzero complex number, then $\mathbb{C}/\Lambda$ and $\mathbb{C}/\alpha \Lambda$ define the same elliptic curve. If we have two elliptic curves, $\mathbb{C}/\Lambda_1$ and $\mathbb{C}/\Lambda_2$, and a map $\phi$ between them, then there is a unique complex number $\beta$ such that $\beta \Lambda_1 \subseteq \Lambda_2$ and $\phi$ arises as the map which takes the coset $z+\Lambda_1$ to the coset $\beta z + \Lambda_2$.
I'll first explain the basic idea of complex multiplication, and then talk about the Atkin-Lehner case. Complex multiplication is all about describing the possible self maps of an elliptic curve. Consider a complex number $\beta$ and a lattice $\Lambda$. When does multiplication by $\beta$, as a map from $\mathbb{C}$ to $\mathbb{C}$, descend to a map from $\mathbb{C}/\Lambda$ to $\mathbb{C}/\Lambda$? This happens if and only if $\beta \Lambda \subseteq \Lambda$.
Now, let's fix $\beta$ and consider which $\Lambda$ have this property. Notice that, if $\lambda$ is a nonzero element of $\Lambda$, and $\theta$ is any element in the ring $\mathbb{Z}[\beta]$, then $\theta \lambda$ is in $\Lambda$. This means that $\mathbb{Z}[\beta] \cdot \lambda$ must form a discrete sublattice of $\Lambda$, so the ring $\mathbb{Z}[\beta]$ must be a discrete sublattice of $\mathbb{C}$.
Case 1: $\mathbb{Z}[\beta]$ is a rank $1$ sublattice of $\mathbb{C}$. In this case, $\beta$ is in $\mathbb{Z}$ and $\beta \Lambda \subseteq \Lambda$ for every $\Lambda$.
Case 2: $\mathbb{Z}[\beta]$ is a rank $2$ sublattice of $\mathbb{C}$. In this case (this is not obvious) $\mathbb{Z}[\beta]$ must either be of the form $\mathbb{Z}[\sqrt{-d}]$ or $\mathbb{Z}[(1+\sqrt{-d})/2]$, where $d>0$ and, in the latter case, $d$ must be $3 \mod 4$. In this case, there are finitely many lattices $\Lambda$ such that $\beta \Lambda \subseteq \Lambda$ (up to treating $\Lambda$ and $\alpha \Lambda$ as equivalent, as mentioned in the second paragraph.). The number of these lattices is more or less the class number of $\mathbb{Q}[\sqrt{-d}]$. (It is exactly this if $d$ is square free and $1$ or $2$ mod $4$; otherwise there are some details to fix up.)
Case 3: $\mathbb{Z}[\beta]$ is not a discrete sublattice of $\mathbb{C}$. As discussed above, in this case there are no $\Lambda$'s for which $\beta \Lambda \subseteq \Lambda$.
Now, for the Atkin-Lehner connection. The modular curve $Y_0(p)$ (the one without the cusps) parameterizes ordered pairs $(\Lambda_1, \Lambda_2)$, where $\Lambda_2$ is an index $p$ sublattice of $\Lambda_1$, and where $(\Lambda_1, \Lambda_2)$ is identified with $(\alpha \Lambda_1, \alpha \Lambda_2)$ for any nonzero complex number $\alpha$.
The Atkin-Lehner involution sends $(\Lambda_1, \Lambda_2)$ to $(\Lambda_2, p \Lambda_1)$. So a fixed point of Atkin-Lehner must correspond to a pair $(\Lambda_1, \Lambda_2)$ such that $(\Lambda_2, p \Lambda_1) = (\alpha \Lambda_1, \alpha \Lambda_2)$ for some $\alpha$. In particular, $$p \Lambda_1 = \alpha (\alpha \Lambda_1) = \alpha^2 \Lambda_1.$$
Set $\gamma = \alpha^2/p$. Then both $\gamma$ and $\gamma^{-1}$ take $\Lambda_1$ to itself, so both $\mathbb{Z}[\gamma]$ and $\mathbb{Z}[\gamma^{-1}]$ are discrete lattices. Looking at the case by case analysis above, one works out that $\gamma$ is one of $1$, $-1$, $\pm i$, $\pm e^{2 \pi i/6}$ and $\pm e^{4 \pi i/6}$. Now, $\alpha = \sqrt{p \gamma}$ and we have $\alpha \Lambda_1 = \Lambda_2 \subset \Lambda_1$. So $\sqrt{p \gamma}$ must also generate a discrete sublattice of $\mathbb{C}$. Looking at the previous list of cases, the only one that survives is $\gamma = -1$ and $\alpha = \sqrt{-p}$.
So the fixed points of Atkin-Lehner come from lattices $\Lambda_1$ such that $\sqrt{-p} \Lambda_1 \subset \Lambda_1$; for each such lattice $\Lambda_1$ we get the fixed point $(\Lambda_1, \sqrt{-p} \Lambda_1)$. Using the previous discussion, the number of such $\Lambda_1$'s is essentially the class number of $\mathbb{Q}(\sqrt{-p})$.
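To make the "essentially the class number" statement concrete, here is a small Python sketch (my addition, not part of the answer) that computes the form class number $h(D)$ for a negative discriminant by counting reduced primitive binary quadratic forms, and evaluates it at $D = -4p$, the discriminant appearing in the fixed-point counts discussed above.

```python
from math import gcd

def class_number(D):
    """Form class number h(D) for a negative discriminant D (D = 0 or 1 mod 4),
    computed by counting reduced primitive forms a*x^2 + b*x*y + c*y^2 with
    b^2 - 4*a*c = D, |b| <= a <= c (and b >= 0 when |b| = a or a = c)."""
    assert D < 0 and D % 4 in (0, 1)
    count = 0
    b = D % 2                      # b must have the same parity as D
    while 3 * b * b <= -D:         # reduced forms satisfy b^2 <= -D/3
        q = (b * b - D) // 4       # q = a * c
        a = max(b, 1)
        while a * a <= q:
            if q % a == 0:
                c = q // a
                if gcd(gcd(a, b), c) == 1:          # primitive forms only
                    count += 1 if (b == 0 or b == a or a == c) else 2
            a += 1
        b += 2
    return count

# Discriminant -4p shows up in the fixed-point counts above
for p in (3, 7, 11, 23, 29):
    print(p, class_number(-4 * p))   # 1, 1, 3, 3, 6
```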
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 129, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309006929397583, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/36815/a-simple-explanation-of-eigenvectors-and-eigenvalues-with-big-picture-ideas-of/36818
|
# A simple explanation of eigenvectors and eigenvalues with 'big picture' ideas of why on earth they matter
A number of areas I'm studying in my degree (not a maths degree) involve eigenvalues and eigenvectors, which have never been properly explained to me. I find it very difficult to understand the explanations given in textbooks and lectures. Does anyone know of a good, fairly simple but mathematical explanation of eigenvectors and eigenvalues on the internet? If not, could someone provide one here?
As well as some of the mathematical explanations, I'm also very interested in 'big picture' answers as to why on earth I should care about eigenvectors/eigenvalues, and what they actually 'mean'.
-
The reasons you might care could depend on what areas you're studying that involve eigenvalues and eigenvectors. Could you please share what those areas are, and possibly what the involvement is? – Jonas Meyer May 3 '11 at 23:27
– Will Jagy May 3 '11 at 23:32
– InterestedGuest May 3 '11 at 23:32
– amWhy May 4 '11 at 0:18
## 2 Answers
To understand why you encounter eigenvalues/eigenvectors everywhere, you must first understand why you encounter matrices and vectors everywhere.
In a vast number of situations, the objects you study and the stuff you can do with them relate to vectors and linear transformations, which are represented as matrices.
So, in many many interesting situations, important relations are expressed as $$\vec{y} = M \vec{x}$$ where $\vec{y}$ and $\vec{x}$ are vectors and $M$ is a matrix. This ranges from systems of linear equations you have to solve (which occurs virtually everywhere in science and engineering) to more sophisticated engineering problems (finite element simulations). It also is the foundation for (a lot of) quantum mechanics. It is further used to describe the typical geometric transformations you can do with vector graphics and 3D graphics in computer games.
Now, it is generally not straightforward to look at some matrix $M$ and immediately tell what it is going to do when you multiply it with some vector $\vec{x}$. Also, in the study of iterative algorithms you need to know something about higher powers of the matrix $M$, i.e. $M^k = M \cdot M \cdots M$, $k$ times. This is a bit awkward and costly to compute in a naive fashion.
For a lot of matrices, you can find special vectors with a very simple relationship between the vector $\vec{x}$ itself and the vector $\vec{y} = M\vec{x}$. For example, if you look at the matrix $\left( \begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right)$, you see that the vector $\left(\begin{array}{c} 1\\ 1\end{array}\right)$ when multiplied with the matrix will just give you that vector again!
For such a vector, it is very easy to see what $M\vec{x}$ looks like, and even what $M^k \vec{x}$ looks like, since, obviously, repeated application won't change it.
This observation is generalized by the concept of eigenvectors. An eigenvector of a matrix $M$ is any vector $\vec{x}$ that only gets scaled (i.e. just multiplied by a number) when multiplied with $M$. Formally, $$M\vec{x} = \lambda \vec{x}$$ for some number $\lambda$ (real or complex depending on the matrices you are looking at).
So, if your matrix $M$ describes a system of some sort, the eigenvectors are those vectors that, when they go through the system, are changed in a very easy way. If $M$, for example, describes geometric operations, then $M$ could, in principle, stretch and rotate your vectors. But eigenvectors only get stretched, not rotated.
The next important concept is that of an eigenbasis. By choosing a different basis for your vector space, you can alter the appearance of the matrix $M$ in that basis. Simply speaking, the $i$-th column of $M$ tells you what the $i$-th basis vector multiplied with $M$ would look like. If all your basis vectors are also eigenvectors, then it is not hard to see that the matrix $M$ is diagonal. Diagonal matrices are a welcome sight, because they are really easy to deal with: Matrix-vector and Matrix-matrix multiplication becomes very efficient, and computing the $k$-th power of a diagonal matrix is also trivial.
I think for a "broad" introduction this might suffice?
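As a concrete numerical companion to this answer (my addition, not part of the original post; it assumes NumPy is available), the following sketch checks the swap-matrix example above and the claim that diagonalization makes matrix powers cheap.

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # the swap matrix from the answer

# Columns of V are eigenvectors: M @ V[:, i] = w[i] * V[:, i]
w, V = np.linalg.eig(M)
print(w)                              # eigenvalues 1 and -1 (in some order)

x = np.array([1.0, 1.0])
print(M @ x)                          # [1. 1.] -- x is an eigenvector with eigenvalue 1

# In an eigenbasis M is diagonal, so powers are easy: M^k = V diag(w^k) V^(-1)
k = 10
print(np.allclose(np.linalg.matrix_power(M, k),
                  V @ np.diag(w**k) @ np.linalg.inv(V)))   # True
```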
-
This is a question I've had myself for many years, so thanks for the excellent summary. Would you have any references that expand the general reasoning you have put forward here? – daven11 May 24 '11 at 13:06
This made it clearer for me: http://www.khanacademy.org/video/linear-algebra--introduction-to-eigenvalues-and-eigenvectors I often find it easier to understand via illustration like this.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9511480331420898, "perplexity_flag": "head"}
|
http://mathhelpforum.com/pre-calculus/15931-rectangle-within-semicircle-print.html
|
# Rectangle Within Semicircle
• June 13th 2007, 01:45 PM
blueridge
Rectangle Within Semicircle
A rectangle is inscribed in a semicircle of radius 2. Let P = (x,y) be the point in quadrant 1 that is a vertex of the rectangle and is on the circle.
(a) Express the area A of the rectangle as a function of x.
(b) Express the perimeter p of the rectangle as a function of x.
• June 13th 2007, 01:58 PM
topsquark
Quote:
Originally Posted by blueridge
A rectangle is inscribed in a semicircle of radius 2. Let P = (x,y) be the point in quadrant 1 that is a vertex of the rectangle and is on the circle.
(a) Express the area A of the rectangle as a function of x.
(b) Express the perimeter p of the rectangle as a function of x.
a) The width (or height or whatever you wish to call it) of the rectangle is x, the length is 2y. So the area A = x(2y) = 2xy.
b) P = 2l + 2w = 2(2y) + 2(x) = 2x + 4y
-Dan
• June 13th 2007, 09:56 PM
earboth
1 Attachment(s)
Quote:
Originally Posted by blueridge
A rectangle is inscribed in a semicircle of radius 2. Let P = (x,y) be the point in quadrant 1 that is a vertex of the rectangle and is on the circle.
(a) Express the area A of the rectangle as a function of x.
(b) Express the perimeter p of the rectangle as a function of x.
Hello,
the circle line of the semicircle is described by:
$y = \sqrt{4-x^2}$ according to the Pythagorean theorem.
If the height of the rectangle is y then the length of the rectangle is 2x.
The area of a rectangle can be calculated by:
$A_{\text{rectangle}} = length \cdot height$. Plug in the terms for length and height:
$A(x) = 2x \cdot \sqrt{4-x^2}\ , 0 \leq x \leq 2$
EDIT: I forgot to show you how to calculate the perimeter of the rectangle:
In general the perimeter is: P = 2*length + 2*height
Using the terms for length and height you get:
$p = 2 \cdot 2x + 2 \cdot \sqrt{4-x^2} = 4x + 2 \cdot \sqrt{4-x^2}$
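A quick numerical sanity check of these formulas (my addition; the use of NumPy and the grid size are arbitrary choices) — it evaluates $A(x)$ and $p(x)$ on $0 \leq x \leq 2$ and, as a bonus, locates the $x$ that maximizes the area.

```python
import numpy as np

def area(x):
    # A(x) = 2x * sqrt(4 - x^2): rectangle inscribed in a semicircle of radius 2
    return 2 * x * np.sqrt(4 - x**2)

def perimeter(x):
    # p(x) = 4x + 2 * sqrt(4 - x^2)
    return 4 * x + 2 * np.sqrt(4 - x**2)

x = np.linspace(0, 2, 100001)
A = area(x)
print(x[np.argmax(A)], A.max())   # maximum area at x = sqrt(2) ~ 1.414, where A = 4
print(perimeter(1.0))             # perimeter at x = 1, for example
```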
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8892513513565063, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-equations/166274-non-homogeneous-pde-non-homogeneous-bc-s-eigenfunction-expansion.html
|
Thread:
1. Non-homogeneous PDE with Non-homogeneous BC's - Eigenfunction Expansion
I have an exam tomorrow, and I came across one question that I don't really know how to start and I was really hoping someone could help me out with it.
Use the method of eigenfunction expansion to obtain a solution to
$u_t = u_{xx} + q(x,t)$
with initial condition:
$u(x,0) = f(x)$
BC's: $u(\pi,t) = u_\pi, u(0,t) = u_0$
where $u_\pi, u_0$ are given constants.
So I need to start with a trial solution based on the homogeneous part of the problem i.e.
$u_t = u_{xx}$, which, if the boundary conditions were something like $u(\pi,t) = 0, u(0,t) = 0$, I would start with a solution
$\displaystyle u(x,t) = \sum_1^{\infty} c_n(t) \sin (n x)$
but I don't know how to determine a trial solution if the BC's are non-homogeneous.
2. You'll need to transform the problem such that the new problem has these kind of BC's. One usually tries
$u = v + ax + b$
and find $a$ and $b$ such that $v(0,t) = 0, v(\pi,t) = 0$.
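Spelling out that last step (my addition, not part of the original reply): the boundary conditions force $b = u_0$ and $a = (u_\pi - u_0)/\pi$, so that $v = u - ax - b$ satisfies

$\displaystyle v_t = v_{xx} + q(x,t), \qquad v(0,t) = v(\pi,t) = 0, \qquad v(x,0) = f(x) - \frac{u_\pi - u_0}{\pi}\,x - u_0,$

and the expansion $\displaystyle v(x,t) = \sum_{n=1}^{\infty} c_n(t) \sin(nx)$ from the first post now applies directly.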
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9744302034378052, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/114697/an-invariant-method-of-stationary-phase
|
## An invariant method of stationary phase
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The method of stationary phase is very well known and employed in many areas of physics and mathematics, and is, of course, included in various versions as a theorem in textbooks, especially those on pseudodifferential operators and microlocal analysis.
However, it always is somewhat dependent on local coordinates and the Fourier transform, despite being a quite invariant problem. To be precise, the question would be the following.
Let $M$ be a manifold and $\phi: M \longrightarrow \mathbb{C}$ be a smooth function with values in the closed right half plane. Let $u$ be a volume density on $M$ with compact support in $M$. Determine an asymptotic expansion as $t \rightarrow \infty$ of the integral $$I(\phi, u, t) = \int_M e^{-\phi t} u$$ under some nondegeneracy conditions on $\phi$.
(For example, one could require $\phi$ to be Morse or, more general, require that the set where it vanishes is a submanifold $C$ of $M$ and that at a point $p \in C$, the Hessian of $\phi$ is non-degenerate on the space $T_pM/T_pC$.)
It is well-known that in these cases $I(\phi, u, t)$ has an asymptotic expansion of the form $$I(\phi, u, t) = (t/\pi)^{-(n-k)/2}\sum_{j=0}^\infty t^{-j} \int_C s_j,$$ where $k$ is the dimension of $C$ and the $s_j$ are certain volume densities on $C$. In fact, they have to be certain universal terms, depending only on the $2j$-th jets of $\phi$ and $u$ at $C$. This is not stated in most textbooks.
I wonder if it is possible to find these terms $s_j$ using invariance theory alone. I would like to know if someone has ever thought about this and knows a reference for this more invariant, geometric approach.
/Edit: To clarify my question: I was wondering if it is possible to determine the constants by invariance theory, i.e. some argument like "there is only one polynomial on the $2j$-jets of $u$ and $\phi$ that is invariant under coordinate transformation" or so. For the first term, this goes like this, supposed that $\phi$ is purely real:
Define the $n-k$-density $\mathrm{H}\phi$ on $C$ by setting $$\mathrm{H}\phi[X_1, \dots, X_{n-k}] := \sqrt{\left|\det \bigl( D^2\phi[X_i, X_j] \bigr)_{ij}\right|},$$ where $D^2\phi$ is the (on $C$ well-defined) Hessian of $\phi$. Now $u/\mathrm{H}\phi$ is a $k$-density on $C$ -- this is $s_0$.
Now there should be similar characterizations of the higher $s_j$ (which obviously can get arbitrarily complicated).
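As a numerical aside (my addition, and a deliberately simpler setting: it checks the classical one-dimensional Laplace formula $\int e^{-t\phi}u\,dx \sim u(x_0)\sqrt{2\pi/(t\,\phi''(x_0))}$ for a real phase with a single nondegenerate minimum, rather than the normalization written above), the leading term is easy to verify:

```python
import numpy as np

x  = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

phi = lambda s: (s - 1.0)**2 + (s - 1.0)**4   # real phase, one nondegenerate minimum at x0 = 1
u   = lambda s: 1.0 + 0.3 * np.cos(s)          # smooth amplitude (arbitrary choice)

x0, phi2 = 1.0, 2.0                            # phi(x0) = 0 and phi''(x0) = 2

for t in (10.0, 100.0, 1000.0):
    exact   = np.sum(np.exp(-t * phi(x)) * u(x)) * dx     # brute-force integral
    leading = np.sqrt(2.0 * np.pi / (t * phi2)) * u(x0)   # Laplace leading term
    print(t, exact / leading)                             # ratio tends to 1 as t grows
```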
-
## 2 Answers
Check Proposition 1.2.4 from the book Fourier Integral Operators by the late great Duistermaat. This result applies in the case when the phase $\phi$ is Morse. If the phase is not Morse, but the critical points are finitely determined (finite Milnor number) then things are a bit more complicated. The vol 1 book of Arnold-Gusein-Zade Varchenko Singularities of differentiable maps is a good source. You can also have a look at the senior thesis of a Zach Lamberty, a former student of mine. There he deals with the $2$-dimensional case ($\dim M =2$) and he essentially works out the toric resolution trick of Arnold and comp. for a special and quite degenerate two variable phase.
-
I know the statement in Duistermaat, and I know that one can determine the $s_j$ more specifically using local coordinates. In fact, it is just a Laplacian when using Morse coordinates. However, it should be possible to figure out the $s_j$ without using any coordinates, just by using invariance theory. – Kofi Nov 27 at 20:51
$\phi=\Re \phi+i\Im \phi$. You have assumed $\Re \phi\ge 0$ and you deal with a complex phase function. Note that the standard notation is not yours, since what is usually called the real stationary phase method coincides here with the case $\phi$ purely imaginary.
Never mind, let's follow your notations and note that $t\rightarrow+\infty$ (the $+$ is missing in your formulation and is quite important since $\Re \phi\ge 0$).
(1) Let us assume that $\Im \phi$ does not have a stationary point on the support of the amplitude $u$ (that is, $d\Im \phi\not=0$ on $\operatorname{supp} u\cap\{\Re \phi= 0\}$): then $I(\phi, u,t)=O(t^{-N})$ for any $N>0$.
(2) Let us assume that $\Im \phi$ is such that $$d\Im \phi=0 \text{ at } \operatorname{supp} u\cap\{\Re \phi= 0\}\ \Longrightarrow\ \text{Hessian}(\Im \phi) \text{ non-singular,}$$ then $I(\phi, u,t)\sim ct^{-n/2}$, where $n=\dim M$. The constant $c$ can be computed explicitly in terms of the indices of the Hessian at the stationary points and the value of the amplitude there, and appears as a finite sum corresponding to the finite number of stationary points of $\Im \phi$ on the compact set $\operatorname{supp} u\cap\{\Re \phi= 0\}$. A complete expansion is available in Hormander ALPDO, first volume, Chapter 7, in the section devoted to the complex stationary phase method. To sum up the simple case exposed here: the integral is largest when $\Re \phi$ vanishes at a critical point of $\Im \phi$, and if that critical point is non-degenerate, you find a behavior in $t^{-n/2}$. No coordinate choice is involved here.
I should say that the real stationary phase method is easier to understand: it corresponds here to your case with $\phi$ purely imaginary (!). You may for instance assume that $i\phi$ is a real-valued Morse function and the Morse lemma is providing a normal form (No such thing exists for a complex valued function). You find a finite number of stationary points on the support of $u$, and you can take advantage of the normal form on a neighborhood of each critical point. Anyhow the contribution elsewhere is $O(t^{-\infty}).$ Morse lemma reduces the problem to an integral with an exactly quadratic phase for which you have a full expansion since you know explicitly the Fourier transform of a Gaussian function.
-
Yes, this is the standard way. It works just as well in the case that $\phi$ is purely real (and gets even simpler). However, it describes the asymptotic expansion via differential operators given in Morse coordinates. And there is a vast amount of Morse coordinates for a given function. Of course, the term cannot depend on this choice, however when looking at its definition, this is not obvious at all. I am looking for a description of these terms such that one can see their invariance of the choice of Morse chart by its definition. – Kofi Nov 27 at 21:39
The $c$ in my answer is, assuming $\phi=-i\psi$, $\psi$ real-valued Morse function $$c=\sum_{x\in supp u,d\psi(x)=0}e^{it\psi(x)}\frac{e^{i\frac{\pi}{4}sign \psi''(x)}}{\vert \psi''(x)\vert^{1/2}} u(x),$$ a coordinate-free expression. Note that $sign \psi''(x)$ stands for the signature of the quadratic form $\psi''(x)$, that is the number of positive eigenvalues minus the number of negative eigenvalues. The sum above is finite by compactness of $supp u$. – Bazin Nov 28 at 13:02
The notation $\vert \psi''(x)\vert^{1/2}$ means $$\vert\det \psi''(x)\vert^{1/2}.$$ – Bazin Nov 28 at 13:05
Yes, this is basically what I wrote in my edit to the above post, except that I considered the real case (I think the $e^{it\psi(x)}$ is wrong in your definition of $c$?). But the higher coefficients always make use of a chart. – Kofi Nov 28 at 15:06
The $e^{it\psi}$ must be there: the value of the phase at a critical point has to be taken into account. Imagine that you multiply everything by $i$. The signature is invariantly defined as well as the square root of the determinant, which is a half-density et the critical set. – Bazin Nov 28 at 19:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 81, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314349293708801, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/9169/request-for-reference-banach-type-spaces-as-algebraic-theories
|
## Request for reference: Banach-type spaces as algebraic theories.
Sparked by Yemon Choi's answer to Is the category of Banach spaces with contractions an algebraic theory? I've just spent a merry time reading and doing a bit of reference chasing. Imagine my delight at finding that one of my old favourites (functional analysis) and one of my new fads (category theory, and in particular algebraic theories) are actually very closely connected!
I was going to ask about the state of play of these things as it's a little unclear exactly what stage has been achieved. Reading the paper On the equational theory of $C^\ast$-algebras and its review on MathSciNet then it appears that although it's known that $C^\ast$-algebras do form an algebraic theory, an exact presentation in terms of operations and identities is still missing (at least at the time of that paper being written), though I may be misreading things there. It's possible to do a little reference chasing through the MathSciNet database, but the trail does seem to go a little cold and it's very hard to search for "$C^\ast$ algebra"!
But now I've decided that I don't want to just know about the current state of play, I'd like to learn what's going on here in a lot more detail since, as I said, it brings together two seemingly disparate areas of mathematics both of which I quite like.
So my real question is
• Where should I start reading?
Obviously, the paper Yemon pointed me to is one place to start but there may be a good summary out there that I wouldn't reach (in finite) time by a reference chase starting with that paper. So, any other suggestions? I'm reasonably well acquainted with algebraic theories in general so I'm looking for specifics to this particular instance.
Also, I'll write up my findings as I find them on the n-lab so anyone who wants to join me is welcome to follow along there. I probably won't actually start until the new year though.
-
## 1 Answer
Do an emath search for Waelbroeck, L*; note especially his paper "The Taylor spectrum and quotient Banach spaces". For more recent things, search for Castillo, J*. Also, Mariusz Wodzicki at Berkeley has unpublished notes that contain many things. I don't know if they are in a form for distribution.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9719796180725098, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/228692/is-u-fx-fx-in-p-3-operatornamedeg-fx-3-a-subspace-of-p/228693
|
# Is $U = \{f(x)| f(x) \in P_{3}, \operatorname{deg} f(x) = 3\}$ a subspace of $P_{3}$?
Given: $U = \{f(x)\mid f(x) \in P_{3}, \operatorname{deg} f(x) = 3\}$
Question: Is $U$ a subspace of $P_{3}$?
I think the answer is yes. But in my textbook, they say no, and explain that zero is not in the set, and that $U$ is not closed under scalar multiplication and addition.
Thanks :)
-
The zero polynomial is not in $U$, because the condition for entry into $U$ is being of degree 3, and the zero polynomial is not of degree 3. A subspace must contain the zero element of the space of which it is a subspace. – Gerry Myerson Nov 4 '12 at 5:42
@hqt To display {} in math-mode you have to add backslash `$\{...\}$`. – Martin Sleziak Nov 4 '12 at 7:19
## 1 Answer
$U$ is not closed under addition. For example, take $x^3$ and $-x^3$. Adding these gives the polynomial $0$, which does not have degree $3$, whence $0\notin U$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934938371181488, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/53026/finite-abelian-group/53036
|
# Finite Abelian Group
Let $G$ be a finite abelian group, $G = \{e, a_{1}, a_{2}, ..., a_{n} \}$. Prove that $(a_{1}a_{2}\cdot \cdot \cdot a_{n})^{2}$ = $e$.
I've been stuck on this problem for quite some time. Could someone give me a hint?
Thanks in advance.
-
every element has an inverse, and you can reorder terms – RHP Jul 22 '11 at 4:50
Can I assume $(a_{1}a_{2} \cdot \cdot \cdot a_{n})^{2} = e$ is true? If I can, then $(a_{1}a_{2} \cdot \cdot \cdot a_{n})(a_{1}a_{2} \cdot \cdot \cdot a_{n}) = e$. If $ab = e$, then $b = a^{-1}$. So $(a_{1}a_{2} \cdot \cdot \cdot a_{n}) = (a_{1}a_{2} \cdot \cdot \cdot a_{n})^{-1}$. Then $(a_{1}a_{2} \cdot \cdot \cdot a_{n})(a_{1}a_{2} \cdot \cdot \cdot a_{n})^{-1} = e$. Thus $e = e$??? – Student Jul 22 '11 at 4:59
@Jon: You cannot assume that what you are trying to prove is true; that leads to a circular argument. – Arturo Magidin Jul 22 '11 at 5:00
@ Arturo: Thanks for the clear up :) – Student Jul 22 '11 at 5:12
## 5 Answers
Here is a hint: for any given $a_i\in G$, there are two possibilities: either
• $a_i$ is its own inverse, or
• $a_i$ is not its own inverse, but rather $a_j=a_i^{-1}$ for some $j\neq i$.
-
So for the first case where $a_{i}$ = $a_{i}^{-1}$. $(a_{1}a_{2} \cdot \cdot \cdot a_{n})^{2} = (a_{1}a_{2} \cdot \cdot \cdot a_{n})(a_{1}a_{2} \cdot \cdot \cdot a_{n}) = (a_{1}a_{2} \cdot \cdot \cdot a_{n})(a_{1}^{-1}a_{2}^{-1} \cdot \cdot \cdot a_{n}^{-1}) = a_{1}a_{1}^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = e$ – Student Jul 22 '11 at 5:09
No, that reasoning isn't correct; for example, it might be the case that $a_3=a_3^{-1}$, but $a_4=a_5^{-1}$ and $a_5=a_4^{-1}$. In other words, the two cases I described above concern any single given $a_i\in G$, but there is no reason that every $a_i\in G$ should fall into the same case. – Zev Chonoles♦ Jul 22 '11 at 5:14
To explain it in yet another manner, it is false that either $$\bullet\quad\text{For every }a_i\in G,\,\,a_i\text{ is its own inverse.}$$ or $$\bullet\quad\text{For every }a_i\in G,\,\,a_i\text{ is not its own inverse.}$$ There may be some mixture. Combine the two arguments you have thought of for each case, and you will have an argument that will work in general. – Zev Chonoles♦ Jul 22 '11 at 5:22
$(a_{1}a_{2} \cdot \cdot \cdot a_{n})(a_{1}a_{2} \cdot \cdot \cdot a_{n})$. Then each $a_{i}$ in the second set of parentheses can be replaced appropriately by $a_{i}^{-1}$ if $a_{i}$ is its own inverse or by $a_{i}^{-1} = a_{j}$ for some $j \neq i$. Then since $G$ is Abelian, $a_{1}a_{1}^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = e$. Is this better? – Student Jul 22 '11 at 5:28
This is precisely the right idea! However, there is one small detail you should also prove to make a fully rigorous argument: you should show that an element and its inverse are paired uniquely. Supposing it were possible that $a_k=a_i^{-1}$ and $a_k=a_j^{-1}$ for $i\neq j$, then after rewriting the terms in the second set of parentheses, we'd have two copies of $a_k$, and be missing some other element of $G$. However, this cannot happen. Do you see why? – Zev Chonoles♦ Jul 22 '11 at 5:31
The map $\phi:x\in G\mapsto x^{-1}\in G$ is an automorphism of $G$ so, in particular, it induces a bijection $G\setminus\{e\}\to G\setminus\{e\}$. It maps $b=a_1\cdots a_n$ to itself, so that $b=b^{-1}$ and, therefore, $b^2=e$.
-
Automorphism? What's an automorphism? Well, you know, and I know, but do you reckon OP knows? – Gerry Myerson Jul 22 '11 at 5:57
@Gerry: Not yet, but hopefully I'll read about automorphisms soon enough :) – Student Jul 22 '11 at 6:04
@Gerry: he'll know soon enough :) – Mariano Suárez-Alvarez♦ Jul 22 '11 at 6:05
+1: this seems to be a genuinely different argument. – Pete L. Clark Jul 22 '11 at 6:59
This is very clever and succinct, +1! – Zev Chonoles♦ Jul 22 '11 at 7:22
Note that a question which is in some sense more natural is: "What is the product of all the elements of a finite abelian group?" The given question gives some information about this product, namely that it is an element of order at most $2$.
The answer to the more general question is given by the following result.
Wilson's Theorem in a Finite Abelian Group: Let $G$ be a finite abelian group. Then the product of all elements of $G$ is equal to the identity unless $G$ has exactly one element $t$ of order $2$, in which case the product is $t$. For a proof see e.g. $\S 6$ of these notes.
In fact one can view the answer to the OP's question as 2/3 of the way towards a proof of WTFAG. Namely, we take the product of all of the elements in the group, and note that the elements which have order greater than $2$ occur in pairs $x,x^{-1}$. Therefore the product of all the elements in a finite abelian group $G$ is also equal to the product of all the elements of $G[2]$, i.e., the product of all elements of order at most $2$. In an abelian group, the subset $G[2]$ is a subgroup, so the product has order $2$ and hence squares to the identity element $e$, completing the answer to the OP's question.
But now, on to WTFAG: First of all, if there are no elements of order $2$ then $G[2] = \{e\}$, verifying WTFAG in that case. Second of all, if there is exactly one element $t$ of order $2$, then $G[2] = \{e,t\}$ and the product of all elements in $G[2]$ is equal to $t$.
To complete the proof of WTFAG, one needs to consider the case in which $G[2]$ has more than two elements. In my notes, I do this by first establishing the CLAIM that $G[2]$ (or really any finite group in which every element is self-inverse) is isomorphic to a finite direct product of copies of groups of order $2$, and then using some simple counting arguments.
The CLAIM follows either from the structure theory for finite abelian groups -- which is proved in the same set of notes, but nevertheless the emphasis in the notes is on the fact that in many applications, especially in number theory, the big structure theorem can be avoided -- or from the fact that a group in which each element has order $2$ is necessarily a vector space over the field $\mathbb{F}_2$ of two elements (and then we invoke the structure theorem for finite-dimensional vector spaces). I would actually prefer to have a third, more elementary proof of this fact. Perhaps someone can suggest one?
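A quick computational check of both the original claim and WTFAG (my addition; it uses the unit groups $(\mathbb{Z}/n\mathbb{Z})^\times$ as sample finite abelian groups):

```python
from math import gcd

def check(n):
    # (Z/nZ)^x: a finite abelian group under multiplication mod n
    units = [a for a in range(1, n) if gcd(a, n) == 1]

    prod = 1
    for a in units:
        prod = prod * a % n

    involutions = [a for a in units if a != 1 and a * a % n == 1]

    # The original question: the product of all elements squares to the identity
    assert prod * prod % n == 1

    # WTFAG: the product is the identity unless there is exactly one element of
    # order 2, in which case the product is that element
    expected = involutions[0] if len(involutions) == 1 else 1
    assert prod == expected

    return prod, len(involutions)

for n in (5, 8, 12, 15, 16, 21, 35):
    print(n, check(n))
```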
-
HINT:
An element and its inverse are unique, and each element is either its own inverse or not.
-
In addition to Pete Clark's answer, there is also a very neat answer to the question What is the set of all different products of all the elements of a finite group $G$? So $G$ not necessarily abelian. Well, if a 2-Sylow subgroup of $G$ is trivial or non-cyclic, then this set equals the commutator subgroup $G'$. If a 2-Sylow subgroup of $G$ is cyclic, then this set is the coset $xG'$ of the commutator subgroup, with $x$ the unique involution of a 2-Sylow subgroup.
-
$@$Nicky: +1! This is a very nice generalization of the abelian case (which, in fact, I had been wondering about). Do you have any references to where these facts are discussed? – Pete L. Clark Jul 22 '11 at 7:59
@Pete: have to look it up, author was the Hungarian mathematician J. Dénes. See also J. Dénes and P. Hermann, `On the product of all elements in a finite group', Ann. Discrete Math. 15 (1982) 105-109. The theorem also connects to the theory of Latin Squares and so-called complete maps. – Nicky Hekster Jul 22 '11 at 9:40
Thanks for the reference! – Mariano Suárez-Alvarez♦ Jul 22 '11 at 23:45
http://www.physicsforums.com/showthread.php?p=4111273
Physics Forums
Quadruples of Integers
This should be a simple combinatorial problem. Suppose I have a number n which is a positive integer. Suppose that there are four numbers a, b, c, d such that 0 <= a <= b <= c <= d <= n.
The question is: how many quadruples of the form (a,b,c,d) can be formed from such an arrangement?
I realize that this is a homework-like question, but I am really interested in seeing which principles of combinatorics would apply here.
Hey YAHA. Recall that if you get a number x on a particular draw, then the number of possibilities for the next draw is n - x + 1 (any value from x up to n). The rest is to apply the multiplication rule to count the possibilities for one particular set of earlier observations, and then sum over all consistent earlier values to get the total. So, for example, once the first three values a <= b <= c are fixed there are n - c + 1 choices left for the last entry, and summing n - c + 1 over all consistent values of a, b and c gives the total number of quadruples.
The number of quadruples is the number of multisets of 4 numbers drawn from the integers 0, 1, 2, ..., n (n+1 integers), hence $$\binom{n+1+4-1}{4} = \binom{n+4}{4}$$
Quote by awkward The number of quadruples is the number of multisets of 4 numbers drawn from the integers 0, 1, 2, ..., n (n+1 integers), hence $$\binom{n+1+4-1}{4} = \binom{n+4}{4}$$
Quite so, but I suspect that will need more detailed explanation. Here's my version, yours may be simpler.
We can map the problem into one where a < b < c < d, merely by adding 0, 1, 2, 3 respectively. Since we added 3 to d, we must also add 3 to n. So now it's a matter of the number of ways of choosing 4 distinct numbers from 0, ..., n+3, which again gives $$\binom{n+4}{4}$$
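For what it's worth, here is a quick brute-force confirmation of that count, added here rather than taken from the thread; `combinations_with_replacement` enumerates exactly the non-decreasing quadruples.

````
# Check that the number of quadruples 0 <= a <= b <= c <= d <= n is C(n+4, 4).
from itertools import combinations_with_replacement
from math import comb

for n in range(12):
    brute = sum(1 for _ in combinations_with_replacement(range(n + 1), 4))
    assert brute == comb(n + 4, 4), (n, brute)
print("C(n+4, 4) matches the brute-force count for n = 0..11")
````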
http://stats.stackexchange.com/questions/4959/how-do-you-calculate-the-expectation-of-left-sum-i-1n-x-i-right2
# How do you calculate the expectation of $\left(\sum_{i=1}^n {X_i} \right)^2$?
If $X_i$ is exponentially distributed $(i=1,...,n)$ with parameter $\lambda$ and $X_i$'s are mutually independent, what is the expectation of
$$\left(\sum_{i=1}^n {X_i} \right)^2$$
in terms of $n$ and $\lambda$ and possibly other constants?
Note: This question has received a mathematical answer at http://math.stackexchange.com/q/12068/4051; readers may want to take a look at that as well.
-
2
The two copies of this question reference each other and, appropriately, the stats site (here) has a statistical answer and the math site has a mathematical answer. It seems like a good division: let it stand! – whuber♦ Mar 4 '11 at 21:59
## 3 Answers
If $x_i \sim Exp(\lambda)$, then (under independence), $y = \sum x_i \sim Gamma(n, 1/\lambda)$, so $y$ is gamma distributed (see wikipedia). So, we just need $E[y^2]$. Since $Var[y] = E[y^2] - E[y]^2$, we know that $E[y^2] = Var[y] + E[y]^2$. Therefore, $E[y^2] = n/\lambda^2 + n^2/\lambda^2 = n(1+n)/\lambda^2$ (see wikipedia for the expectation and variance of the gamma distribution).
-
2
+1 Nice answer! – whuber♦ Nov 27 '10 at 17:10
Thanks. A very neat way of answering the question (leading to the same answer) was also provided on math.stackexchange (link above in the question) a few minutes ago. – Wolfgang Nov 27 '10 at 17:24
1
The math answer computes the integrals using linearity of expectation. In some ways it's simpler. But I like your solution because it exploits statistical knowledge: because you know a sum of independent Exponential variables has a Gamma distribution, you're done. – whuber♦ Nov 27 '10 at 21:19
1
I enjoyed it quite a bit and I am by no means a statistician or a mathematician. – Kortuk Nov 29 '10 at 18:04
very elegant answer. – Cyrus S Nov 30 '10 at 16:44
The answer above is very nice and completely answers the question but I will, instead, provide a general formula for the expected square of a sum and apply it to the specific example mentioned here.
For any set of constants $a_1, ..., a_n$ it is a fact that
$$\left( \sum_{i=1}^{n} a_i \right)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{i} a_{j}$$
this is true by the Distributive property and becomes clear when you consider what you're doing when you calculate $(a_1 + ... + a_n) \cdot (a_1 + ... + a_n)$ by hand.
Therefore, for a sample of random variables $X_1, ..., X_n$, regardless of the distributions,
$$E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = E \left( \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j)$$
provided that these expectations exist.
In the example from the problem, $X_1, ..., X_n$ are iid ${\rm exponential}(\lambda)$ random variables, which tells us that $E(X_{i}) = 1/\lambda$ and ${\rm var}(X_i) = 1/\lambda^2$ for each $i$. By independence, for $i \neq j$, we have
$$E(X_i X_j) = E(X_i) \cdot E(X_j) = \frac{1}{\lambda^2}$$
There are $n^2 - n$ of these terms in the sum. When $i = j$, we have
$$E(X_i X_j) = E(X_{i}^{2}) = {\rm var}(X_{i}) + E(X_{i})^2 = \frac{2}{\lambda^2}$$
and there are $n$ of these terms in the sum. Therefore, using the formula above,
$$E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j) = (n^2 - n)\cdot\frac{1}{\lambda^2} + n \cdot \frac{2}{\lambda^2} = \frac{n^2 + n}{\lambda^2}$$
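As a quick numerical check of this formula (an addition, not part of the original answer), one can simulate the sum directly; here the exponential is parameterised by its rate $\lambda$, matching the $1/\lambda^2$ answers above.

````
# Monte Carlo check of E[(X_1 + ... + X_n)^2] = n(n+1)/lambda^2
# for iid Exponential random variables with rate lambda.
import numpy as np

rng = np.random.default_rng(0)
n, lam, trials = 5, 2.0, 200_000
samples = rng.exponential(scale=1.0 / lam, size=(trials, n))  # scale = 1/rate
estimate = np.mean(samples.sum(axis=1) ** 2)
exact = n * (n + 1) / lam ** 2   # = 7.5 for these values
print(estimate, exact)           # the two should agree to a couple of decimals
````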
-
This problem is just a special case of the much more general problem of 'moments of moments' which are usually defined in terms of power sum notation. In particular, in power sum notation:
$$s_1 = \sum_{i=1}^{n} X_i$$
Then, irrespective of the distribution, the original poster seeks $E[s_1^2]$ (provided the moments exist). Since the expectations operator is just the 1st Raw Moment, the solution is given in the mathStatica software by a one-line command whose output appeared as an image in the original post (not reproduced here); written out, that general solution `sol` is $E[s_1^2] = n\,\mu_2' + n(n-1)\,(\mu_1')^2$, where $\mu_i'$ denotes the $i$-th raw moment of the population.
[ The '___ToRaw' means that we want the solution presented in terms of raw moments of the population (rather than say central moments or cumulants). ]
Finally, if $X$ ~ Exponential($\lambda$) with pdf $f(x)$:
````f = Exp[-x/λ]/λ; domain[f] = {x, 0, ∞} && {λ > 0};
````
then we can replace the moments $\mu_i$ in the general solution `sol` with the actual values for an Exponential random variable, namely $\mu_1' = \lambda$ and $\mu_2' = 2\lambda^2$ (the corresponding output image is not reproduced here), which yields $E[s_1^2] = n(n+1)\lambda^2$.
All done.
P.S. The reason the other solutions posted here yield an answer with $\lambda^2$ in the denominator rather than the numerator is, of course, because they are using a different parameterisation of the Exponential distribution. Since the OP didn't state which version he was using, I decided to use the standard distribution theory textbook definition Johnson Kotz et al … just to balance things out :)
-
http://mathhelpforum.com/pre-calculus/177424-complex-number-proof.html
# Thread:
1. ## Complex Number Proof
For the complex equation $z^4 = \cos x + i\sin x$, prove that the sum of the four solutions is always zero, no matter what size $x$ is.
This is what I've done so far.
$z = \left(\operatorname{cis}(x + 360k)\right)^{1/4} = \operatorname{cis}(x/4 + 90k)$
Can someone please teach me how to do the rest? Thanks in advance
2. Rewrite it as $\displaystyle z^4 = e^{ix}$.
This means that the first fourth root is $\displaystyle z = \left(e^{ix}\right)^{\frac{1}{4}} = e^{i\frac{x}{4}} = \cos{\left(\frac{x}{4}\right)} + i\sin{\left(\frac{x}{4}\right)}$.
The other fourth roots are evenly spaced around a circle, so successive roots differ by an angle of $\displaystyle \frac{\pi}{2}$. What are the other solutions? What do you get when you add them together?
3. A little twist on ProveIt's approach:
Let $w = \cos x + i \sin x$, so the equation can be written
$z^4 - w = 0$.
If the roots are $r_1, r_2, r_3, r_4$, then we must have
$z^4 - w = (z - r_1) (z - r_2) (z - r_3) (z - r_4)$.
If we expand the right-hand side of this equation, what can we say about the coefficient of $z^3$?
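A quick numerical illustration of both hints (added here, not from the thread): the four fourth roots of $e^{ix}$ really do sum to zero, because they are $e^{ix/4}$ times the fourth roots of unity $1, i, -1, -i$.

````
# The four fourth roots of cos(x) + i sin(x) sum to zero for any x.
import cmath

x = 0.7  # an arbitrary angle (radians)
roots = [cmath.exp(1j * (x + 2 * cmath.pi * k) / 4) for k in range(4)]
print(abs(sum(roots)))                                              # ~0 up to rounding
print(all(abs(r ** 4 - cmath.exp(1j * x)) < 1e-12 for r in roots))  # True
````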
http://math.stackexchange.com/questions/74770/distribution-of-the-normal-cdf
# distribution of the normal cdf
I am wondering what the probability density function of $\Phi (aX+b)$ is, where $\Phi$ is the usual standard normal cumulative distribution function.
I want to calculate $\mathbb{E}[\Phi(aX+b)]$ but I am stuck on how to get the distribution. Thank you =]
-
What does $X$ stand for? Where does the problem come from? – André Nicolas Oct 22 '11 at 5:40
X is a random variable, I thought it up, trying to calculate the expected value of a cumulative distribution – Jess C Oct 22 '11 at 5:57
The expected value of a function $f(X)$ of a random variable $X$ depends in general on the distribution of $X$, and not only the mean of $X$. There was no specification made in the post about the distribution of $X$, only about what $f$ was. – André Nicolas Oct 22 '11 at 6:12
oops hehe, thanks for reminding me =] – Jess C Oct 22 '11 at 6:26
X is normally distributed – Jess C Oct 22 '11 at 6:26
## 1 Answer
Let $X$ and $Y$ be independent standard normal random variables. Then $$\mathbb{E}(\Phi(a X + b)) = \mathbb{E}( \mathbb{P}( Y \le a x + b \vert X = x ) ) = \mathbb{P}(Y- a X \le b )$$ But the combination $Z = Y-a X$ also follows a normal distribution (being a linear combination of independent normals), with zero mean and variance $\mathbb{E}((Y-a X)^2) = 1 + a^2$. Hence $$\mathbb{E}(\Phi(a X + b)) = \Phi\left(\frac{b}{\sqrt{1+a^2}}\right)$$
Here is a numerical check:
````In[14]:= With[{a = 3.,
b = 1/2}, {NExpectation[CDF[NormalDistribution[], a x + b],
x \[Distributed] NormalDistribution[]],
CDF[NormalDistribution[], b/Sqrt[1 + a^2]]}]
Out[14]= {0.562816, 0.562816}
````
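For readers without Mathematica, here is an equivalent cross-check (added here, not part of the original answer) using numpy and scipy:

````
# Monte Carlo estimate of E[Phi(aX+b)] versus the closed form Phi(b/sqrt(1+a^2)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b = 3.0, 0.5
x = rng.standard_normal(1_000_000)
print(norm.cdf(a * x + b).mean())         # Monte Carlo, ~0.5628
print(norm.cdf(b / np.sqrt(1 + a ** 2)))  # closed form,  ~0.562816
````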
-
how do you get the first equality? thanks – Jess C Oct 22 '11 at 6:32
Sasha: +1. – Did Oct 22 '11 at 8:18
@JessC The first equality is the definition of the cumulative density function, namely $\Phi(y) = \mathbb{P}(Y \le y )$. – Sasha Oct 22 '11 at 9:22
+1 indeed! But may I request that the first inequality be expanded out a little e.g. as in $$E[\Phi(aX+b)] = \int_{-\infty}^{\infty}\Phi(ax+b)\phi(x)dx = \int_{-\infty}^{\infty}P(Y \leq aX + b\mid X = x)\phi(x)dx = P(Y \leq aX + b)\ldots...$$ so as to make the connection very clear? – Dilip Sarwate Oct 22 '11 at 21:06
@DilipSarwate Yes, I agree this would add clarity. I will be able to make the change only in few hours from now. Thanks for the comment and the upvote – Sasha Oct 22 '11 at 21:23
http://unapologetic.wordpress.com/2010/12/16/dimensions-of-young-tabloid-modules/
# The Unapologetic Mathematician
## Dimensions of Young Tabloid Modules
Last time our efforts to calculate the characters of the modules $M^\lambda$ were stymied. But at least we can calculate their dimensions. The dimension of $M^\lambda$ is the number of Young tabloids of shape $\lambda$.
Again, we pick some canonical Young tableau $Y$ of shape $\lambda$ so that every other tableau $t$ can be written uniquely as $t=\tau Y$ for some $\tau\in S_n$. That is, the set of all Young tabloids $\{t\}$ is the orbit $S_n\{Y\}$ of the canonical one. By general properties of group actions we know that the orbit is in bijection with the set of cosets of the stabilizer of $\{Y\}$ in $S_n$, so its size is the index of that stabilizer. That is, we must count the number of permutations $\tau\in S_n$ with $\tau Y$ row-equivalent to $Y$.
It doesn’t really matter which $Y$ we pick; any two tableaux in the same orbit — and they’re all in the same single orbit — have isomorphic stabilizers. But like we mentioned last time the usual choice lists the numbers from $1$ to $\lambda_1$ on the first row, from $\lambda_1+1$ to $\lambda_1+\lambda_2$ on the second row, and so on. We write $S_\lambda$ for the stabilizer of this choice, and this is the subgroup of $S_n$ we will use. Notice that this is exactly the same subgroup we described earlier.
Anyway, now we know that Young tabloids $\{\tau Y\}$ correspond to cosets of $S_\lambda$; if $\tau'=\tau\pi$ for some $\pi\in S_\lambda$, then
$\displaystyle\{\tau' Y\}=\{\tau\pi Y\}=\tau\{\pi Y\}=\tau\{Y\}=\{\tau Y\}$
So we can count these cosets in the usual way:
$\displaystyle[S_n:S_\lambda]=\lvert S_n\rvert/\lvert S_\lambda\rvert=n!/\lvert S_\lambda\rvert$
How big is $S_\lambda$? Well, we know that
$\displaystyle S_\lambda\cong S_{\lambda_1}\times\dots\times S_{\lambda_k}$
and so
$\displaystyle\lvert S_\lambda\rvert=\lvert S_{\lambda_1}\rvert\dots\lvert S_{\lambda_k}\rvert=\lambda_1!\dots\lambda_k!$
Since it will come up so often, we will write this product of factorials as $\lambda!$ for short. We can then write $\lvert S_\lambda\rvert=\lambda!$ and thus we calculate $n!/\lambda!$ for the number of cosets of $S_\lambda$ in $S_n$. And so this is also the number of Young tabloids of shape $\lambda$, and also the dimension of $M^\lambda$.
Now, along the way we saw that the Young tabloid $\{\tau Y\}$ corresponds to the coset $\tau S_\lambda$. It should be clear that the action of $S_n$ on the Young tabloids is exactly the same as the coset action corresponding to $S_\lambda$. And thus the permutation module $M^\lambda$ must be isomorphic to the induced representation $1\!\!\uparrow_{S_\lambda}^{S_n}$.
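Here is a small brute-force check of the count $n!/\lambda!$, an addition to the post rather than part of it: a tabloid of shape $\lambda$ amounts to a way of distributing $\{1,\dots,n\}$ into ordered rows of sizes $\lambda_1,\dots,\lambda_k$, so we can just enumerate those.

````
# The number of Young tabloids of shape lambda equals the multinomial
# coefficient n!/(lambda_1! ... lambda_k!).
from itertools import combinations
from math import factorial, prod

def count_tabloids(shape):
    def fill(remaining, rows_left):
        if not rows_left:
            return 1
        r, rest = rows_left[0], rows_left[1:]
        return sum(fill(remaining - set(row), rest)
                   for row in combinations(sorted(remaining), r))
    return fill(set(range(sum(shape))), tuple(shape))

for shape in [(2, 1), (3, 2), (2, 2, 1), (4, 1, 1)]:
    formula = factorial(sum(shape)) // prod(factorial(p) for p in shape)
    assert count_tabloids(shape) == formula, shape
print("n!/lambda! matches the brute-force tabloid count")
````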
http://mathoverflow.net/questions/55097/can-homologous-submanifolds-be-connected-by-an-immersed-manifold-with-boundary/55102
## Can homologous submanifolds be connected by an immersed manifold with boundary?
Suppose I have an $n$-dimensional manifold $M$ with a $k$-dimensional submanifold that is homologous to zero (or, equivalently, two homologous submanifolds). Can I always construct a $(k+1)$-dimensional manifold $N$ and a smooth map $N\to M$ so that the boundary maps diffeomorphically to my submanifold? Can I just take abstract $(k+1)$-simplices and glue them along boundaries to make $N$, and then somehow smooth it out? If not, is there some understandable obstruction?
I'm most interested in the smooth category, but if it makes more sense in some other category (or there is otherwise a better question I should've asked), do tell me.
Update: As I first asked it, the question was a bit stupid because I forgot about cobordisms. However, in the case I care about, this does not seem to be a problem, since I want the boundary of N to be a union of two submanifolds which are diffeomorphic to each other.
-
2
Are there any hypotheses? Are the homologous submanifolds assumed disjoint (or null-homologous submanifold embedded)? – Igor Rivin Feb 11 2011 at 6:34
## 3 Answers
As Igor has noted, an obvious obstruction is that your embedded submanifold be null-cobordant.
It seems that you are really asking about the kernel of the realization map `$MO_k(M)\to H_k(M;\mathbb{Z}_2)$` (assuming that you don't care about orientations). This appears as the edge homomorphism in the unoriented bordism spectral sequence, which collapses since every homology class is Steenrod realizable (as follows from the work of Thom). A basic reference for this fact is the book "Differentiable periodic maps" by Conner and Floyd. It follows that there is a module isomorphism `$H_*(M; \mathbb{Z}_2)\otimes MO_*\cong MO_*(M)$`. The bordism class of a map $f\colon A\to M$ is determined by its Stiefel-Whitney numbers (see Section 17 of Conner and Floyd). From this you should be able to piece together what you need.
More detail: Let `$f\colon A^k\hookrightarrow M$` be the embedding of your submanifold. Every cohomology class `$x\in H^\ell(M;\mathbb{Z}_2)$` and multi-index `$(i_1,\ldots,i_r)$` with `$i_1+\cdots + i_r=k-\ell$` gives a Stiefel-Whitney number of the map $f$, defined by `$$\langle w_{i_1}(A)\cdots w_{i_r}(A)f^*(x),[A]\rangle\in\mathbb{Z}_2.$$` Your map is null-bordant if and only if these are all zero. (Note when $x$ is the unit class we get the S-W numbers of $A$. Also the multi-index $(0)$ gives trivial numbers by your assumption that `$f_*[A]=0$`.)
-
Thank you, I might look there if I can't come up with some more elementary way of seeing this. – Ilya Grigoriev Feb 11 2011 at 19:30
I recommend it! The theory of bordism groups is very beautiful, and was developed precisely to answer questions such as yours. – Mark Grant Feb 12 2011 at 12:29
Also note that my amended answer completely answers your question in the unoriented case, modulo a good understanding of the cohomology of your manifolds and the map induced by the inclusion. For example, if your embedded submanifolds are spheres, the answer to your question is yes. – Mark Grant Feb 12 2011 at 12:33
Thank you again! You are right, this is probably the way I should be understanding this after all. I'll look at Conner and Floyd. – Ilya Grigoriev Feb 18 2011 at 20:39
Presumably, the answer is NO, since every manifold $K$ embeds in $\mathbb{S}^n$ of high dimension (where it is then null-homologous), but not every manifold is null-cobordant, in other words, there are $K$ such that there is no $N$ such that $\partial N = K.$
-
You are right, of course, thanks! You were also correct that I want an additional hypothesis - I think I can assume my manifold to be null-cobordant. – Ilya Grigoriev Feb 11 2011 at 19:30
First of all, it doesn't matter whether or not the map is smooth. If you find any continuous map, then it will have a smooth approximation.
The other answers so far explain that the cobordism group gives you an obstruction to improving a singular-simplicial chain into a mapped-in manifold. In fact, it is easy to see that this is basically the only obstruction, and that null cobordism directly gives you a way to improve the simplicial chain. I'll work with integer coefficients rather than over $\mathbb{Z}/2$ so that things survive a little longer. Say that you have this $(k+1)$-dimensional cobounding chain. You can manifold-ize a $k$-dimensional face of the chain because various $k$-dimensional sheets meet the face with opposite sign and you can pair them. This is basically using the fact that the reduced 0-cobordism group is trivial. Then turn to the $(k-1)$-faces. Because of what you did to the $k$-faces, the sheets meet the $(k-1)$-faces in a collection of circles. But circles are null cobordant, so you can smooth them. You can continue in this way using the fact that oriented surfaces and 3-manifolds are all null-cobordant. But when you try to improve a $(k-4)$-face, the link of an incoming sheet can be a 4-manifold that is not null-cobordant, like $\mathbb{C}P^2$. Then you're stuck.
The obstruction is fundamental because the original null-homologous $k$-cycle could have been an embedded $\mathbb{C}P^2$, and the original cobounding chain could have been a cone over it.
-
Thank you, your answer was very helpful. I'm especially hopeful because of your statement that cobordism is the only problem - I don't think it's a problem for me. – Ilya Grigoriev Feb 11 2011 at 19:29
In the construction that I describe, you have to be careful to check that all cobordisms that you will need for the entire cobounding chain are available, and not just that your cycle is represented by a null-cobordant manifold. – Greg Kuperberg Feb 11 2011 at 20:10
http://mathoverflow.net/questions/18636/number-of-invertible-0-1-real-matrices/18639
## Number of invertible {0,1} real matrices?
This question is inspired from here, where it was asked what possible determinants an $n \times n$ matrix with entries in {0,1} can have over $\mathbb{R}$.
My question is: how many such matrices have non-zero determinant?
If we instead view the matrix as over $\mathbb{F}_2$ instead of $\mathbb{R}$, then the answer is
$(2^n-1)(2^n-2)(2^n-2^2) \dots (2^n-2^{n-1}).$
This formula generalizes to all finite fields $\mathbb{F}_q$, which leads us to the more general question of how many $n \times n$ matrices with entries in { $0, \dots, q-1$ } have non-zero determinant over $\mathbb{R}$?
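For very small $n$ the count can simply be done by exhaustion; the following sketch (added here, not part of the original question) enumerates all $2^{n^2}$ matrices and tests the real determinant. For $n=2$ it finds $6$ invertible matrices out of $16$, and $2^{n^2}$ minus the printed count reproduces the singular counts in OEIS A046747 cited in the comments and answers below.

````
# Brute-force count of n x n {0,1} matrices that are invertible over R.
from itertools import product
import numpy as np

for n in range(1, 4):
    invertible = 0
    for bits in product((0, 1), repeat=n * n):
        m = np.array(bits, dtype=float).reshape(n, n)
        if abs(np.linalg.det(m)) > 0.5:  # the determinant is an integer
            invertible += 1
    print(n, invertible)
````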
-
5
research.att.com/~njas/sequences/A046747 – Steve Huntsman Mar 18 2010 at 19:00
1
In particular, follow Zivkovic's link on the page for A046747. He has the most recent data for small n that I have found. Gerhard "Ask Me About System Design" Paseman, 2010.03.18 – Gerhard Paseman Mar 18 2010 at 22:32
I have yet to find anyone addressing the spectrum question for (0,1,...,q-1) matrices or related questions. I look forward to anyone posting a pointer to such material. Gerhard "Ask Me About System Design" Paseman, 2010.03.18 – Gerhard Paseman Mar 18 2010 at 22:36
What does the obvious chinese remaindering argument give you: in other words, the biggest the determinant could be is $n^n$ (by Hadamard). Now, assuming (that's the big problem obviously) that the probability that a $0, 1$ matrix mod $p$ is singular is the same as the probability that a random matrix is singular, then you get an obvious product over primes for the probability. Is this close to the conjectured answer? – Igor Rivin Mar 23 2012 at 16:15
## 3 Answers
See Sloane, A046747 for the number of singular (0,1)-matrices. It doesn't seem like there's an exact formula, but it's conjectured that the probability that a random (0,1)-matrix is singular is asymptotic to $n^2/2^n$.
Over $F_2$ the probability that a random matrix is nonsingular, as $n \to \infty$, approaches the product $(1/2)(3/4)(7/8)\cdots = 0.2887880951$, and so the probability that a random large matrix is singular is only around 71 percent. I should note that a matrix is singular over $F_2$ if its real determinant is even, so this tells us that determinants of 0-1 matrices are more likely to be even than odd.
-
Thanks Michael. Is there an elementary proof of why the determinant is more likely to be even than odd? – Tony Huynh Mar 18 2010 at 19:34
Not that I know of, but I'm hardly an expert on random matrices. – Michael Lugo Mar 18 2010 at 19:36
12
Fill in a size $n$ square matrix over $\mathbb{F}_2$ row by row. If the first $n-1$ rows are linearly independent, the whole matrix will have zero determinant. Otherwise the last row will cause the determinant to vanish if its entries satisfy a linear equation; this happens with conditional probability $1/2$. – Robin Chapman Mar 18 2010 at 19:58
I think you mean "linearly dependent". – Douglas S. Stones Mar 24 2010 at 6:55
As Michael noted, the probability that a random $(0,1)$ matrix is singular is conjectured to be $(1+o(1)) n^2 2^{-n}$. This corresponds to the natural lower bound coming from the observation that if a matrix has two equal rows or columns it is automatically singular.
The best bound currently known for this problem is $(\frac{1}{\sqrt{2}} + o(1) )^n$, and is due to Bourgain, Vu, and Wood. Corollary 3.3 in their paper also gives a bound of $(\frac{1}{\sqrt{q}}+o(1))^n$ in the case where entries are uniformly chosen from $\{0, 1, \dots, q-1\}$ (here the conjectured bound would be around $n^2 q^{-n}$).
Even showing that the determinant is almost surely non-zero is not easy (this was first proven by Komlos in 1967, and the reference is given in Michael's Sloane link).
-
Lurking around MO, I found a question which is related to the second part of my question. Namely, Greg Martin and Erick B. Wong prove that if the entries of an $n \times n$ matrix are chosen randomly with respect to a uniform distribution from the set $\{-k, -k+1, \cdots, -1, 0, 1, \cdots, k-1, k\}$, then the probability that the resulting matrix is singular is $\ll k^{-2 + \epsilon}$.
See this MO question (from which the above paragraph is plagiarized) and also here for the link to the Martin-Wong paper.
-
http://mathoverflow.net/questions/28496/what-should-be-learned-in-a-first-serious-schemes-course/28594
What should be learned in a first serious schemes course?
I've just finished teaching a year-long "foundations of algebraic geometry" class. It was my third time teaching it, and my notes are gradually converging. I've enjoyed it for a number of reasons (most of all the students, who were smart, hard-working, and from a variety of fields). I've particularly enjoyed talking with experts (some in nearby fields, many active on mathoverflow) about what one should (or must!) do in a first schemes course. I've been pleasantly surprised to find that those who have actually thought about teaching such a course (and hence who know how little can be covered) tend to agree on what is important, even if they are in very different parts of the subject. I want to raise this question here as well:
What topics/examples/ideas etc. really really should be learned in a year-long first serious course in schemes?
Here are some constraints. Certainly most excellent first courses ignore some or all of these constraints, but I include them to focus the answers. The first course in question should be purely algebraic. (The reason for this constraint: to avoid a debate on which is the royal road to algebraic geometry --- this is intended to be just one way in. But if the community thinks that a first course should be broader, this will be reflected in the voting.) The course should be intended for people in all parts of algebraic geometry. It should attract smart people in nearby areas. It should not get people as quickly as possible into your particular area of research. Preferences: It can (and, I believe, must) be hard. As much as possible, essential things must be proved, with no handwaving (e.g. "with a little more work, one can show that...", or using exercises which are unreasonably hard). Intuition should be given when possible.
Why I'm asking: I will likely edit the notes further, and hope to post them in chunks over the 2010-11 academic year to provoke further debate. Some hastily-written thoughts are here, if you are curious.
As usual for big-list questions: one topic per answer please. There is little point giving obvious answers (e.g. "definition of a scheme"), so I'm particularly interested in things you think others might forget or disagree with, or things often omitted, or things you wish someone had told you when you were younger. Or propose dropping traditional topics, or a nontraditional ordering of traditional topics. Responses addressing prerequisites such as "it shouldn't cover any commutative algebra, as participants should take a serious course in that subject as a prerequisite" are welcome too. As the most interesting responses might challenge (or defend) conventional wisdom, please give some argument or evidence in favor of your opinion.
Update later in 2010: I am posting the notes, after suitable editing, and trying to take into account the advice below, here. I hope to reach (near) the end some time in summer 2011. Update July 2011: I have indeed reached near the end some time in summer 2011.
-
1
Dear Ravi, while I'm not sure that this should be taught in a first schemes course, it's something that I'd love to see exposited more fully. Jim Borger gave an outline of a program to jump straight into algebraic spaces, skipping schemes entirely. Maybe you could figure out a way to do it? sbseminar.wordpress.com/2009/08/06/… – Harry Gindi Jun 17 2010 at 12:57
10
Community wiki? – Andrew Stacey Jun 17 2010 at 14:28
4
It's also worth linking to the meta discussion that Ravi started to help him craft this question before asking it: meta.mathoverflow.net/discussion/446/… – Andrew Stacey Jun 17 2010 at 14:30
4
A parsing question: does "first serious schemes course" mean that there could be a prior, not-so-serious course on schemes? Or do you mean "first, serious schemes course"? – Pete L. Clark Jun 17 2010 at 17:15
3
Re: community wiki. Ravi considered this (see the meta thread mentioned above), and decided against it, so I'm not going to use the wiki-hammer unless asked. – Scott Morrison♦ Jun 17 2010 at 19:42
31 Answers
One of the wholly unnecessary reasons that schemes are regarded with such fear by so many mathematicians in other fields is that three, largely orthogonal, generalizations are made simultaneously.
Considering a "variety" to be Spec or Proj of a domain finitely generated over an algebraically closed field, the generalizations are basically
1. Allowing nilpotents in the ring
2. Gluing affine schemes together
3. Working over a base ring that isn't an algebraically closed field (or even a field at all).
For many years I got by with only #1. More recently I've been interested in #1 + #3. Presumably someday I'll care about #2, but not yet. Anyway I think it's crazy to give the impression that the three are a package deal that one must buy all of simultaneously, rather than in much easier installments.
I think it could be useful to explain which subfield of mathematics, or which important example, motivates which of #1,#2,#3 is really a necessary generalization.
-
31
The way I think about it is: #1 is analysis (looking at near-solutions $P_1(x)=\ldots=P_k(x)=O(\varepsilon)$ of equations instead of exact equations $P_1(x)=\ldots=P_k(x)=0$). #2 is differential geometry (looking at manifolds instead of coordinate patches). #3 is number theory (solving equations over number fields, rings of integers, etc.). This way I can minimise my exposure to algebra and topology, my two weakest suits. :-) – Terry Tao Jun 18 2010 at 5:57
4
Allen, in what sense do you mean you've never cared about #2? I'd have thought it is the basic ingredient by which one thinks about any non-affine object (akin to manifolds, as in Terry's comment), and so much more intuitive than #1 and #3? I assume you are implicitly speaking of gluing in the Zariski topology rather than the etale topology (algebraic spaces...), but perhaps I misunderstand. – BCnrd Jun 18 2010 at 7:01
3
I like these three a lot. In each case things the new notion is forced upon you by nature. I like Terry's interpretation (and like to note that certain notions are "geometry" or "arithmetic" --- I'd never used the word "analysis" for similar reasons to Terry's reasons for algebra and topology... :-). What about #4: non-closed points? This could even bump out #2, which is already present in Proj. Somehow this local-to-global issue is already present in how one thinks of a manifold (and even if people may not know it well at this stage, they have a good intuitive idea of it). – Ravi Vakil Jun 18 2010 at 13:22
11
The Princeton coclass of '92, then. :) – Terry Tao Jun 19 2010 at 16:00
6
Incidentally, Andrei Okounkov (more Princeton!) expressed to me the view that a scheme is "just a ring". I guess he's a #1+#3 guy rather than a #2 guy... – Terry Tao Jun 19 2010 at 16:07
As you should, you prove the Nullstellensatz early on, as the statement that the closed points of $\mathbb{A}^n_k$ are in bijection with $k^n$, for $k$ an algebraically closed field. I wonder whether it is also a good idea to say that, for any $k$, the closed points of $\mathbb{A}^n_k$ are in bijection with the Galois orbits in $\overline{k}^n$. This might require too big a digression into Galois theory, but I remember a number of my grad school classmates having confusions about closed points over non-algebraically closed fields which could be immediately answered from this description.
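As a concrete illustration (added here; it is not part of the original answer, though the first comment below makes the same point): the closed points of $\mathbb{A}^1_{\mathbb{R}}=\operatorname{Spec}\mathbb{R}[x]$ are the maximal ideals $$(x-a)\ \text{ with } a\in\mathbb{R}, \qquad (x^2+bx+c)\ \text{ with } b^2-4c<0,$$ with residue fields $\mathbb{R}$ and $\mathbb{C}$ respectively; a maximal ideal of the second kind corresponds exactly to a conjugate pair $\{z,\bar z\}$ of non-real complex numbers, i.e. to a Galois orbit in $\overline{\mathbb{R}}=\mathbb{C}$.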
-
9
I definitely second this. I understood $\mathrm{Spec}\mathbb{R}[x]$ when someone said that it's just $\mathbb{C}$ modulo conjugation. And I think it's fair to assume that people learning schemes have some Galois theory from grad algebra (just like you assume they know what a topological space is, because if they don't, things have gone horribly awry) – Charles Siegel Jun 17 2010 at 15:30
1
David, great ideas. I currently state the Nullst. early on, but because the proof will fall out of things later (Noether normalization), I hold off proving it. I definitely do the Galois-conjugate thing --- if you don't know what the points are of a space, you can get very confused! (Aside: it is interesting that we can quickly describe the primes in $\mathbb{C}[x_1,\dots,x_n]$ for $n=0,1,2$, but for $n=3$ there are weirder sorts of primes --- and there is no way of not thinking of them in terms of geometry.) – Ravi Vakil Jun 17 2010 at 23:50
5
@Harry: the suggestion to view Zariski's Lemma as a corollary of Zariski's Main Theorem sounds dangerously close to (if not actually) being circular, since I can't imagine developing algebraic geometry even remotely far enough to get to ZMT without already knowing the Nullstellensatz (which is more or less equivalent to Zariski's Lemma, depending on how one defines "Nullstellensatz"). – BCnrd Jun 18 2010 at 7:28
I found differentials hard to understand when I learned this material. Here are two things that helped me which I think are not in your notes:
(1) The description of the Zariski tangent space to $X$ at $x$ as those Hom's from $\mathrm{Spec} \ k[\epsilon]/\epsilon^2$ to $X$ which take $\mathrm{Spec} \ k$ to $x$. This is much closer to my physical intuition for a tangent space than the $(\mathfrak{m}/\mathfrak{m}^2)^{\vee}$ definition. It is also an early example of the power of using rings with nilpotents. Building the vector space structure from this definition is especially pretty.
(2) A careful discussion of the relationship between the infinitesimal objects, i.e. the elements of the Zariski tangent and cotangent spaces, and the global objects, i.e. derivations and Kahler differentials.
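To make (1) concrete, here is a small worked example, added for illustration and not part of the original answer. Take $X=\mathrm{Spec}\ k[x,y]/(y^2-x^3)$ and let $x_0$ be the origin. A map $\mathrm{Spec}\ k[\epsilon]/\epsilon^2\to X$ sending $\mathrm{Spec}\ k$ to $x_0$ is a ring map $k[x,y]/(y^2-x^3)\to k[\epsilon]/\epsilon^2$ with $x\mapsto a\epsilon$, $y\mapsto b\epsilon$; the relation becomes $b^2\epsilon^2-a^3\epsilon^3=0$, which holds automatically, so every pair $(a,b)$ gives a tangent vector and the tangent space at the cusp is $2$-dimensional. This matches the $(\mathfrak{m}/\mathfrak{m}^2)^{\vee}$ computation, since $y^2-x^3\in\mathfrak{m}^2$.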
-
8
Harry: that's farther than I can manage to get in a year without losing people, although it is possible to foreshadow the dualizing complex (as a side remark) and even essentially to define it. – Ravi Vakil Jun 17 2010 at 23:45
Let me begin with something essentially obvious: students should learn to work with non-closed points. In practice, this means learning how to use them to simplify life.
Here are some suggestions as to how to do that:
(a) Explain that coherent sheaves are generically free, and use this to prove things like generic smoothness of varieties (by applying it to the tangent sheaf).
(b) Explain carefully the proof of Chevalley's theorem that the image of a constructible set is constructible. (Note that this latter result has the advantage of being extremely useful, and also has likely not been covered in any form in a previous varieties course.)
Note also that one can deduce the Nullstellensatz from this result, which kills two birds with one stone. (See the discussion in this answer, and the notes of Mumford and Oda that are linked there.)
(c) one can beef up (a) by looking at say a fibration $X \to Y,$ and then looking at fibres over a generic point of $Y$, and then extending information to a n.h. of that point. Incidentally, it was the desirability of this kind of argument that first led Zariski to point out the importance of studying algebraic geometry over non-algebraically closed fields. For him, these non-algebraically closed fields were not $\mathbb Q$ or $\mathbb F_p$, but rather function fields of varieties (with the initial ground field being a good old fashioned algebraically closed field).
Examples like this last one can really help demystify not just the role of generic points, but also the role of non-algebraically closed fields. (In particular, they show that the latter are not just of interest in number theory. Zariski was certainly not a number theorist!)
-
2
Just to add to Emerton's remarks, it good to note that the scheme-theoretic results on open/closed/constructible sets are stronger than variety counterparts. If one knows a constructibility result for a subset of an integral scheme of finite type over an alg. closed field (e.g., locus defined by conditions on fibers of morphism) and can prove generic point lies in there then a classical Zariski-dense open locus of classical points are in there too. This is what underlies useful instances of the "spreading out" principle in (c). This comes up for rigid-analytic vs. Berkovich spaces too! – BCnrd Jun 18 2010 at 14:49
3
I agree very much; after taking the class, I still wish I understood generic points better. They make proofs so much easier, in a way unique to algebraic geometry, but I still don't feel that I have a good enough handle to use them. One of my main points of confusion: how should I think of the generic fiber? stalk at a generic point? What's the difference? Also, I still feel unsure of how good the analogy between properties at the generic point and generic properties from differential topology is. – Ilya Grigoriev Jun 18 2010 at 19:23
3
Ilya, drop by my office (or Ravi's) some time and we can clear it up. The name "generic point" is very much deserved. As to why, this is one of those things which is extremely well-developed in EGA and almost invisible in Hartshorne. – BCnrd Jun 18 2010 at 20:42
3
Ravi, what helps is to show the "spreading out" principle in practice with multiple examples that can be seen by hand, and mention of others which are clearly much deeper (in the sense of not being obviously captured by equations) but still reassuring to know about. And also counterexamples (such as irreducible vs. absolutely irreducible fibers, as you know). The example of vanishing of a function is a bit too simple to convey the flavor of how such things go. A bare-hands proof of generic freeness for coherent sheaves via linear algebra at the generic point is a better first example. – BCnrd Jun 19 2010 at 0:44
1
I'm happy with that example; it may as well be a second example, and still mention the first. – Ravi Vakil Jun 20 2010 at 20:48
Since in 2007-2008 you evoked [ Class 24, §1.8, The problem with locally free sheaves] the equivalence between locally free sheaves and vector bundles on a scheme, the following point, potentially confusing for a beginner, could be mentioned.
A locally free sheaf $\mathcal E$ has a sheaf fibre (that is, a stalk) $\mathcal E_x$ at $x$ but also a vector fibre $\mathcal E[x]=\mathcal E_x \otimes _ {\mathcal O_x} k(x)$. The fact that tensoring is not exact explains the paradox that a locally free subsheaf of a locally free sheaf does not yield a sub-vector bundle of a vector bundle in the above equivalence. The contrasting notation $\mathcal E[x]$ versus $\mathcal E_x$ (that I learned from German mathematicians) may help clarify this subtle point.
I am quite aware that there is nothing grandiose in this technical suggestion, but little points like those can be quite frustrating when learning a new subject.
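A standard example that makes the paradox explicit (added here for concreteness; it is not part of the original answer): on $\mathbb{A}^1_k=\mathrm{Spec}\ k[t]$, multiplication by $t$ gives an injective map of locally free sheaves $\mathcal O\to\mathcal O$, and it is injective on every sheaf fibre $\mathcal O_x$; but after tensoring with $k(0)$ the induced map on vector fibres $\mathcal O[0]\to\mathcal O[0]$ is zero. So the locally free subsheaf $t\mathcal O\subset\mathcal O$ is not a sub-vector bundle of the trivial line bundle.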
-
2
This is actually fantastic. (I think the devil in algebraic geometry really is in the details, and things like this make a big difference in assuaging confusion.) It is indeed confusing that the subscript $x$ is used for both stalks and for fibers. Am I understanding you correctly that you would propose that if $\mathcal{F}$ is a quasicoherent sheaf on $X$, that $\mathcal{F}[x]$ be used for the fiber over a point $x \in X$? This would prevent a lot of confusion. – Ravi Vakil Jun 18 2010 at 0:08
3
I always write $\mathcal{E}(x)$ for the "locally ringed space fiber", and (naively) thought it was universal notation nowadays (not sure where I got it from). I agree that a clear distinction of fiber and stalk can be a puzzler early in the education process. Sadly, Godement's book uses the exact same notation for stalks! (Neither Godement nor EGA seem to have a special notation for locally ringed space fiber: each explicitly writes out the monstrosity $\mathcal{E}_x/\mathfrak{m}_x \mathcal{E}_x$ every time it is needed.) – BCnrd Jun 18 2010 at 0:43
2
$\mathcal F|_x$ is another possibility which avoids confusion with twists and shifts. – a-fortiori Jun 18 2010 at 6:58
11
Ravi, the reason I advocate $\mathcal{F}(x)$ for the fiber at $x$ is that just as one writes $s_x$ for the $x$-stalk (in $\mathcal{F}_x$) of a local section $s$ of $\mathcal{F}$, it is nice to write $s(x)$ for the image of $s_x$ in $\mathcal{F}(x) := \mathcal{F}_x/\mathfrak{m}_x \mathcal{F}_x$ because that is suggestive of "evaluating a function" (which literally is what happens when $\mathcal{F} = O_X$!). In other words, the $\mathcal{F}(x)$ notation reminds one of evaluation, which is the difference between "stalk" and "fiber". We can battle it out further in your office later today... – BCnrd Jun 18 2010 at 14:25
3
Ravi, please do not write $\mathcal{O}(x)$ for residue field; I hope it was meant as a joke. Stick with $k(x)$ or $\kappa(x)$ for the residue field. I don't see why getting rid of $\kappa(x)$ is viewed as an "advantage". It is the standard notation in EGA, for example, and sure looks more like a field than $\mathcal{O}(x)$ (as if one knows what a field "looks like"). – BCnrd Jun 18 2010 at 20:38
Toric varieties. They're so easy to define and work with, and to organize examples around. Like blowing up a scheme at a fat point, or blowing up in different orders, or big but not ample line bundles, ... Of course there's the danger that they'll give people the wrong idea about what general schemes are like, but a few curves-of-high-genus examples should help with that.
-
3
This is only indirectly related (because "varieties not schemes"). How can a complete variety fail to be projective? I only understood this once I learned toric varieties. – Victor Protsak Jun 18 2010 at 0:13
1
Victor, I think that's quite related, and a good point. I currently do an easy surface example (again requiring flatness), but toric varieties would provide a very pleasant other example. – Ravi Vakil Jun 18 2010 at 0:18
3
I like taking an octahedron, splitting it into northern and southern hemispheres glued along a neighborhood of an equatorial square, and stretching out the top vertex into an interval. Then when you try to glue these two together, the geometry of the top half wants the equator to be a rectangle, but the bottom half requires it to be a square. Hence any line bundle will be degree 0 on the P^1 corresponding to this new top edge. – Allen Knutson Jun 18 2010 at 13:41
I'm not sure if this is the kind of answer you're looking for but...
There is a very useful and simple lemma on sheaves which is (I think) never explicitly stated in Hartshorne. It is Proposition I-12 of Eisenbud-Harris. I think you should definitely make sure to explicitly state this. Sheafification was very scary and mysterious to me until I learned this lemma.
-
2
The fact that this is not stated in Hartshorne is one of the reasons why his construction of the structure sheaf of an affine scheme is so ad-hoc. – Harry Gindi Jun 17 2010 at 17:59
6
My first (limited) exposure to schemes was in a course from Joe Harris which used a draft copy of the book with Eisenbud, and most of which went completely over my head. But when I eventually "graduated" to reading Hartshorne I knew to ignore his mysterious construction of the structure sheaf and followed the "B-sheaf" method from Eisenbud-Harris instead, with the help of Exercise 23 in Chapter 3 of Atiyah-MacDonald. – BCnrd Jun 17 2010 at 18:25
4
Wow, a reference for that lemma! Thanks, Kevin. – David Speyer Jun 17 2010 at 18:48
6
I think Chapter I of Eisenbud-Harris is really great in general. – Kevin Lin Jun 17 2010 at 21:12
5
On a related note (also related to Allen's discussion of schemes as gluing together affines): I have found that using the fact that schemes are affines glued together, rather than just ringed spaces, makes showing facts we care about regarding quasicoherent sheaves much easier. In particular, in graduate school, it was a revelation when a visiting grad school (who now goes under the pseudonym "BCnrd") pointed out to me the simple fact that the intersection of 2 affines is a union of affines simultaneously distinguished in them both. This turned a host of Hartshorne ideas from hard to easy. – Ravi Vakil Jun 18 2010 at 13:38
Victor Protsak suggests this, and I'll endorse it: the careful construction of the Grassmannian. This is a good example for 3 reasons. (1) It is extremely important. (2) It is a situation where it is both natural to work in local coordinates and with a global projective embedding, so students can practice transforming between the two perspectives. (3) It is small enough to do in full detail, but it usually isn't.
Ideally, this would include proving that the Grassmannian represents the functor of flat families of subspaces of a vector space. (Or quotient spaces, whichever you prefer.)
-
3
Sorry, but I do not understand how the Grassmannian motivates cohomology and base change? – Kevin Lin Jun 18 2010 at 15:15
1
Sorry Kevin, I should have been more explicit. First you understand the Grassmannian in terms of quotients of a free bundle, prettily by hand. But you might also want to think of it geometrically, as parametrizing $\mathbb{P}^k$s in $\mathbb{P}^n$, i.e. a special case of a Hilbert functor: a family is a closed subscheme of $\mathbb{P}^n$ over the base, flat, whose fibers are (linear) $\mathbb{P}^k$s. How might you turn it into the linear algebra problem? By pushing forward the restriction map for $\mathcal{O}(1)$. How do you know that the resulting things are locally free? Coho + base change! – Ravi Vakil Jun 18 2010 at 19:25
I haven't seen it mentioned yet, so let me suggest it (and I'll be curious to hear people's responses): the theorem on formal functions.
In suggesting this, I am certainly taking full advantage of the fact that we are supposed to be discussing a year long course.
Let me now give justification (in case it is needed; I don't know how others will feel about this suggestion).
First, my own philosophy is that an algebraic geometry course, even one focusing on the theory of schemes, should be about geometry. So I think that it is important to discuss some geometry, including the basic theory of curves (which is very pretty from the schemes point of view, since one gets the interaction between a more geometric picture, and the more valuation-theoretic function-field picture, by studying the interaction between the generic point and the closed points).
But the theory of curves is not enough concrete geometry for one year; I think some discussion of surfaces adds an enormous level of geometric understanding, just because the theory of surfaces is much closer to the theory of arbitrary dimensional varieties than the theory of curves is. At the same time, by doing some stuff with surfaces, one does a valuable service for many students in the class: pure geometers will certainly need to know this, but so will arithmetic geometers/number theorists, because a curve over a Dedekind domain behaves like a surface, and one studies bad reduction of curves using ideas from the theory of surfaces (blowing up, minimal models, etc.). So even if one doesn't touch directly on the particularities of degenerations of curves (which, however, is also a topic of very general interest and importance!), by saying something about surfaces, one prepares the way.
Hartshorne Ch. V gives a really nice treatment of many of the basics of surface theory, and the main tool he uses, beyond all the generalities of cohomology and sheaves, is the theorem on formal functions: both in its application to Zariski's main theorem, and to the proof of Castelnuovo's criterion. And these are both beautiful results, the kind of results that would make a good capstone to a one year course. (And they are also basic algebraic geometry knowledge --- the kind of things that you would hope students know after taking a year of the stuff!)
-
2
Ding ding ding! Bells went off in my head when I read this, as this is a topic I very much wanted to hear people's opinions about. Matt, I agree with you that this is important, and in particular through ZMT and Castelnuovo's criterion. I also agree that this is fair game because we are talking about a year-long course, and this would be near the end. So I think it should be in if at all possible. (Unlike, perhaps, formal schemes, which no one has brought up so far.) cont'd – Ravi Vakil Jun 20 2010 at 21:01
3
Ravi, proof of thm on formal fns in EGA is simpler than version in Hartshorne, and it works directly in the proper case (no mucking around with $\mathcal{O}(n)$'s, etc.). The argument is due to Serre, and it gives a stronger result than what is claimed in Hartshorne in the projective case. I wrote up a handout on it for the course I taught with Matt's help way back when. I can send you the .pdf for it if you don't have it buried in a filing cabinet somewhere. (As an aside, I think it would be a mistake to try to introduce formal schemes, though one could say what the point of it is.) – BCnrd Jun 20 2010 at 23:05
1
@Ravi There's a resource you may or may not be aware of, and I've posted it at Math Online: last year, Michael Artin taught a wonderful-looking first course at MIT on algebraic geometry using his own notes and William Fulton's ALGEBRAIC CURVES. The only prerequisites were a year-long algebra course based on the forthcoming second edition of his classic text. Not only are the notes themselves excellent as a model for what that preliminary "classical" course before yours should look like, he has some very insightful comments there on the teaching of AG. I think you'll find it quite useful. – Andrew L Jul 16 2010 at 20:40
1
You do not **need** the theorem on formal functions to prove Castelnuovo's criterion. Smoothness of the target can be proved by considering the multiplicity of the image point. However, this approach to Castelnuovo's criterion is very close in spirit to Artin's "Algebraization of formal moduli II", cf. also Appendix B.3 of Hartshorne. – Jason Starr Jul 28 2011 at 17:37
Dear Ravi, here is a small suggestion. I think one might emphasize as soon as possible that the closed subschemes of an affine scheme $\operatorname{Spec} A$ exactly correspond to the set of ideals of the ring $A$. (I don't know if this is deep or tautological: probably both.) This allows one to illustrate many of the strange and frightening features of scheme theory as compared to tamer geometric structures (that subschemes are not determined by subsets, that functions are not determined by their values, etc.) without adding the complications due to sheaves and gluing. I remember it took me a long time to realize this and when I did I lost some of my fear of schemes.
-
2
I fear that much of the apparent pathology in the theory of schemes comes from the presentation of schemes as locally ringed spaces rather than their presentations as sheaves of sets on the category of affine schemes (and therefore their presentations as the gros slice toposes they represent). For example, the reason the fibre product doesn't make any sense at all (even on the underlying set of the locally ringed space) is that Sch has a faithful (and full?) embedding into LRS. When we compute the fibre product of schemes as abstract sheaves, we compute it pointwise, which gives a right answer – Harry Gindi Jun 17 2010 at 14:59
26
Harry, if you're not sure whether the functor from schemes into locally ringed spaces is full, you should step back from the etale topos and related formalism and learn more basic things better (and be more humble for the present time about offering advice on how to think about or teach the subject). – Boyarsky Jun 17 2010 at 15:56
20
Harry, you misunderstood my point: the fact that you needed to go back and check this basic fact from the beginning of the theory means you had not internalized it, and so is a reflection of a certain lack of experience on your part (it is one of the first things that one should learn in the theory of schemes, to connect it up with other geometric theories, etc.). You should consequently be more reserved in offering advice to others on how to teach or think about it. It is akin to a real analysis student who needs to go back and check whether or not the Intermediate Value Theorem is true. – Boyarsky Jun 17 2010 at 16:44
12
Harry, it was not intended as a rebuke (which I interpret as a somewhat negative word). It was simply advice to be more reserved, in view of your somewhat limited experience in this area of mathematics. By all means discuss these ideas with your classmates, professors, etc. Just be less energetic about making suggestions on educational aspects until you have had more time to see where it all goes and how it is used and how more of the deep theorems are proved. – Boyarsky Jun 17 2010 at 17:02
12
Then I accept your advice. Thank you. – Harry Gindi Jun 17 2010 at 17:17
Generic fiber vs. general fiber vs. geometric generic fiber.
-
4
vs. very general fiber vs. special fiber vs. closed fiber vs. geometric special fiber vs. geometric fiber vs. complex fiber vs....(and similarly with "point" replacing "fiber", though "point" also opens up an entirely different issue...) – BCnrd Jun 18 2010 at 23:10
7
Our students aren't getting enough fiber in their diets, is that your point, Allen? LOL – Andrew L Jun 19 2010 at 6:19
I actually think that the Hilbert scheme should be mentioned (and, if possible, proved to exist and discussed) as early as possible. It serves as a good example of a moduli space, and it exists! Plus, the infinitesimal study of the Hilbert scheme allows some deformation theory to be discussed (at least, the deformations of projective schemes inside projective space) which also helps explain, algebro-geometrically, what the normal sheaf really controls. Add to this the fact that a lot of research relies on moduli spaces these days (In particular, I know that people care about Hilbert schemes of points, and, if some GIT for PGL can be covered, it'll let you actually construct $\mathcal{M}_g$, which finishes the classification of curves that's given in chapter 1 of Hartshorne, though this is a bit more.)
Because you'll be wanting things fundamentally scheme theoretic, the first part of Kollár's "Rational Curves on Algebraic Varieties" might be a good reference for this stuff.
-
5
Do Grassmannians and Hilbert schemes of points count as "baby moduli spaces"? – Victor Protsak Jun 18 2010 at 0:09
I think analytification and GAGA should be mentioned.
Subsequently I think that it should be mentioned that Serre duality can be viewed as a refinement of Poincaré duality.
-
9
Kevin, this opens up a can of worms, to nail down the compatibility of the coherent trace on ${\rm{H}}^n(\Omega^n)$ and the topological trace on ${\rm{H}}^{2n}(\mathbf{C})$ with respect to the "degeneration isomorphism" between them. I had a long conversation about it with Serre, and he was very disappointed with the literature. So when "mentioning" to someone that Serre duality is a refinement, one should also mention that there is real work needed to nail down the compatibilities. There's an old .pdf file about it in the "duality" part of my webpage; quite tricky to do rigorously. – BCnrd Jun 18 2010 at 7:11
2
Thank you for your comments and the reference, Prof. Conrad! – Kevin Lin Jun 18 2010 at 22:47
Why the Spec functor is a natural thing; this is not so clear (at least to me) from the definition in Hartshorne. Bas Edixhoven made me see the light by saying that Spec is adjoint to the global sections functor from locally ringed spaces to commutative rings: `$\mathrm{Hom}_{\mathrm{Rings}}(A,\Gamma(X,{\cal O}_X))\cong\mathrm{Hom}_{\mathrm{LRS}}(X,\mathrm{Spec}(A))$`. Exercise II.2.4 of Hartshorne asks you to prove this with locally ringed spaces replaced by schemes, but this is less clarifying.
-
1
Anton Geraschenko's answer here is useful: mathoverflow.net/questions/731/… – Kevin Lin Jun 19 2010 at 17:30
1
Interesting follow-up: this question suggests that Spec is the right thing because "locally ringed spaces" are the right kind of geometric space. But people may not be initially convinced that locally ringed spaces are natural. Perhaps Spec could be defined first (hence your question still stands), and the locally ringed spaces next? It is a bit of a chicken-and-egg thing. (I'm undecided/agnostic about this.) – Ravi Vakil Jun 20 2010 at 20:57
1. (Maybe this is a standard thing to do already but I think it's still worth mentioning:) A proof of Bezout's theorem via Hilbert polynomials of subschemes of $\mathbb P^N$.
Of course, this isn't fundamentally different than the proof in Hartshorne I.7, but in scheme language it is much much more natural, and might be the best motivation for allowing nilpotents in the structure sheaf.
2. This is extremely vague, but it's something I wish someone should have told me 5 years earlier: Since algebraic geometry is so rigid (few polynomials compared to many differentiable functions), we often have to deal with singularities. E.g. in many cases we can't make intersections transversal, or all interesting families (of certain types) have singular fibers. But since algebraic geometry is so rigid, we also have fairly good tools dealing with singularities, or with degenerate cases.
-
2
I think point 2 here is an extremely important one to make. One doesn't have to give a song and dance about it, but it should be clearly stated at the beginning of any algebraic geometry course. To students who don't yet have much experience with other kinds of geometry, it may not mean so much at the beginning, but as they mature, it will hopefully stay with them as a guide to the difference between algebraic geometry and other geometries. For those who are used to other geometries and want to learn algebraic geometry, I think it is one of the first things that should be pointed out. – Emerton Jul 24 2010 at 0:27
1
It's also a nice heuristic to explain why methods like degenerating techniques, (virtual) localization, ... are so powerful (or why flatness is such an important notion). – ABayer Jul 24 2010 at 2:05
Stalk-local detection of irreducibility on locally Noetherian schemes, which I prove directly here with no primary decomposition tricks. It helps with a lot of exercises, and intuition.
Sheafification of base-presheaves (presheaves defined only on a base of open sets). I see from your TOC that you cover the unique extension of base sheaves to sheaves as per Kevin Lin's answer (E-H's Proposition I-12).
When I took Arthur Ogus' algebraic geometry class, he was very insistent about teaching us this, and it really paid off for the remainder of the course, particularly in exercises. It categorically exclaims (pun intended) the credo always start with the affine opens, so one sees explicitly how special and critical they are to the theory.
The sheaf of meromorphic functions $\mathcal{K}_X$ on $X$ can be defined by sheafifying the naive base-presheaf $\mathcal{K'}(U)=Frac(\mathcal{O}(U))$ on the base of open affines. This formula doesn't define a base-sheaf on affines, and as Georges Elencwajg and BCnrd explain here, it doesn't even define a presheaf when applied to arbitrary opens. I suggest at least mentioning these three facts, to save people from re-wasting the time that I and many others have in wondering what the resulting sheaf looks like.
Locally representable means representable, i.e. if $F:Sch^{op}\to Set$ is a sheaf when restricted to a base of (Zariski) opens on every scheme, and $F$ has a covering by representable open subfunctors $F_i$, then $F$ is representable (very much along the lines of EGA 1 (1971), Chapter 0, Proposition 4.5.4). I advocate this because the work that goes into the proof is essentially the same work we inevitably do to prove fibered products of schemes exist, so it gives fibre products as a special case, but also offers up a rigorous-but-quick route to other constructions like global Spec and global Proj.
The general definition of quasicoherence and coherence for modules on locally ringed spaces / non-locally Noetherian schemes... not as a gratuitous generality, but as a foreshadowing/reminder that presentations, not just surjections, are what make coherence work.
Basic Dedekind domain theory, along the lines of Lang's Algebraic Number Theory, chapter 1. I found curves and their divisors — even in characteristic 0 — impossible to understand until I read that.
Quasiseparatedness is something I'm glad to see you including, because using it explicitly is the key to a lot of proofs, so having it in mind as a word helps me remember how to do them.
Your affine communication lemma is a must-have, for anyone else reading this answer!
-
1
Thanks, I just edited the answer to clarify... I'm talking about sheaves on the category of schemes with the Zariski toplogy (see my revised statement). – Andrew Critch Jun 19 2010 at 17:19
Serre's criterion for normality, the valuative criterion for normality, normality vs. S2, maybe even seminormality.
Added by request: here's how I think about Serre's criterion. Call a rational function pretty good if it doesn't blow up in codim 1. Call it very good if it's actually well-defined in codim 1. Then a normal space is one for which pretty good rational functions are actually functions, whereas an S2 space only asks that very good rational functions are actually functions. To see the difference, look at x/(x+y) on {xy=0}, to see that the latter is not normal despite being S2. So how can normality fail -- how can f's value be ambiguous in codim 1? If there are 2 ways to approach some divisor -- non-R1ness.
-
4
Somewhat related to this: is the notion of Cohen-Macaulay something people really should see early on? Advantages: it doesn't take long. You get to see the Koszul complex. Then you get to see that the normal sheaf to a local complete intersection is locally free. There is a handy flatness theorem (very roughly, a map from CM to nonsingular is flat iff the fibers are equidimensional; and a flat map to a nonsingular has CM source). Then S2 could go here, perhaps later than it needs to be. Disadvantage: yet more definitions to clog up your brain. – Ravi Vakil Jun 18 2010 at 0:05
3
One of the things I regret is that none of my teachers ever taught me what "Cohen-Macaulay" means. Depending on what you do, it can show up pretty much first thing as you set out into the literature, with the assumption that you know it cold already. – Charles Siegel Jun 18 2010 at 4:16
3
@Charles: wouldn't such an experience simply provide motivation to go back to the commutative algebra books or elsewhere to learn about it (if it wasn't learned when doing Serre duality)? I do agree that CM is a very good notion to see in a course, if time permits. But virtually everything I know about modern algebraic geometry I had to teach myself, often in the service of trying to understand other things which I cared about. But that's part of doing math: having to struggle with learning stuff on one's own, for which external motivation is always a good thing. – BCnrd Jun 18 2010 at 7:18
4
BTW here's how I think about Serre's criterion. Call a rational function pretty good if it doesn't blow up in codim 1. Call it very good if it's actually well-defined in codim 1. Then a normal space is one for which pretty good rational functions are actually functions, whereas an S2 space only asks that very good rational functions are actually functions. To see the difference, look at x/(x+y) on {xy=0}, to see that the latter is not normal despite being S2. So how can normality fail -- how can f's value be ambiguous in codim 1? If there are 2 ways to approach some divisor -- non-R1ness. – Allen Knutson Jun 20 2010 at 1:19
1
@Emerton: normal => S2 => hyperplane sections are S1 => if each component has a reduced point, then the hyperplane section is reduced. Which I think is pretty cool. I used a similar implication in arxiv.org/abs/math/0306275 , namely that a generically reduced complete intersection is in fact reduced (really, that CM => S1). – Allen Knutson Jun 21 2010 at 2:44
To prepare well for what comes after a first course, a more extensive discussion of étale morphisms than Hartshorne gives should be part of such a course, in my opinion.
-
25
Oh my. Harry, I don't know what has led you to exert so much time on higher topos theory in lieu of getting more experience with schemes first (the motivating problems, the geometry, the insights from the etale topology, etc.), but you would benefit from seeking more guidance from experts at UM. The (very geometric!) inspiration for algebraic spaces comes from Artin approximation. Even Jacob Lurie learned the basic theory of schemes thoroughly (in a course run by me and Emerton, with no topoi but lots of balance of theory and examples) before going on to the etale site and stacks. Good luck. – BCnrd Jun 19 2010 at 23:52
If you decide to teach a more arithmetically flavoured algebraic geometry, students should be made aware that schemes over a ring $A$ are stranger than they might think.
For example, $A$-rational points of $\mathbb P^n_A$ are far from being given by non-zero $(n+1)$-tuples of elements of $A$ modulo multiplication by invertible elements; they are described by rank-$n$ projective direct summands of $A^{n+1}$. More generally, morphisms to projective space are described in terms of line bundles and their sections, and might be seen as an interesting illustration of these concepts.
Incidentally, a sufficient reason for introducing a little arithmetic geometry is to have the pleasure of reproducing Mumford's incredibly enlightening drawing of the arithmetic surface $\mathbb A^1_{\mathbb Z}$ (in his Red Book), with its points, each having a different personality, and its curves. (I concede that although Mumford's picture is beautiful, the artistic competition was not so great when he wrote his notes: the EGA's strongest point is not its illustrations...)
-
4
Georges: the common "surprise" about points of projective space or of affine space minus 0-section valued in a ring (or scheme) has always seemed best to explain by analogy with how the same issue comes up in differential geometry, or even alg. geom. using only varieties and not schemes. The meaning of a map from a manifold to real projective $n$-space works out exactly as with schemes, and likewise for affine space minus the 0-section, so it is good to stress to students that none of this is peculiar to working with schemes or is a phenomenon special to the "arithmetic" case. – BCnrd Jun 18 2010 at 13:21
7
I strongly disagree with you about EGA. Every single picture in EGA is incredibly enlightening, and beautifully rendered. :-) [Anyone who has looked at EGA will realize I'm actually agreeing with Georges, but that my "Every single picture" comment is also true...] – Ravi Vakil Jun 18 2010 at 13:53
2
I am very proud of your endorsement, Ravi: thank you. Contrariwise, my heart missed a beat when I read your sentence "I strongly disagree with you about EGA" but fortunately, reading on, I realized that the rich resources of the empty set were coming to my rescue :-) – Georges Elencwajg Jun 18 2010 at 14:15
Perhaps this should be attached to Charles Siegel's answer about the Hilbert scheme, but some concrete examples of degenerating flat families could be helpful. Some easy examples include conics turning into a fat line, skew lines colliding to produce an embedded point, and pairs of points on a line colliding to become fat. There are some nice relationships between these objects and families of constant coefficient linear differential equations via spectral schemes, e.g., the colliding points example says something about the behavior of solutions to $(\frac{d}{dz} - a)^2 - \lambda^2 = 0$ as $\lambda$ hits zero.
-
I expressed my frustration with Hartshorne's book a bit here:
http://mathoverflow.net/questions/12436/motivation-for-concepts-in-algebraic-geometry
The point is that many definitions in algebraic geometry are basically obtained by taking definitions from topology or algebra, translating them into "purely category theoretic language" and then using that definition as a substitute in the category of schemes.
In particular I unravel the definition of a separated morphism:
"A seperated morphism of schemes is one where the image of the diagonal is closed."
If we just replace "schemes" with "topological spaces", then this property for spaces says (after a little definition chasing)
"Any two distinct points which are identified by the morphism can be separated by disjoint open sets in the domain"
Thus a space is Hausdorff as a topological space iff the unique map to the one point space is separated. Before I worked through this I had no real reason to believe that separated morphisms were a natural concept. Why don't people ever talk about the topological analogue?
Another point of much confusion for me was the definition of derived functor cohomology. Why should we care about injective resolutions? Anton gives a great answer here:
http://mathoverflow.net/questions/1151/sheaf-cohomology-and-injective-resolutions/1165#1165
Anton's line of thought is also beautifully developed in Gunter Harder's book "Lectures on Algebraic Geometry 1". The quick and dirty version is that cohomology should have nice properties (ses gives rise to les, etc) and acyclic resolutions compute cohomology. Hey! Injective objects are always acyclic (this is reasonable because they make ses's split). Thus injective resolutions are a nice generic thing to use.
-
1
I agree with your frustration; certainly students tell younger students this fact. (And I also included it as an exercise.) I don't think Hartshorne's discussion of separatedness is representative; the description in EGA is clearer (although of course sans motivation). I haven't checked, but perhaps only Hartshorne (among the major references) uses the valuative criterion to prove things that can be more easily proved by hand. – Ravi Vakil Jun 18 2010 at 18:48
1
On a related note, I conjecture that the reason that recent generations learn the valuative criterion so early is that Hartshorne does this. Inflammatory comment: It is not clear to me that anything one might reasonably see in a first course can be more easily proved with the valuative criterion than directly, taking into account the cost of proving the valuative criterion. Any counterexamples? (Although certainly the statement is worth seeing early. But the proof can go, to make room for other things.) – Ravi Vakil Jun 18 2010 at 18:51
2
OK Ravi, I'll bite: doesn't the proof of universal closedness of projective space (over Z) via val. crit. demonstrate the elegance of functorial criteria (in comparison with elimination theory)? There's also something likewise cool about using it to prove that the map from Grassmannian to projective space is closed immersion (esp. comparing it against the more explicit traditional proof, which is also concrete and worth seeing). Basically, it provides simple examples of the power of functorial criteria (if not overdone!). That is surely something to be appreciated in a first course. – BCnrd Jun 18 2010 at 19:17
1
@Ravi For the record, from most of the AG graduate students I've spoken to who have both been in your class and are using the posted notes online, your notes are proving an invaluable resource for them in the learning of schemes and modern AG. Many of them in fact have dumped Hartshorne in favor of your notes. Their continued evolution and availability on the web have gained growing grass-roots support. I think I can speak for all of them when I say this ongoing project is a noble undertaking and its continuation is fervently hoped for. – Andrew L Jul 16 2010 at 20:46
1
On the derived-functors versus Cech-cohomology-only question: I think an interesting middle ground is to do cohomology only for quasi-coherent sheaves on separated schemes. One can construct injective resolutions via an affine covering and I-twiddles of injective modules I for each open set. Then one can prove e.g. the agreement with Cech cohomology without ever using words like "flasque" or "$O_X$-modules". (My feeling is that at least some of the obfuscation comes from the fact that Hartshorne's approach leaves the category of quasi-coherent sheaves before taking injective resolutions.) – ABayer Jul 24 2010 at 2:33
I am surprised that no one mentioned this so far; I am only imagining that everyone thought it so natural that it escaped their mind.
Most "standard courses" would be following Hartshorne's book, I assume. It is a great loss that this book does not mention the "functor of points" view at all. It would maybe take 10 or 15 minutes to state and prove the Yoneda's lemma, and a little more time to mention the functor of points and the advantage of this point of view for applications to arithmetic geometry(points with values in a certain ring, base change, etc.), and more importantly for moduli problems. One could also give a definition of a fine moduli space and coarse moduli space, and as examples just mention the the moduli space of curves with marked points(but without proofs, of course).
-
A small suggestion: the deformation to the normal cone is a nice construction that I would have liked to see in a first course. It illustrates the use of blow-ups, the degeneration of a family with constant fibers (a highly non-obvious concept the first time you see it), and how important intuitions from differential geometry - tubular neighbourhoods - have a non-trivial translation to algebraic geometry.
-
In reference to why the spec functor is a natural thing, (low tech answer): isn't this essentially what the nullstellensatz says? Or rather it generalizes the nullstellensatz. I.e. spec is a good thing because it lets you make a construction that gives you some "geometry" associated to a given ring.
Perhaps the main thing beginners should learn about schemes is that they are needed. I.e. schemes should be motivated. In books which try to restrict to varieties such as Shafarevich's BAG, schemes still raise their heads sometimes unnoticed. E.g. Shafarevich states in chapter I sections 4.4 and 6.4 that the set of hypersurfaces of given degree in a given projective space are parametrized by a projective space, which is not true unless one considers more than the variety defined by a polynomial.
If one is guided on what to include by the section headings of chapter 2 of Mumford's red book, in addition to fields of definition and the functor of points, one finds there a section called specializations, which also contains one of his exotic illustrations.
Even in a classical book like Walker's algebraic curves, schemes arise when studying singularities. The tangent cone to a cuspidal plane curve requires more structure than a variety. Even the fundamental theorem of algebra does not count the roots of a polynomial correctly unless multiplicities are considered.
Some of these examples require only cycles or divisors rather than schemes, but more general tangent cones should provide more general schemes. One can also consider the problem of varieties varying in families and try to fill in something over the limit point of the parameter space. Sometimes non reduced objects will force themselves on us.
The best motivation for differentials may be learning the classical Riemann Roch theorem for curves.
Of course this is probably obvious and taken for granted by most people, but it seemed worth mentioning as a guide to choosing first examples of schemes. I.e. we should not take schemes for granted and choose what to teach based solely on the needs of experts, but we should assume that schemes may be quite strange to beginners and spend some effort showing that they are natural.
-
I think that base change is a very important and subtle idea which should certainly be included in a first course. In particular, one should discuss properties that are stable under base change and those that are not.
In a similar vein, in discussing cohomology, the difference between the coefficients of the motive and the base should be emphasized. This was confusing to me as I learned the subject.
-
1
Well, I think that I'll elaborate with an example. If $X = \operatorname{Spec} F$ is a variety over $\mathbb{Q}$ and $F$ is a number field then $H^0_B(X(\mathbb{C}),\mathbb{Q})$ is isomorphic to the group ring $\mathbb Q[G]$ where $G$ is the Galois group of $F$ over $\mathbb{Q}$. If I would like to decompose this into irreducible representations, then I need to extend *coefficients* to a field $E$ over which the idempotents are defined. So I would be looking at $H^0_B(X(\mathbb{C}),E)$. If I wanted to look at a subgroup of $G$, I would need to change the base. – Johnson-Leung Jun 18 2010 at 12:25
1
Another example: an $\ell$-adic sheaf on $X$ is a motive over $X$ with coefficients in $\mathbf{Q}_{\ell}$ (or rather a realization of one). The most interesting cases are when there's a relationship between the coefficients and the base, e.g. Hodge theory and $p$-adic Hodge theory. – James Borger Jun 18 2010 at 13:50
Resolution of singularities.
That isn't really an answer to the question - I don't think it's necessary in a first course, but I do think resolution should be rotated in on a regular basis, which requires the annual core to be small enough to make room for it. I think it's valuable not just to teach the material somewhere in the curriculum, but to put it in an introductory course, to emphasize that it is elementary and not impossibly difficult. Also, to contrast with the Grothendieck-flavored majority.
-
1
Do you mean a proof of resolution of singularities? (And I presume you mean in characteristic $0$.) At Brown, there was a topics course on this, using Kollár's explanation; but it took a semester. Somehow I find resolution of singularities harder than most of Grothendieck's foundations. – Ravi Vakil Jun 18 2010 at 0:11
2
As far as learning blowups goes, a good (albeit tedious!) exercise that really got me comfortable with them was resolving the ADE surface singularities. – Charles Siegel Jun 18 2010 at 15:37
1. This is really about commutative algebra more than algebraic geometry as such, but something I found incredibly frustrating for a while was what to do when I need to compare $M \otimes_A N$ with $M \otimes_B N$. I finally discovered the following illuminating lemma:
If $M$ and $N$ are $B$ modules, then for every ring homomorphism $A \to B$, there is a natural map $M \otimes_A N \to M \otimes_B N$. Moreover, this map is an isomorphism for all $M, N$ iff it is an isomorphism for $M = N = B$ iff $A \to B$ is an epimorphism of rings.
In particular, the last condition holds if $B$ is obtained from $A$ by some combination of localization and taking a quotient ring, or if $\operatorname{Spec} B \to \operatorname{Spec} A$ is any kind of immersion. The same "abstract nonsense" shows that if $Z \to Z'$ is a monomorphism of schemes (in particular, any kind of immersion), then the product of two $Z$-schemes over $Z$ is naturally isomorphic to their product over $Z'$.
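[Editor's note, not part of the original answer: two standard examples make the epimorphism condition concrete. For a localization or a quotient one has
$$S^{-1}A \otimes_A S^{-1}A \cong S^{-1}A, \qquad A/I \otimes_A A/I \cong A/I,$$
so $A \to S^{-1}A$ and $A \to A/I$ are epimorphisms of rings, and the comparison map is an isomorphism for all modules over such a $B$. By contrast, for $A = k[t^2] \subset B = k[t]$ the module $B \otimes_A B$ is free of rank $2$ over $B$, so the inclusion is not an epimorphism and $M \otimes_A N$ and $M \otimes_B N$ genuinely differ.]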
2. I found the usual description of gluing schemes and morphisms (i.e., requiring things to agree on $U_i \cap U_j$) frustrating to use sometimes, because in general, $U_i \cap U_j$ might not be affine even if $U_i$ and $U_j$ both are. To glue morphisms only, one can require that the morphism be defined on every set of an open cover, such that whenever $x \in U \cap V$, then $x$ has a neighborhood $W \subset U \cap V$ such that $f_V|W = f_U|W$.
For gluing schemes, one can use a commuting poset of open immersions. Given such a diagram, with objects ${U_i}$, there exists a scheme $W$, together with open immersions $U_i \to W$ commuting with the diagram, such that the $U_i$ cover $W$, and $x \in U_i$, $y \in U_j$ map to the same point in $W$ iff they map to the same point in some $U_k$. When this is combined with the statement on gluing morphisms, one sees that $W$ is actually the colimit of the diagram; and, in fact, the statement that "any such diagram has a colimit" more or less encapsulates both glueing schemes and glueing morphisms. Realizing this was also the first time I felt like I understood colimits.
For a more streamlined, if less general, version of the above, one can use a version of the cocycle condition with $U_i \cap U_j$ replaced by a cover of $U_i \cap U_j$ by simultaneously distinguished affines, assuming the cover ${U_i}$ is by open affines.
In either formulation, this combines with the previous point to give a very quick construction of the fibre product: simply take the colimit of the diagram consisting of maps $$\operatorname{Spec} (A \otimes_C B)_{f \otimes g} \to \operatorname{Spec} A \otimes_C B$$ such that the images of $\operatorname{Spec} A$ and $\operatorname{Spec} B$ lie in $\operatorname{Spec} C$. (If these images in fact lie in a distinguished open subset $C_h$, we get the tensor product over that for free by point 1.) Of course, one still has to verify that this colimit behaves as desired; but this is not hard using the more general "gluing morphisms" to show existence and uniqueness.
Note: if it's not clear already, my perspective is that of a student rather than an expert.
-
First off, I wanted to commend you on this whole project, Ravi. Algebraic geometry and the theory of schemes is a notoriously difficult subject to internalize for any advanced student, and it's clear you've given a lot of serious thought to how to make it more digestible. I've browsed the old version of the notes and found them very readable and highly thought out. I firmly support this project and hope it goes through many revisions and drafts, evolving into a future classic. Algebraic geometry is a subject I haven't seriously begun broaching yet, and I hope to use one of the newer versions when ready.
Secondly, I sympathize with your hesitancy to convert them into a book. What you might consider is creating an online text that will constantly be revised and will never be in "final" form. My old biochemistry professor Burton Tropp did this for many years and it worked out for him very well: the first edition WAS published, but all subsequent editions (and there were nearly a dozen before he retired last year) were online and subject to constant revision and improvement. I think this kind of format will work very well for you.
Thirdly -- history is so important in learning a new, conceptually difficult field. Some good historical notes would make the notes a lot more interesting to read, no matter how good the exposition is. Students want to know how they came up with this crazy stuff -- if you know how the original source authors came up with these concepts and why, it'll make it a lot easier for students not only to internalize them, but also to form their own opinions on the subject.
Fourthly -- I think inserting references and research assignments relying on significant papers, such as Grothendieck's original schemes paper, will give your students some much-needed research experience in a very active field. These are advanced students, and the more such experience they get, the better off they'll be.
Lastly -- I wanted to commend your humility and determination in asking other mathematicians and students for opinions and input on this project. It shows how committed you are to this project, and experts should be chomping at the bit to give you their feedback and opinions. I would, but my lack of expertise precludes that. Hopefully those with much more knowledge than I will jump at the chance to assist you with this wonderful project.
Good luck with this exciting project and looking forward to future versions!!!
-
5
For your third point Dieudonne has written "History of Algebraic Geometry", which starts at the very beginning (having separated the development of the subject into epochs), is a pleasure to read, and leads the reader to the modern problems. Unfortunately it appears to be rare, and I'm not sure if one would be able to distribute scanned pages among students. – pmoduli Jun 17 2010 at 18:52
At a relatively late point in the course, I believe that the idea of descent should be explained, with two examples: Zariski-descent, or gluing, and faithfully flat descent. The latter should then be applied, for example to prove that some examples of functor of points are representable.
-
As David and Anweshi said before, I think it could be very interesting to deal with the functor of points, with the main example being subfunctors of Grassmannians. I would make some general statements on functors of points (Yoneda lemma, definition of the functor of points, vector bundles) and then begin to study classical examples as soon as possible, such as Grassmannians, Severi-Brauer varieties and their tautological vector bundle, varieties of flags of subspaces...
Finally it would lead to a glimpse on group schemes and algebraic groups.
-
http://mathoverflow.net/questions/65418?sort=newest
## Birational Contractions on Moduli of pointed Curves
On the moduli space $\overline{M}_{g}$ of genus $g$ stable curves the Hodge class $\lambda$ induces a birational morphism $f$ onto a projective variety contracting the boundary, that is, the exceptional locus of $f$ coincides with the boundary of the moduli space.
Is there a line bundle $L$ on the moduli space of pointed curves (for instance on $\overline{M}_{2,1}$ and on the moduli space of $2$-pointed elliptic curves) with the same property (i.e. a line bundle which induces a birational morphism whose exceptional locus coincides with the boundary)?
-
## 2 Answers
A detailed description of the nef cones of the examples you mention is given in the thesis of William Rulla (The birational geometry of `$\overline{M}_3$` and `$\overline{M}_{2,1}$`, The University of Texas at Austin, 2001.) From this, it follows that there are no such line bundles for the two examples that you mention. I would guess that the same holds in general as well.
-
The Hodge class $\lambda_{2,1}$ on $\overline{M}_{2,1}$
is the pull-back of the Hodge class $\lambda$ on $\overline{M}_{2}$ via the forgetful morphism
$$\pi:\overline{M}_{2,1}\rightarrow\overline{M}_2,$$
that is $\lambda_{2,1} = \pi^{*}\lambda$.
So $\lambda_{2,1}$ is nef but not big on $\overline{M}_{2,1}$ (its top self-intersection vanishes, since it is pulled back from the $3$-dimensional $\overline{M}_2$), hence it cannot induce a birational morphism as in the unpointed case.
-
http://mathhelpforum.com/statistics/85801-permutation-problem-4-a.html
1. ## permutation problem 4
hey guys, need some help with this problem. this is one of four permutation problems that our teacher gave us to solve but haven't really figured it out, plus i was sick so i missed the lecture..
4) 3 balls are to be drawn one at a time from an urn containing 8 distinct ones. Find the number of ways these 3 balls can be drawn if a) a drawn ball has to be replaced before drawing the next and b) there is no replacement of drawn balls.
thanks for the help. we've been given all the basic formula for permutations but I'm at a loss at how to apply them still.
2. Originally Posted by wheresthecake
hey guys, need some help with this problem. this is one of four permutation problems that our teacher gave us to solve but haven't really figured it out, plus i was sick so i missed the lecture..
4) 3 balls are to be drawn one at a time from an urn containing 8 distinct ones. Find the number of ways these 3 balls can be drawn if a) a drawn ball has to be replaced before drawing the next and b) there is no replacement of drawn balls.
thanks for the help. we've been given all the basic formula for permutations but I'm at a loss at how to apply them still.
Logically, we draw the balls one by one.
With replacement:
1st draw: 8 distinct balls in the urn
2nd draw: 8 distinct balls in the urn, as the previously drawn ball is replaced into the urn.
3rd draw: 8 distinct balls in the urn, as the previously drawn ball is replaced into the urn.
${}^8P_1 \cdot {}^8P_1 \cdot {}^8P_1 = 8 \times 8 \times 8 = 512$
No replacement:
1st draw: 8 distinct balls in the urn
2nd draw: 7 distinct balls in the urn, as the previously drawn ball is not replaced and is now outside the urn.
3rd draw: 6 distinct balls in the urn, as the previously drawn balls are not replaced and are now outside the urn.
${}^8P_3 = 8 \cdot 7 \cdot 6 = 336$
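[Editor's note, not part of the original reply: a quick brute-force check of both counts, as a Python sketch.]

```python
from itertools import product, permutations

balls = range(8)  # 8 distinct balls

# (a) with replacement: ordered draws, repetition allowed -> 8^3
with_replacement = sum(1 for _ in product(balls, repeat=3))

# (b) without replacement: ordered draws, no repetition -> 8*7*6
without_replacement = sum(1 for _ in permutations(balls, 3))

print(with_replacement, without_replacement)  # 512 336
```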
http://mathoverflow.net/questions/73326/closed-but-not-rational-points-of-a-real-cubic
## Closed but not rational points of a real cubic
In Mumford's Red Book of Varieties and Schemes, page 102, he gives the example of the closed but not rational points (that is to say, points having residue field the complex field and not the real field) of the cubic $y^2=x^3-x$ over the real field: I have some difficulty recovering by elementary methods the figure he traced.
In particular, he seems to imply that these closed points form the region $y^2>x^3-x$ in the real plane (which looks like the cylinder he pictured in the projective plane). Can somebody give me a simple explanation? (I suppose the maximal ideals of the spectrum of the algebra defined by the cubic have to be parametrized the right way?)
-
Perhaps to make this question clearer, I should make precise that I am looking for a way to parametrize the maximal ideals of the real algebra determined by the cubic in order to get something that looks, in the projective plane, like a cylinder based on the two connected components of the cubic in the real plane. This is what I understood when Mumford said he would leave "the details to the reader", having explained in the previous example L, page 101, such a method that worked well with the real circle (he then got a disk in the plane). Unfortunately I have some trouble working out these details ... – brunoh Aug 22 2011 at 14:07
1
Your comment was very helpful, and your expectation is correct if you meant that you're looking for a cylinder that looks like a 1-meter section of a pipe of diameter $1/(2\pi)$. Remember that the complex picture is a torus gotten by taking the quotient of the plane by a square lattice, say of width and height 1, and that the closed points over $\mathbb{R}$ are what you get by taking the quotient by the equivalence relation of complex conjugacy. In this latter relation, not only the points on the real axis but those $1/2$ unit above are self-conjugate. – Lubin Aug 22 2011 at 17:33
@Lubin Thank you for your comment, it gave me something new to think about ! Especially considering that I am stuck because of the following reasoning : I looked at the maximal ideals in the algebra determined by the cubic, and I found them to be like $(x^2-px-q,ax+by-1)$, with relations between $(a,b)$ and $(p,q)$ to be sure they contain $(y^2-x^3-x)$. If I want their residue field to be like the complex field and not the real, I have to make $p^2+4q<0$, and be sure that $a$ and $b$ stay real. I obtained in the $(p,q)$ plane something like a parabola, not like a cylinder in the projective ! – brunoh Aug 22 2011 at 22:37
## 2 Answers
I don't know what Mumford had in mind, but here (in some detail) is a down-to-earth way to topologically identify this space with a cylinder.
Let $C$ be our projective cubic curve with affine equation $y^2=x^3-x$. We're considering complex conjugate pairs of points on $C$, that is, pairs ${(x,y), (\bar x,\bar y)}$ of solutions of $y^2=x^3-x$. While those points are not real, the line joining them is real: there are real numbers $a,b,c$, not all zero, such that the line $l_{a,b,c}: aX + bY + c = 0$ passes through $(x,y)$ and $(\bar x, \bar y)$, and the coefficient vector $(a,b,c)$ is determined uniquely up to multiplication by a nonzero scalar. That is, $(a:b:c)$ is a well-defined point in the "dual projective plane" ${\bf P}^*$ of lines on the projective plane with coordinates $(x:y:1)$ where $C$ lives. Now these points $(x,y)$ and $(\bar x, \bar y)$ are on $l_{a,b,c} \cap C$, which contains three points in all, so there is a third point $(x_0,y_0) =: p_0$, necessarily real. Conversely, any line $l$ meets $C$ in at least one real point, and if there is only one such point (and $l$ is not tangent to $C$ at that point) then the other two points of $l \cap C$ constitute a closed-but-not-rational point of $C$.
That is,
the space we're looking for is homeomorphic with the subset, call it $S$, of ${\bf P}^*$ consisting of lines whose real intersection with $C$, with multiplicity, has size $1$
as opposed to size $3$.
One way to describe $S$ is to start from $p_0 = (x_0,y_0)$. It is geometrically clear that this point must be on the infinite component of $C$, call it $C_0$: the other component $C_1$ is a closed curve in the affine plane ${\bf R}^2$, so any line meets it with even total multiplicity. Given $p_0$, the lines through $p_0$ constitute a real projective line, which is topologically a circle and the lines through $p_0$ that meet $C_0$ in two other points $q,q'$ constitute the union of two closed arcs, one for lines where $q,q' \in C_0$ and the other for lines where $q,q' \in C_1$. [The boundary points correspond to the four points $q$ whose tangent passes through $p$, which are the solutions of $2q=-p$ in the group law of $C$.] So the lines through $p_0$ in $S$ constitute two open intervals. Now the subtlety is that when $p_0$ goes around the closed curve $C_0$, these two intervals switch as each of the boundary points makes a complete cycle around $C_0$ or $C_1$, so we must traverse $C_0$ twice to traverse our cylinder once. In effect we're getting a Möbius band cut down the middle, which is indeed a cylinder (with a "full twist", true, but that is an artifact of the embedding in three-dimensional space that we use to visualize $S$).
For a different kind of explicit picture of $S$, note that a real cubic polynomial has one real root (with multiplicity) if and only if its discriminant is negative. So we can describe $S$ by eliminating one of the variables from $aX+bY+c=0$, substituting into $Y^2=X^3-X$, computing the discriminant $\Delta$ of the resulting cubic, and plotting the region $\Delta < 0$. For example, in the affine piece $b \neq 0$ of ${\bf P}^*$, we may set $b=1$, compute $Y = -(aX+c)$, find that $$\Delta = -27c^4 - 4(ac)^3 + 30(ac)^2 + 4 a^5 c + 24 ac + a^4 + 4$$ (I didn't promise it would be pretty), and ask www.wolframalpha.com
````plot(-27*c^4-4*a^3*c^3+30*a^2*c^2+4*a^5*c+24*a*c+a^4+4 < 0)
````
to get a picture with two blue components that join up at infinity to form a topological cylinder:
[The two visible cusps come from the inflection points where $p=q=q'$, which are real 3-torsion points on $C$; there's a third such singularity at infinity. This means that of the two boundary components of $S$ (it looks like four but they pair up at infinity) the one containing the cusps is $C_0$, and the other is $C_1$.] Try also
````plot(-27*c^4-4*a^3*c^3-30*a^2*c^2+(24*a-4*a^5)*c+a^4-4 < 0)
````
for the picture arising from the curve $y^2=x^3+x$ with only one real component; this time it is a Möbius band embedded in ${\bf P}^*$ so that the boundary and the complement have only one component each:
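[Editor's note, not part of the original answer: the discriminant formula above is easy to double-check symbolically; here is a short sketch assuming SymPy is available.]

```python
import sympy as sp

a, c, X = sp.symbols('a c X')

# Substitute Y = -(a*X + c) into Y^2 = X^3 - X and take the discriminant in X.
cubic = sp.expand(X**3 - X - (a*X + c)**2)
disc = sp.discriminant(cubic, X)

claimed = -27*c**4 - 4*(a*c)**3 + 30*(a*c)**2 + 4*a**5*c + 24*a*c + a**4 + 4
print(sp.simplify(disc - claimed))  # prints 0
```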
To connect this with the usual (but less elementary) picture of an elliptic curve over ${\bf C}$ as a complex torus: as Lubin noted, the complex locus of $C$ is isomorphic as a Riemann surface with ${\bf C} / L$ where $L$ is the Gaussian lattice ${\bf Z} + {\bf Z} i$; this is consistent with complex conjugation, and the real locus consists of the cosets mod $L$ of the complex numbers of integral or half-integral imaginary part, constituting the components $C_0$ and $C_1$ respectively. We're looking to identify the conjugate pairs ${(z,\bar z)} \bmod L$ with a cylinder; in terms of the group law the real point $p_0$ associated above to ${(z,\bar z)}$ is $-2 \phantom. {\rm Re}(z)$, which as before can only be on $C_0$ and goes around $C_0$ twice (and in the opposite direction, as it happens) as $z$ goes around the cylinder once.
-
[edited to correct the description of the space of closed-but-not-rational points on $y^2=x^3+x$: in that case it's a Möbius band, not a cylinder.] – Noam D. Elkies Aug 24 2011 at 3:56
@Noam Your answer is very clear and pedagogical: thank you very much for your time! I especially loved the part about the analysis of the boundary components recovering $C_0$ and $C_1$, and the extension to the other cubic. Everything is clear for me now, and I also understood why I got stuck trying to read too precisely the complicated equations that arose ... – brunoh Aug 24 2011 at 7:48
@Noam And I also loved the explanation about the Moebius band aspect ... – brunoh Aug 24 2011 at 8:44
@Noam Because of your excellent answer, I also understood my mistake: what I was doing was not wrong, but by using the parameters $(c,d)$ of the $(x^2-cx-d)$ in the maximal ideals $(x^2-cx-d,ax+by+1)$ corresponding to each real point, I was projecting on the wrong slice of the torus in $\mathbb{C}/L$, therefore crushing the $y$ component in the real plane. I could therefore only see a disk in the projective plane (the parabola I found). Silly me, right? – brunoh Aug 24 2011 at 11:46
Starting always from the knowledge that a closed point over $\mathbb{R}$ is either an $\mathbb{R}$-rational point or a pair of conjugate $\mathbb{C}$-rational points, let's think of a complex point $P=(z,w)$ together with its conjugate $\overline P$, if need be, and call $z=a+bi$, $w=c+di$. Then your maximal ideal corresponding to $P$ is $(x^2-2ax+a^2+b^2,y^2-2cy+c^2+d^2)$, with special forms in case either $z$ or $w$ is real, for example $(x-a,y^2-2cy+c^2+d^2)$ in case $z$ is real but not $w$. Of course the condition that $w^2=z^3-z$ is expressed by a pair of equations in the real variables $a,b,c,d$, namely $a + c^2 - d^2 - a^3 + 3ab^2=0$ and $b + 2cd - 3a^2b + b^3=0$, which I haven't gotten much help from, even though I've stared at them long and hard hoping to verify your very interesting insight about the bordered region $S=\lbrace (x,y):y^2 \ge x^3-x\rbrace$. If you want a surface in $\mathbb{R}^3$ to look at, you can take the points $(a,c,b^2+d^2)$, subject to the two conditions above. Its intersection with the plane $(*,*,0)$ is just the locus of real-rational points, but its projection onto that plane is not your region $S$. Notice that conjugate points have the same image under this mapping, and nonconjugate points have different images.
The edition of Mumford's Red Book that I'm looking at does not support your guess about $S$, in my opinion, even though as a topological space with border, $S$ is exactly right. Perhaps there's a transcendental argument justifying your insight, using the $\wp$-function for the appropriate lattice.
-
@Lubin Thank you very much for your detailed explanation: I was reassured by the fact that the equations are indeed not easy to read, and you are right in suggesting that I should just try to depict S topologically! – brunoh Aug 24 2011 at 7:55
http://en.wikipedia.org/wiki/Kernel_(statistics)
# Kernel (statistics)
The term kernel has two separate meanings in statistics.
## In Bayesian statistics
In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted.[citation needed] Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).
For many distributions, the kernel can be written in closed form, but not the normalization constant.
An example is the normal distribution. Its probability density function is
$p(x|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
and the associated kernel is
$p(x|\mu,\sigma^2) \propto e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
Note that the factor in front of the exponential has been omitted, even though it contains the parameter $\sigma^2$, because it is not a function of the domain variable $x$.
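As a small numerical illustration (an editorial sketch, not from the original article), integrating the kernel recovers exactly the constant $\tfrac{1}{\sqrt{2\pi\sigma^2}}$ that was dropped, which is one way an omitted normalization factor can be reinstated when needed:

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 2.0

# Unnormalized kernel of the normal distribution.
kernel = lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

Z, _ = quad(kernel, -np.inf, np.inf)               # integral of the kernel
print(1 / Z, 1 / np.sqrt(2 * np.pi * sigma ** 2))  # both ~= 0.1995
```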
## In non-parametric statistics
In non-parametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time-series, in the use of the periodogram to estimate the spectral density. An additional use is in the estimation of a time-varying intensity for a point process.
Commonly, kernel widths must also be specified when running a non-parametric estimation.
### Definition
A kernel is a non-negative real-valued integrable function K satisfying the following two requirements:
• $\int_{-\infty}^{+\infty}K(u)\,du = 1\,;$
• $K(-u) = K(u) \mbox{ for all values of } u\,.$
The first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used.
If K is a kernel, then so is the function K* defined by K*(u) = λK(λu), where λ > 0. This can be used to select a scale that is appropriate for the data.
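As a quick sanity check (an editorial sketch, not from the original article), one can verify the two requirements numerically for a concrete kernel and confirm that the rescaled kernel K*(u) = λK(λu) still integrates to one:

```python
from scipy.integrate import quad

def epanechnikov(u):
    """Epanechnikov kernel, zero outside [-1, 1]."""
    return 0.75 * (1.0 - u ** 2) * (abs(u) <= 1.0)

lam = 2.5
rescaled = lambda u: lam * epanechnikov(lam * u)  # K*(u) = lam * K(lam * u)

for K in (epanechnikov, rescaled):
    total, _ = quad(K, -5, 5)                  # both supports lie inside [-5, 5]
    print(round(total, 6), K(0.3) == K(-0.3))  # 1.0 True
```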
### Kernel functions in common use
Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov, quartic (biweight), tricube, triweight, Gaussian, and cosine.
In the table below, 1{…} is the indicator function.
| Kernel | $K(u)$ | $\textstyle \int u^2K(u)\,du$ | $\textstyle \int K^2(u)\,du$ |
|---|---|---|---|
| Uniform | $K(u) = \frac12 \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac13$ | $\frac12$ |
| Triangular | $K(u) = (1-\vert u\vert) \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac{1}{6}$ | $\frac{2}{3}$ |
| Epanechnikov | $K(u) = \frac{3}{4}(1-u^2) \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac{1}{5}$ | $\frac{3}{5}$ |
| Quartic (biweight) | $K(u) = \frac{15}{16}(1-u^2)^2 \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac{1}{7}$ | $\frac{5}{7}$ |
| Triweight | $K(u) = \frac{35}{32}(1-u^2)^3 \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac{1}{9}$ | $\frac{350}{429}$ |
| Tricube | $K(u) = \frac{70}{81}(1-\vert u\vert^3)^3 \,\mathbf{1}_{\{\vert u\vert\leq1\}}$ | $\frac{35}{243}$ | $\frac{175}{247}$ |
| Gaussian | $K(u) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}u^2}$ | $1$ | $\frac{1}{2\sqrt\pi}$ |
| Cosine | $K(u) = \frac{\pi}{4}\cos\left(\frac{\pi}{2}u\right) \mathbf{1}_{\{\vert u\vert\leq1\}}$ | $1-\frac{8}{\pi^2}$ | $\frac{\pi^2}{16}$ |
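The following is a minimal sketch (with synthetic data) of how one of these kernels is used in kernel density estimation; the bandwidth h and the sample are arbitrary choices:

```python
# Kernel density estimation sketch using the Gaussian kernel from the table.
import numpy as np

def kde(x, samples, h):
    """Evaluate the kernel density estimate at points x with bandwidth h."""
    u = (x[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel K(u)
    return k.mean(axis=1) / h                         # (1/(n*h)) * sum K((x - x_i)/h)

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=500)    # synthetic data
grid = np.linspace(-4, 4, 9)
print(np.round(kde(grid, samples, h=0.4), 3))
```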
## References
• Li, Qi; Racine, Jeffrey S. (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press. ISBN 0-691-12161-3.
• Zucchini, Walter. "APPLIED SMOOTHING TECHNIQUES Part 1: Kernel Density Estimation". Retrieved 28 March 2012.
• Comaniciu, D; Meer, P (2002). "Mean shift: A robust approach toward feature space analysis". IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (5): 603–619. doi:10.1109/34.1000236.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8020899891853333, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/9586/range-of-binomial-probability-given-a-certain-number-of-observations/9590
|
## Range of binomial probability, given a certain number of observations?
Let's say I am given $n$ flips of a coin, $k$ of which are heads. These are iid flips.
Can I say, with probability $p > 1/2$, that the true probability of heads is in range $[p_1, p_2]$ ? What is that range?
How do I integrate prior knowledge of the binomial distribution? What if I have no prior knowledge?
-
## 3 Answers
There is a whole field around questions like this called Bayesian statistics; it's been a while since I looked at this stuff, but if I remember right:
Sadly, you do need to have some pre-determined view of what p is. That is, before flipping the n coins you have a distribution in mind for the value of p (called the prior distribution). This distribution changes as you flip the coins (you get a posterior distribution).
For example you might start out believing that the coin has a 50% chance of being fair and a 50% chance of coming up heads 2/3rds of the time. (You might believe this if you know the person you got the coin from has both types of coins and there is a 50% chance he's trying to trick you).
The interesting (or at least nice) case is when your prior distribution is a "conjugate prior", which basically means that your posterior distribution for p is of the same parametric family as your prior distribution. I believe the conjugate prior for this is the beta distribution, but you might want to google "conjugate prior" and "bayesian statistics".
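For concreteness, here is a minimal sketch of that beta-distribution update, assuming a uniform Beta(1, 1) prior and made-up data; a central 50% credible interval matches the "probability > 1/2" wording of the question:

```python
# Conjugate (Beta) prior update for a coin, assuming a Beta(1, 1) uniform prior;
# the posterior after k heads in n flips is Beta(1 + k, 1 + n - k).
from scipy.stats import beta

n, k = 100, 62          # hypothetical data: 62 heads in 100 flips
a, b = 1 + k, 1 + (n - k)

posterior = beta(a, b)
lo, hi = posterior.ppf(0.25), posterior.ppf(0.75)   # central 50% credible interval
print(f"p is in [{lo:.3f}, {hi:.3f}] with posterior probability 0.5")
```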
Hope that helps.
-
http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
1. The idea of estimating a distribution parameter is to construct a random variable whose expectation is the parameter to be estimated. In your example, we want to estimate the success probability of a Bernoulli distribution, and we construct a binomially distributed random variable by repeated trials. The key point is that the new random variable has the same average (after normalizing by n) but a smaller standard deviation (by a factor of sqrt(n)), which gives better bounds on the estimated value. The confidence level is just a percentile of the distribution of the random variable used for the estimation (in your example you chose 50%). The interval size is a function of the percentile. The estimated value p* = k/n is inside the interval, but in general the interval is not symmetric around this value. On the Wikipedia page, several approximations for large n are given, based on the central limit theorem, which are usually used in real-life estimations (a short numerical sketch follows below).
2. The above solution assumes prior knowledge that the distribution of a single trial is Bernoulli and that the trials are independent. Usually, one needs some prior knowledge to infer a statistical parameter. Specifically in your question, prior knowledge would be that the coin flips are independent, or that after some flip the success probability p had changed, etc.
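A minimal numerical sketch of the large-n normal approximation mentioned in point 1 (the data and confidence level below are made up):

```python
# Normal-approximation (Wald) confidence interval for a binomial proportion,
# as described on the linked Wikipedia page; a sketch, not exact for small n.
from math import sqrt
from scipy.stats import norm

n, k = 100, 62                      # hypothetical data
conf = 0.5                          # the 50% level used in the question
p_hat = k / n
z = norm.ppf(0.5 + conf / 2)        # two-sided critical value
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
print(f"[{p_hat - half_width:.3f}, {p_hat + half_width:.3f}]")
```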
-
Some very sharp bounds for questions like these are provided by something called a Chernoff bound. The example in the Wikipedia article will give you what you need.
Edit: Oh, I forgot to say that you need an estimator for the "true" probability, but I guess that the one you are using is just the average over the samples.
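As a sketch of how such a bound is used, Hoeffding's inequality (one Chernoff-type bound) gives $P(|k/n - p| \ge \varepsilon) \le 2e^{-2n\varepsilon^2}$, which can be inverted for a chosen failure probability; the numbers below are made up:

```python
# Invert the Hoeffding bound P(|k/n - p| >= eps) <= 2*exp(-2*n*eps^2)
# to get an interval that holds with probability at least 1 - delta.
from math import log, sqrt

n, k = 100, 62          # hypothetical data
delta = 0.5             # allowed failure probability (confidence 1 - delta)
eps = sqrt(log(2 / delta) / (2 * n))
p_hat = k / n
print(f"[{max(0.0, p_hat - eps):.3f}, {min(1.0, p_hat + eps):.3f}]")
```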
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305313229560852, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/33042-need-help-hurry.html
|
# Thread:
1. ## Need Help!! in a hurry
A ruptured oil tanker causes a circular oil slick on the surface of the ocean. When its radius is 110 meters, the radius of the slick is expanding by 0.6 meter/minute and its thickness is 0.03 meter.
(a) At that moment, how fast is the area of the slick expanding?
(b) The circular slick has the same thickness everywhere, and the volume of oil spilled remains fixed. How fast is the thickness of the slick decreasing when the radius is 110 meters?
This one probably seems harder than I'm making it out to be, but I need help; the thickness part freaks me out... thanks
mathlete
2. Originally Posted by mathlete
A ruptured oil tanker causes a circular oil slick on the surface of the ocean. When its radius is 110 meters, the radius of the slick is expanding by 0.6 meter/minute and its thickness is 0.03 meter.
(a) At that moment, how fast is the area of the slick expanding?
$A=\pi r^2$
$\frac{d}{dt}A=2 \pi r \frac{dr}{dt}$
(b) The circular slick has the same thickness everywhere, and the volume of oil spilled remains fixed. How fast is the thickness of the slick decreasing when the radius is 110 meters?
$V=A \times h$
where $h$ is the thickness.
$\frac{d}{dt}V=A \frac{dh}{dt}+h \frac{dA}{dt}=0$
RonL
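Plugging the given numbers into the two relations above gives, as a rough numerical check:

```python
# Numerical check of the rates above: dA/dt = 2*pi*r*dr/dt and, since
# V = A*h is constant, dh/dt = -(h/A)*dA/dt.
from math import pi

r, dr_dt, h = 110.0, 0.6, 0.03      # metres, metres/minute, metres
A = pi * r ** 2
dA_dt = 2 * pi * r * dr_dt          # ~414.7 m^2/min
dh_dt = -(h / A) * dA_dt            # ~-3.27e-4 m/min, i.e. the slick thins
print(dA_dt, dh_dt)
```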
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928643524646759, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/100514/separable-l-1-predual
|
## Separable $L_1$-predual
Some isometric preduals of $\ell_1$ are of the form $C_0(K)$ where $K$ is countable. I am wondering whether this is a general rule.
Question: Is there a measure $\mu$ and a (preferably separable) Banach space $X$ without a subspace isomorphic to $c_0$ which has $X^*=L_1(\mu)$ isometrically?
I apologise for three questions in such a short period of time. Now I'll take my time.
EDIT: Corrected according to Philip's remarks.
-
The first sentence of your post is incorrect; what is true is that when $C_0(K)^\ast$ is isometric to $\ell_1$ it is necessarily the case that $K$ is countable. There are isometric preduals of $\ell_1$ that are not isomorphic to a space $C_0(K)$; the first example is due to Benyamini and Lindenstrauss, A predual of $\ell_1$ which is not isomorphic to a $C(K)$ space, Israel J. Math. 13 (1972), 246-254. Other constructions have since been given, see Gasparis' preprint arxiv.org/pdf/1205.4317.pdf for a brief survey. Gasparis' paper contains a new approach to constructing an – Philip Brooker Jun 24 at 9:49
isometric predual of $\ell_1$. The difference between his space and earlier constructions is that his space does not contain a subspace isomorphic to $C_0([0,\omega^\omega])$. You can see a video of him presenting a talk on his paper at birs.ca/events/2012/5-day-workshops/12w5019/… – Philip Brooker Jun 24 at 9:52
## 1 Answer
Zippin proved that every isometric $L_1$ predual contains $c_0$ isometrically.
Zippin, M. On some subspaces of Banach spaces whose duals are L1 spaces. Proc. Amer. Math. Soc. 23 1969 378–385.
-
Is anything known about containment of $c$ isometrically? – Jan Vardøen Jun 24 at 12:00
@Jan: $c$ does not embed isometrically into $c_0$. – Philip Brooker Jul 4 at 6:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9233211874961853, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/75902/is-the-disk-quasiconformally-isomorphic-to-the-plane
|
## Is the disk quasiconformally isomorphic to the plane?
(This question might turn out to be too elementary for this site, if so I'm sorry, but I can't find the answer anywhere.)
Does there exist a function `$\; f : \{z\in \mathbb{C} : |z| < 1\} \to \mathbb{C} \;$`
such that $f\hspace{.01 in}$ is a quasiconformal bijection and $f^{-1}$ is quasiconformal?
-
The plane and a disk are not quasiconformally equivalent, see e.g. page 11 in math.qc.edu/~zakeri/papers/ahl-bers.pdf. Incidentally, the inverse of a quasiconformal map is automatically quasiconformal. – Igor Belegradek Sep 20 2011 at 2:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8314104676246643, "perplexity_flag": "middle"}
|
http://en.wikibooks.org/wiki/Special_Relativity/Print_version
|
Special Relativity/Print version
Table of contents
The principle of relativity
Introduction
The Principle of Relativity
Frames of reference, events and transformations
Special relativity
The postulates of special relativity
Spacetime
The spacetime interpretation of special relativity
Spacetime
The lightcone
The Lorentz transformation equations
The relativity of simultaneity and the Andromeda paradox
The twin paradox
Addition of velocities
Relativistic dynamics
Introduction
Momentum
Force
Mass and energy
Light propagation and the aether
Introduction
The aether drag hypothesis
The Michelson-Morley experiment
Mathematical approach to special relativity
Introduction
Mathematical techniques
Vectors
Matrices
Linear Transformations
Indicial Notation
Analysis of curved surfaces and transformations
Four vectors
The Lorentz transformation
The linearity and homogeneity of spacetime
The Lorentz transformation
Length contraction, time dilation and phase
Hyperbolic geometry
Addition of velocities
Acceleration transformation
Relativistic dynamics
Introduction
Momentum
Relativistic Mass
Appendices
Mathematics of the Lorentz Transformation Equations
Introduction
Introduction
The Special Theory of Relativity was the result of developments in physics at the end of the nineteenth century and the beginning of the twentieth century. It changed our understanding of older physical theories such as Newtonian Physics and led to early Quantum Theory and General Relativity.
Special Relativity does not just apply to fast-moving objects; it affects the everyday world directly through "relativistic" effects such as magnetism and the relativistic inertia that underlies kinetic energy and hence the whole of dynamics.
Special Relativity is now one of the foundation blocks of physics. It is in no sense a provisional theory and is largely compatible with quantum theory; it not only led to the idea of matter waves but is the origin of quantum 'spin' and underlies the existence of antiparticles. Special Relativity is a theory of exceptional elegance. Einstein crafted the theory from simple postulates about the constancy of physical laws and of the speed of light, and his work has been refined further so that the laws of physics themselves, and even the constancy of the speed of light, are now understood in terms of the most basic symmetries in space and time.
Further Reading
Feynman Lectures on Physics. Symmetry in Physical Laws. (World Student) Vol 1. Ch 52.
Gross, D.J. The role of symmetry in fundamental physics. PNAS December 10, 1996 vol. 93 no. 25 14256-14259 http://www.pnas.org/content/93/25/14256.full
Historical Development
Special Relativity is not a theory about light, it is a theory about space and time, but it was the strange behaviour of light that first alerted scientists to the possibility that the universe had an unexpected geometry. The short history of Special Relativity given here will start with light but will end with the discovery that the behaviour of light is related to the geometry of the universe.
In the nineteenth century it was widely accepted that light travelled as waves in a substance called the “aether” in a similar way to how waves in general travel in material substances. A possible link between this aether and electrical and magnetic fields became apparent during the first half of the nineteenth century when Faraday demonstrated that the polarisation of light was affected by magnetic fields and Weber showed that electrical effects could be transmitted across non-conducting materials.
James Clerk Maxwell
In 1865 the Scottish physicist, James Clerk Maxwell, drew together the various experiments on electricity and magnetism into an electromagnetic theory of light based on the aether. One of his key observations was that electrical effects seemed to propagate at nearly light speed. He wrote of the velocity of electrical interactions that:
“This velocity is so nearly that of light that it seems we have strong reason to conclude that light itself (including radiant heat and other radiations, if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws.”
Maxwell's theory explained radio, heat radiation, light and many other phenomena as electromagnetic waves travelling in an aether. The velocity of these waves depended upon the properties of the aether itself. Someone who was stationary within the aether would measure the speed of light to be constant as a result of the constant properties of the aether. A light ray going from one stationary point to another in the aether would take the same amount of time to make the journey no matter who observed it. However, although stationary observers would all observe the same velocity for light, moving observers would measure the velocity of light as the sum of their velocity relative to the aether and the velocity of light in the aether.
If space were indeed full of an aether then the motion of objects through this aether should be detectable by measuring the velocity of light rays. In practice it is difficult to measure the velocity of light with sufficient precision. Maxwell proposed that an instrument called an "interferometer" would provide the required accuracy. An interferometer splits a ray of light into two identical beams arranged at right angles to each other. It then brings these beams together so that the light waves reinforce each other if they arrive at the same time and destructively interfere with each other if one beam is slightly delayed. If one beam is reflected back and forth at right angles to the direction of travel of the interferometer and the other reflected along the direction of travel then the velocity of the interferometer should affect the velocity of each beam differently and create a delay and hence observable interference. Maxwell proposed that if an interferometer were moved through the aether the addition of the velocity of the equipment to the velocity of the light in the aether would cause a distinctive interference pattern. Maxwell's idea was submitted as a letter to Nature in 1879 (posthumously).
Albert Michelson read Maxwell's paper and in 1887 Michelson and Morley performed an 'interferometer' experiment to test whether the observed velocity of light is indeed the sum of the speed of light in the aether and the velocity of the observer. Michelson and Morley discovered that the measured velocity of light did not change with the velocity of the observer. To everyone's surprise the experiment showed that the speed of light was independent of the speed of the destination or source of the light in the proposed aether.
Albert Abraham Michelson
How might this "null result" of the interferometer experiment be explained? How could the speed of light in a vacuum be constant for all observers no matter how they are moving themselves? It was possible that Maxwell's theory was correct but the theory about the way that velocities add together (known as Galilean Relativity) was wrong. Alternatively it was possible that Maxwell's theory was wrong and Galilean Relativity was correct. However, the most popular interpretation at the time was that both Maxwell and Galileo were correct and something was happening to the measuring equipment. Perhaps the instrument was being squeezed in some way by the aether or some other material effect was occurring.
Various physicists attempted to explain the Michelson and Morley experiment. George Fitzgerald (1889) and Hendrik Lorentz (1895) suggested that objects tend to contract along the direction of motion relative to the aether and Joseph Larmor (1897) and Hendrik Lorentz (1899) proposed that moving objects are contracted and that moving clocks run slow as a result of motion in the aether. Fitzgerald, Larmor and Lorentz's contributions to the analysis of light propagation are of huge importance because they produced the "Lorentz Transformation Equations". The Lorentz Transformation Equations were developed to describe how physical effects would need to change the length of the interferometer arms and the rate of clocks to account for the lack of change in interference fringes in the interferometer experiment. It took the rebellious streak in Einstein to realise that the equations could also be applied to changes in space and time itself.
Albert Einstein
By the late nineteenth century it was becoming clear that aether theories of light propagation were problematic. Any aether would have properties such as being massless, incompressible, entirely transparent, continuous, devoid of viscosity and nearly infinitely rigid. In 1905 Albert Einstein realised that Maxwell's equations did not require an aether. On the basis of Maxwell's equations he showed that the Lorentz Transformation was sufficient to explain that length contraction occurs and clocks appear to go slow provided that the old Galilean concept of how velocities add together was abandoned. Einstein's remarkable achievement was to be the first physicist to propose that Galilean relativity might only be an approximation to reality. He came to this conclusion by being guided by the Lorentz Transformation Equations themselves and noticing that these equations only contain relationships between space and time without any references to the properties of an aether.
In 1905 Einstein was on the edge of the idea that made relativity special. It remained for the mathematician Hermann Minkowski to provide the full explanation of why an aether was entirely superfluous. He announced the modern form of Special Relativity theory in an address delivered at the 80th Assembly of German Natural Scientists and Physicians on September 21, 1908. The consequences of the new theory were radical, as Minkowski put it:
"The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality."
What Minkowski had spotted was that Einstein's theory was actually related to the theories in differential geometry that had been developed by mathematicians during the nineteenth century. Initially Minkowski's discovery was unpopular with many physicists including Poincaré, Lorentz and even Einstein. Physicists had become used to a thoroughly materialist approach to nature in which lumps of matter were thought to bounce off each other and the only events of any importance were those occurring at some universal, instantaneous, present moment. The possibility that the geometry of the world might include time as well as space was an alien idea. The possibility that phenomena such as length contraction could be due to the physical effects of spacetime geometry rather than the increase or decrease of forces between objects was as unexpected for physicists in 1908 as it is for the modern high school student. Einstein rapidly assimilated these new ideas and went on to develop General Relativity as a theory based on differential geometry but many of the earlier generation of physicists were unable to accept the new way of looking at the world.
The adoption of differential geometry as one of the foundations of relativity theory has been traced by Walter (1999). Walter's study shows that by the 1920's modern differential geometry had become the principal theoretical approach to relativity, replacing Einstein's original electrodynamic approach.
Henri Poincaré
It has become popular to credit Henri Poincaré with the discovery of the theory of Special Relativity, but Poincaré got many of the right answers for some of the wrong reasons. He even came up with a version of $E=mc^2$. In 1904 Poincaré had gone as far as to enunciate the "principle of relativity" in which "The laws of physical phenomena must be the same, whether for a fixed observer, as also for one dragged in a motion of uniform translation, so that we do not and cannot have any means to discern whether or not we are dragged in a such motion." Furthermore, in 1905 Poincaré coined the term "Lorentz Transformation" for the equation that explained the null result of the Michelson Morley experiment. Although Poincaré derived equations to explain the null result of the Michelson Morley experiment, his assumptions were still based upon an aether. It remained for Einstein to show that an aether was unnecessary.
It is also popular to claim that Special Relativity and aether theories such as those due to Poincaré and Lorentz are equivalent and only separated by Occam's Razor. This is not strictly true. Occam's Razor is used to separate a complex theory from a simple theory, the two theories being different. In the case of Poincaré's and Lorentz's aether theories, both contain the Lorentz Transformation, which is already sufficient to explain the Michelson-Morley experiment, length contraction, time dilation etc. without an aether. The aether theorists simply failed to notice this possibility because they rejected spacetime as a concept for reasons of philosophy or prejudice. In Poincaré's case, he rejected spacetime because of philosophical objections to the idea of spatial or temporal extension (see note 1).
It is curious that Einstein actually returned to thinking based on an aether for philosophical reasons similar to those that haunted Poincaré (See Granek 2001). The geometrical form of Special Relativity as formalised by Minkowski does not forbid action at a distance and this was considered to be dubious philosophically. This led Einstein, in 1920, to reintroduce some of Poincaré's ideas into the theory of General Relativity. Whether an aether of the type proposed by Einstein is truly required for physical theory is still an active question in physics. However, such an aether leaves the spacetime of Special Relativity almost intact and is a complex merger of the material and geometrical that would be unrecognised by 19th century theorists.
• Einstein, A. (1905). Zur Elektrodynamik bewegter Körper, in Annalen der Physik. 17:891-921. http://www.fourmilab.ch/etexts/einstein/specrel/www/
• Granek, G (2001). Einstein's ether: why did Einstein come back to the ether? Apeiron, Vol 8, 3. http://citeseer.ist.psu.edu/cache/papers/cs/32948/http:zSzzSzredshift.vif.comzSzJournalFileszSzV08NO3PDFzSzV08N3GRF.PDF/granek01einsteins.pdf
• FitzGerald, G. F. (1889), The Ether and the Earth’s Atmosphere, Science 13, 390.
• Larmor, J. (1897), On a Dynamical Theory of the Electric and Luminiferous Medium, Part 3, Relations with material media, Phil. Trans. Roy. Soc. 190: 205–300, doi:10.1098/rsta.1897.0020
• Lorentz, H. A. L. (1895), Versuch einer Theorie der electrischen und optischen Erscheinungen in bewegten Körpern, Brill, Leyden.
• Maxwell, J.C. (1865) A Dynamical Theory of the Electromagnetic Field, Philosophical Transactions, vol 155, p459 (1865)
• Walter, S. (1999), The non-Euclidean style of Minkowskian relativity. Published in J. Gray (ed.), The Symbolic Universe, Oxford University Press, 1999, 91–127. http://www.univ-nancy2.fr/DepPhilo/walter/papers/nes.pdf
Note 1: The modern philosophical objection to the spacetime of Special Relativity is that it acts on bodies without being acted upon, however, in General Relativity spacetime is acted upon by its content.
Intended Audience
This book presents special relativity (SR) from first principles and logically arrives at the conclusions. There will be simple diagrams and some thought experiments. Although the final form of the theory came to use Minkowski spaces and metric tensors, it is possible to discuss SR using nothing more than high school algebra. That is the method used here in the first half of the book. That being said, the subject is open to a wide range of readers. All that is really required is a genuine interest.
For a more mathematically sophisticated treatment of the subject, please refer to the Advanced Text in Wikibooks.
The book is designed to confront the way students fail to understand the relativity of simultaneity. This problem is well documented and described in depth in: Student understanding of time in special relativity: simultaneity and reference frames by Scherr et al.
What's so special?
The special theory was suggested in 1905 in Einstein's article "On the Electrodynamics of Moving Bodies", and is so called because it applies in the absence of non-uniform gravitational fields.
In search of a more complete theory, Einstein developed the general theory of relativity published in 1915. General relativity (GR), a more mathematically demanding subject, describes physics in the presence of gravitational fields.
The conceptual difference between the two is the model of spacetime used. Special relativity makes use of a Euclidean-like (flat) spacetime. GR lives in a spacetime that is generally not flat but curved, and it is this curvature which represents gravity. The domain of applicability for SR is not so limited, however. Spacetime can often be approximated as flat, and there are techniques to deal with accelerating special relativistic objects.
Common Pitfalls in Relativity
Here is a collection of common misunderstandings and misconceptions about SR. If you are unfamiliar with SR then you can safely skip this section and come back to it later. If you are an instructor, perhaps this can help you divert some problems before they start by bringing up these points during your presentation when appropriate.
Beginners often believe that special relativity is only about objects that are moving at high velocities. Strictly speaking, this is a mistake. Special relativity applies at all velocities but at low velocity the predictions of special relativity are almost identical to those of the Newtonian empirical formulae. As an object increases its velocity the predictions of relativity gradually diverge from Newtonian Mechanics.
There is sometimes a problem differentiating between the two different concepts "relativity of simultaneity" and "signal latency/delay." This book text differs from some other presentations because it deals with the geometry of spacetime directly and avoids the treatment of delays due to light propagation. This approach is taken because students would not be taught Euclid's geometry using continuous references to the equipment and methods used to measure lengths and angles. Continuous reference to the measurement process obscures the underlying geometrical theory whether the geometry is three dimensional or four dimensional.
If students do not grasp that, from the outset, modern Special Relativity proposes that the universe is four dimensional, then, like Poincaré, they will consider that the constancy of the speed of light is just an event awaiting a mechanical explanation and waste their time by pondering the sorts of mechanical or electrical effects that could adjust the velocity of light to be compatible with observation.
A Word about Wiki
This is a wikibook. That means it has great potential for improvement and enhancement. The improvement can be in the form of refined language, clear mathematics, simple diagrams, and better practice problems and answers. The enhancement can be in the form of artwork, historical context of SR, anything. Feel free to improve and enhance Special Relativity and other wikibooks as you see necessary.
The principle of relativity
The principle of relativity
Galileo Galilei
Principles of relativity address the relationship between observations made at different places. This problem has been a difficult theoretical challenge since the earliest times and involves physical questions such as how the velocities of objects can be combined and how influences are transmitted between moving objects.
One of the most fruitful approaches to this problem was the investigation of how observations are affected by the velocity of the observer. This problem had been tackled by classical philosophers but it was the work of Galileo that produced a real breakthrough. Galileo (1632), in his "Dialogue Concerning the Two Chief World Systems", considered observations of motion made by people inside a ship who could not see the outside:
"have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still. "
According to Galileo, if the ship moved smoothly then someone inside it would be unable to determine whether they were moving. If people in Galileo's moving ship were eating dinner they would see their peas fall from their fork straight down to their plate in the same way as they might if they were at home on dry land. The peas move along with the people and do not appear to the diners to fall diagonally. This means that the peas continue in a state of uniform motion unless someone intercepts them or otherwise acts on them. It also means that simple experiments that the people on the ship might perform would give the same results on the ship or at home. This concept led to “Galilean Relativity” in which it was held that things continue in a state of motion unless acted upon and that the laws of physics are independent of the velocity of the laboratory.
This simple idea challenged the previous ideas of Aristotle. Aristotle had argued in his Physics that objects must either be moved or be at rest. According to Aristotle, on the basis of complex and interesting arguments about the possibility of a 'void', objects cannot remain in a state of motion without something moving them. As a result Aristotle proposed that objects would stop entirely in empty space. If Aristotle were right the peas that you dropped whilst dining aboard a moving ship would fall in your lap rather than falling straight down on to your plate. Aristotle's idea had been believed by everyone so Galileo's new proposal was extraordinary and, because it was nearly right, became the foundation of physics.
Galilean Relativity contains two important principles: firstly it is impossible to determine who is actually at rest and secondly things continue in uniform motion unless acted upon. The second principle is known as Galileo’s Law of Inertia or Newton's First Law of Motion.
Reference:
• Galileo Galilei (1632). Dialogues Concerning the Two Chief World Systems.
• Aristotle (350BC). Physics. http://classics.mit.edu/Aristotle/physics.html
Special relativity
Until the nineteenth century it appeared that Galilean relativity treated all observers as equivalent no matter how fast they were moving. If you throw a ball straight up in the air at the North Pole it falls straight back down again and this also happens at the equator even though the equator is moving at almost a thousand miles an hour faster than the pole. Galilean velocities are additive so that the ball continues moving at a thousand miles an hour when it is thrown upwards at the equator and continues with this motion until it is acted on by an external agency.
This simple scheme came into question in 1865 when James Clerk Maxwell discovered the equations that describe the propagation of electromagnetic waves such as light. His equations showed that the speed of light depended upon constants that were thought to be simple properties of a physical medium or “aether” that pervaded all space. If this were the case then, according to Galilean relativity, it should be possible to add your own velocity to the velocity of incoming light so that if you were travelling at half the speed of light then any light approaching you would be observed to be travelling at 1.5 times the speed of light in the aether. Similarly, any light approaching you from behind would strike you at 0.5 times the speed of light in the aether. Light itself would always go at the same speed in the aether so if you shone a light from a torch whilst travelling at high speed the light would plop into the aether and slow right down to its normal speed. This would spoil Galileo's Relativity because all you would need to do to discover whether you were in a moving ship or on dry land would be to measure the speed of light in different directions. The light would go slower in your direction of travel through the aether and faster in the opposite direction.
If the Maxwell equations are valid and the simple classical addition of velocities applies then there should be a preferred reference frame, the frame of the stationary aether. The preferred reference frame would be considered the true zero point to which all velocity measurements could be referred.
Special relativity restored a principle of relativity in physics by maintaining that Maxwell's equations are correct but that classical velocity addition is wrong: there is no preferred reference frame. Special relativity brought back the interpretation that in all inertial reference frames the same physics is going on and there is no phenomenon that would allow an observer to pinpoint a zero point of velocity. Einstein preserved the principle of relativity by proposing that the laws of physics are the same regardless of the velocity of the observer. According to Einstein, whether you are in the hold of Galileo's ship or in the cargo bay of a space ship going at a large fraction of the speed of light the laws of physics will be the same.
Einstein's idea shared the same philosophy as Galileo's idea, both men believed that the laws of physics would be unaffected by motion at a constant velocity. In the years between Galileo and Einstein it was believed that it was the way velocities simply add to each other that preserved the laws of physics but Einstein adapted this simple concept to allow for Maxwell's equations.
Frames of reference, events and transformations
Before proceeding further with the analysis of relative motion the concepts of reference frames, events and transformations need to be defined more closely.
Physical observers are considered to be surrounded by a reference frame which is a set of coordinate axes in terms of which position or movement may be specified or with reference to which physical laws may be mathematically stated.
An event is something that happens independently of the reference frame that might be used to describe it. Turning on a light or the collision of two objects would constitute an event.
Suppose there is a small event, such as a light being turned on, that is at coordinates $x,y,z,t$ in one reference frame. What coordinates would another observer, in another reference frame moving relative to the first at velocity $v$ along the $x$ axis, assign to the event? This problem is illustrated below:
What we are seeking is the relationship between the second observer's coordinates for the event, $x', y', z', t'$, and the first observer's coordinates for the event, $x,y,z,t$. The coordinates refer to the positions and timings of the event that are measured by each observer and, for simplicity, the observers are arranged so that they are coincident at $t=0$. According to Galilean Relativity:
$x' = x - vt$
$y' = y$
$z' = z$
$t' = t$
This set of equations is known as a Galilean coordinate transformation or Galilean transformation.
These equations show how the position of an event in one reference frame is related to the position of an event in another reference frame. But what happens if the event is something that is moving? How do velocities transform from one frame to another?
The calculation of velocities depends on Newton's formula: $v = dx/dt$. The use of Newtonian physics to calculate velocities and other physical variables has led to Galilean Relativity being called Newtonian Relativity in the case where conclusions are drawn beyond simple changes in coordinates. The velocity transformations for the velocities in the three directions in space are, according to Galilean relativity:
$\mathbf{u'_x = u_x - v}$
$\mathbf{u'_y = u_y}$
$\mathbf{u'_z = u_z}$
Where $\mathbf{u'_x, u'_y, u'_z}$ are the velocities of a moving object in the three directions in space recorded by the second observer, $\mathbf{u_x, u_y, u_z}$ are the velocities recorded by the first observer and $\mathbf{v}$ is the relative velocity of the observers. The minus sign in front of the $\mathbf{v}$ arises because the second observer moves at velocity $\mathbf{v}$ relative to the first, so velocities measured in the second frame are reduced by $\mathbf{v}$.
This result is known as the classical velocity addition theorem and summarises the transformation of velocities between two Galilean frames of reference. It means that the velocities of projectiles must be determined relative to the velocity of the source and destination of the projectile. For example, if a sailor throws a stone at 10 km/hr from Galileo's ship which is moving towards shore at 5 km/hr then the stone will be moving at 15 km/hr when it hits the shore.
In Newtonian Relativity the geometry of space is assumed to be Euclidean and the measurement of time is assumed to be the same for all observers.
The derivation of the classical velocity addition theorem is as follows. If the Galilean transformations are differentiated with respect to time:
$x' = x - vt$
So:
$dx'/dt = dx/dt - v$
But in Galilean relativity $t' = t$ and so $dx'/dt' = dx'/dt$, therefore:
$dx'/dt' = dx/dt - v$
$dy'/dt' = dy/dt$
$dz'/dt' = dz/dt$
If we write $u'_x = dx'/dt'$ etc. then:
$u'_x = u_x - v$
$u'_y = u_y$
$u'_z = u_z$
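A short sketch of the Galilean transformation and the velocity addition theorem in code (the numerical values are invented; the second call reproduces the sailor-and-stone example above):

```python
# Illustrative sketch of the Galilean transformation and the classical
# velocity addition theorem; all numbers are invented.
def galilean_event(x, y, z, t, v):
    """Coordinates of an event in a frame moving at velocity v along x."""
    return (x - v * t, y, z, t)

def galilean_velocity(ux, uy, uz, v):
    """Velocity of an object seen from a frame moving at velocity v along x."""
    return (ux - v, uy, uz)

print(galilean_event(100.0, 0.0, 0.0, 2.0, v=5.0))   # -> (90.0, 0.0, 0.0, 2.0)

# The sailor's stone: 10 km/hr in the ship's frame; the shore frame moves at
# v = -5 km/hr relative to the ship, so the stone arrives at 15 km/hr.
print(galilean_velocity(10.0, 0.0, 0.0, v=-5.0))     # -> (15.0, 0.0, 0.0)
```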
The postulates of special relativity
In the previous section transformations from one frame of reference to another were described using the simple addition of velocities that were introduced in Galileo's time and these transformations are consistent with Galileo's main postulate which was that the laws of physics would be the same for all inertial observers so that no-one could tell who was at rest. Aether theories had threatened Galileo's postulate because the aether would be at rest and observers could determine that they were at rest simply by measuring the speed of light in the direction of motion. Einstein preserved Galileo's fundamental postulate that the laws of physics are the same in all inertial frames of reference but to do so he had to introduce a new postulate that the speed of light would be the same for all observers. These postulates are listed below:
1. First postulate: the principle of relativity
Formally: the laws of physics are the same in all inertial frames of reference.
Informally: every physical theory should look the same mathematically to every inertial observer. Experiments in a physics laboratory in a spaceship or planet orbiting the sun and galaxy will give the same results no matter how fast the laboratory is moving.
2. Second postulate: the invariance of the speed of light
Formally: the speed of light in free space is a constant in all inertial frames of reference.
Informally: the speed of light in a vacuum, commonly denoted c, is the same for all inertial observers, is the same in all directions, and does not depend on the velocity of the object emitting the light.
Using these postulates Einstein was able to calculate how the observation of events depends upon the relative velocity of observers. He was then able to construct a theory of physics that led to predictions such as the equivalence of mass and energy and early quantum theory.
Einstein's formulation of the axioms of relativity is known as the electrodynamic approach to relativity. It has been superseded in most advanced textbooks by the “space-time approach” in which the laws of physics themselves are due to symmetries in space-time and the constancy of the speed of light is a natural consequence of the existence of space-time. However, Einstein's approach is equally valid and represents a tour de force of deductive reasoning which provided the insights required for the modern treatment of the subject.
Einstein's Relativity - the electrodynamic approach
Einstein asked how the lengths and times that are measured by the observers might need to vary if both observers found that the speed of light was constant. He looked at the formulae for the velocity of light that would be used by the two observers, $(x = ct)$ and $(x' = ct')$, and asked what constants would need to be introduced to keep the measurement of the speed of light at the same value even though the relative motion of the observers meant that the $x'$ axis was continually expanding. His working is shown in detail in the appendix. The result of this calculation is the Lorentz Transformation Equations:
$x' = \gamma (x - vt)\,$
$y' = y \,$
$z' = z \,$
$t' = \gamma (t - \frac{v x}{c^{2}})\,$
Where the constant $\gamma = \frac {1}{\sqrt {1 -\frac{v^2}{c^2}}}$. These equations apply to any two observers in relative motion but note that the sign within the brackets changes according to the direction of the velocity - see the appendix.
The Lorentz Transformation is the equivalent of the Galilean Transformation with the added assumption that everyone measures the same velocity for the speed of light no matter how fast they are travelling. The speed of light is a ratio of distance to time (ie: metres per second) so for everyone to measure the same value for the speed of light the length of measuring rods, the length of space between light sources and receivers and the number of ticks of clocks must dynamically differ between the observers. So long as lengths and time intervals vary with the relative velocity of two observers (v) as described by the Lorentz Transformation the observers can both calculate the speed of light as the ratio of the distance travelled by a light ray divided by the time taken to travel this distance and get the same value.
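A small sketch of the Lorentz transformation quoted above, written as code; the event coordinates and the relative velocity are arbitrary choices, and only motion along the x axis is considered:

```python
# Sketch of the Lorentz transformation for a boost along the x axis.
from math import sqrt

C = 299_792_458.0                               # speed of light in m/s

def lorentz_boost(x, t, v):
    """Coordinates (x', t') of an event in a frame moving at velocity v along x."""
    gamma = 1.0 / sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

x_prime, t_prime = lorentz_boost(x=1.0e8, t=1.0, v=0.6 * C)
print(x_prime, t_prime)   # note that t' depends on x as well as on t
```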
Einstein's approach is "electrodynamic" because it assumes, on the basis of Maxwell's equations, that light travels at a constant velocity. As mentioned above, the idea of a universal constant velocity is strange because velocity is a ratio of distance to time. Do the Lorentz Transformation Equations hide a deeper truth about space and time? Einstein himself (Einstein 1920) gives one of the clearest descriptions of how the Lorentz Transformation equations are actually describing properties of space and time itself. His general reasoning is given below.
If the equations are combined they satisfy the relation:
(1) $x^{'2} - c^2t^{'2} = x^2 - c^2t^2 \,$
Einstein (1920) describes how this can be extended to describe movement in any direction in space:
(2) $x^{'2} + y^{'2} + z^{'2} - c^2t^{'2} = x^2 + y^2 + z^2 - c^2t^2 \,$
Equation (2) is a geometrical postulate about the relationship between lengths and times in the universe. It suggests that there is a constant s such that:
$s^2 = x^{'2} + y^{'2} + z^{'2} - c^2t^{'2} \,$
$s^2 = x^2 + y^2 + z^2 - c^2t^2 \,$
This equation was recognised by Minkowski as an extension of Pythagoras' Theorem (ie: $s^2 = x^2 + y^2$), such extensions being well known in early twentieth century mathematics. What the Lorentz Transformation is telling us is that the universe is a four dimensional spacetime and as a result there is no need for any "aether". (See Einstein 1920, appendices, for Einstein's discussion of how the Lorentz Transformation suggests a four dimensional universe but be cautioned that "imaginary time" has now been replaced by the use of "metric tensors").
Einstein's analysis shows that the x-axis and time axis of two observers in relative motion do not overlie each other. The equation relating one observer's time to the other observer's time shows that this relationship changes with distance along the x-axis, i.e.:
$t' = \gamma (t - \frac{v x}{c^{2}})\,$
This means that the whole idea of "frames of reference" needs to be re-visited to allow for the way that axes no longer overlie each other.
Einstein, A. (1920). Relativity. The Special and General Theory. Methuen & Co Ltd 1920. Written December, 1916. Robert W. Lawson (Authorised translation). http://www.bartleby.com/173/
Inertial reference frames
The Lorentz Transformation for time involves a component $(vx/c^2)$ which results in time measurements being different along the x-axis of relatively moving observers. This means that the old idea of a frame of reference that simply involves three space dimensions with a time that is in common between all of the observers no longer applies. To compare measurements between observers the concept of a "reference frame" must be extended to include the observer's clocks.
An inertial reference frame is a conceptual, three-dimensional latticework of measuring rods set at right angles to each other with clocks at every point that are synchronised with each other (see below for a full definition). An object that is part of, or attached to, an inertial frame of reference is defined as an object which does not disturb the synchronisation of the clocks and remains at a constant spatial position within the reference frame. The inertial frame of reference that has a moving, non-rotating body attached to it is known as the inertial rest frame for that body. An inertial reference frame that is a rest frame for a particular body moves with the body when observed by observers in relative motion.
This type of reference frame became known as an "inertial" frame of reference because, as will be seen later in this book, each system of objects that are co-moving according to Newton's law of inertia (without rotation, gravitational fields or forces acting) have a common rest frame, with clocks that differ in synchronisation and rods that differ in length, from those in other, relatively moving, rest frames.
There are many other definitions of an "inertial reference frame" but most of these, such as "an inertial reference frame is a reference frame in which Newton's First Law is valid" do not provide essential details about how the coordinates are arranged and/or represent deductions from more fundamental definitions.
The following definition by Blandford and Thorne(2004) is a fairly complete summary of what working physicists mean by an inertial frame of reference:
"An inertial reference frame is a (conceptual) three-dimensional latticework of measuring rods and clocks with the following properties: (i ) The latticework moves freely through spacetime (i.e., no forces act on it), and is attached to gyroscopes so it does not rotate with respect to distant, celestial objects. (ii ) The measuring rods form an orthogonal lattice and the length intervals marked on them are uniform when compared to, e.g., the wavelength of light emitted by some standard type of atom or molecule; and therefore the rods form an orthonormal, Cartesian coordinate system with the coordinate x measured along one axis, y along another, and z along the third. (iii ) The clocks are densely packed throughout the latticework so that, ideally, there is a separate clock at every lattice point. (iv ) The clocks tick uniformly when compared, e.g., to the period of the light emitted by some standard type of atom or molecule; i.e., they are ideal clocks. (v) The clocks are synchronized by the Einstein synchronization process: If a pulse of light, emitted by one of the clocks, bounces off a mirror attached to another and then returns, the time of bounce $t_b$ as measured by the clock that does the bouncing is the average of the times of emission and reception as measured by the emitting and receiving clock: $t_b=1/2(t_e + t_r)$.¹
¹For a deeper discussion of the nature of ideal clocks and ideal measuring rods see, e.g., pp. 23-29 and 395-399 of Misner, Thorne, and Wheeler (1973)."
Special Relativity demonstrates that the inertial rest frames of objects that are moving relative to each other do not overlay one another. Each observer sees the other, moving observer's, inertial frame of reference as distorted. This discovery is the essence of Special Relativity and means that the transformation of coordinates and other measurements between moving observers is complicated. It will be discussed in depth below.
Blandford, R.D. and Thorne, K.S.(2004). Applications of Classical Physics. California Institute of Technology. See: http://www.pma.caltech.edu/Courses/ph136/yr2004/
Spacetime
The modern approach to relativity
Although the special theory of relativity was first proposed by Einstein in 1905, the modern approach to the theory depends upon the concept of a four-dimensional universe, that was first proposed by Hermann Minkowski in 1908. This approach uses the concept of invariance to explore the types of coordinate systems that are required to provide a full physical description of the location and extent of things.
The modern theory of special relativity begins with the concept of "length". In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place. We think that the simple length of a thing is "invariant". However, as is shown in the illustrations below, what we are actually suggesting is that length seems to be invariant in a three-dimensional coordinate system.
The length of a thing in a two-dimensional coordinate system is given by Pythagoras's theorem:
$x^2 + y^2 = h^2$
This two-dimensional length is not invariant if the thing is tilted out of the two-dimensional plane. In everyday life, a three-dimensional coordinate system seems to describe the length fully. The length is given by the three-dimensional version of Pythagoras's theorem:
$h^2 = x^2 + y^2 + z^2$
The derivation of this formula is shown in the illustration below.
It seems that, provided all the directions in which a thing can be tilted or arranged are represented within a coordinate system, then the coordinate system can fully represent the length of a thing. However, it is clear that things may also be changed over a period of time. Time is another direction in which things can be arranged. This is shown in the following diagram:
The path taken by a thing in both space and time is known as the space-time interval.
In 1908 Hermann Minkowski pointed out that if things could be rearranged in time, then the universe might be four-dimensional. He boldly suggested that Einstein's recently-discovered theory of Special Relativity was a consequence of this four-dimensional universe. He proposed that the space-time interval might be related to space and time by Pythagoras' theorem in four dimensions:
$s^2 = x^2 + y^2 + z^2 + (ict)^2$
Where i is the imaginary unit (sometimes imprecisely called $\sqrt{-1}$), c is a constant, and t is the time interval spanned by the space-time interval, s. The symbols x, y and z represent displacements in space along the corresponding axes. In this equation, the 'second' becomes just another unit of length. In the same way as centimetres and inches are both units of length related by centimetres = 'conversion constant' times inches, metres and seconds are related by metres = 'conversion constant' times seconds. The conversion constant, c has a value of about 300,000,000 meters per second. Now $i^2$ is equal to minus one, so the space-time interval is given by:
$s^2 = x^2 + y^2 + z^2 - (ct)^2$
Minkowski's use of the imaginary unit has been superseded by the use of advanced geometry that uses a tool known as the "metric tensor". The metric tensor permits the existence of "real" time and the negative sign in the expression for the square of the space-time interval originates in the way that distance changes with time when the curvature of spacetime is analysed (see advanced text). We now use real time but Minkowski's original equation for the square of the interval survives so that the space-time interval is still given by:
$s^2 = x^2 + y^2 + z^2 - (ct)^2$
Space-time intervals are difficult to imagine; they extend between one place and time and another place and time, so the velocity of the thing that travels along the interval is already determined for a given observer.
If the universe is four-dimensional, then the space-time interval will be invariant, rather than spatial length. Whoever measures a particular space-time interval will get the same value, no matter how fast they are travelling. In physical terminology the invariance of the spacetime interval is a type of Lorentz Invariance. The invariance of the spacetime interval has some dramatic consequences.
The first consequence is the prediction that if a thing is travelling at a velocity of c metres per second, then all observers, no matter how fast they are travelling, will measure the same velocity for the thing. The velocity c will be a universal constant. This is explained below.
When an object is travelling at c the space-time interval is zero, as is shown below:
The distance travelled by an object moving at velocity v in the x direction for t seconds is:
$x = vt$
If there is no motion in the y or z directions the space-time interval is $s^2 = x^2 + 0 + 0 - (ct)^2$
So: $s^2 = (vt)^2 - (ct)^2$
But when the velocity v equals c:
$s^2 = (ct)^2 - (ct)^2 = 0$
and hence the space-time interval is zero.
A space-time interval of zero only occurs when the velocity is c (if x>0). All observers observe the same space-time interval so when observers observe something with a space-time interval of zero, they all observe it to have a velocity of c, no matter how fast they are moving themselves.
The universal constant, c, is known for historical reasons as the "speed of light in a vacuum". In the first decade or two after the formulation of Minkowski's approach many physicists, although supporting Special Relativity, expected that light might not travel at exactly c, but might travel at very nearly c. There are now few physicists who believe that light in a vacuum does not propagate at c.
The second consequence of the invariance of the space-time interval is that clocks will appear to go slower on objects that are moving relative to you. Suppose there are two people, Bill and John, on separate planets that are moving away from each other. John draws a graph of Bill's motion through space and time. This is shown in the illustration below:
Being on planets, both Bill and John think they are stationary, and just moving through time. John spots that Bill is moving through what John calls space, as well as time, when Bill thinks he is moving through time alone. Bill would also draw the same conclusion about John's motion. To John, it is as if Bill's time axis is leaning over in the direction of travel and to Bill, it is as if John's time axis leans over.
John calculates the length of Bill's space-time interval as:
$s^2 = (vt)^2 - (ct)^2$
whereas Bill doesn't think he has travelled in space, so writes:
$s^2 = (0)^2 - (cT)^2$
The space-time interval, $s^2$, is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. Bill's $s^2$ equals John's $s^2$ so:
$(0)^2 - (cT)^2 = (vt)^2 - (ct)^2$
and
$-(cT)^2 = (vt)^2 - (ct)^2$
so that $(cT)^2 = (ct)^2 - (vt)^2 = (ct)^2(1 - v^2/c^2)$, and hence
$t = T / \sqrt{1 - v^2/c^2}$.
So, if John sees Bill measure a time interval of 1 second ($T = 1$) between two ticks of a clock that is at rest in Bill's frame, John will find that his own clock measures between these same ticks an interval $t$, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those of observers at rest. This is known as "relativistic time dilation of a moving clock". The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock.
John will also observe measuring rods at rest on Bill's planet to be shorter than his own measuring rods, in the direction of motion. This is a prediction known as "relativistic length contraction of a moving rod". If the length of a rod at rest on Bill's planet is $X$, then we call this quantity the proper length of the rod. The length $x$ of that same rod as measured on John's planet, is called coordinate length, and given by
$x = X \sqrt{1 - v^2/c^2}$.
This equation can be derived directly and validly from the time dilation result with the assumption that the speed of light is constant.
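As a rough numerical check, the short Python sketch below (with illustrative values; the function name is arbitrary) evaluates the time dilation and length contraction formulas for a relative speed of 0.8c, the speed used in the twin paradox example later in this chapter.

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2) for a speed v in metres per second."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.8 * C          # relative speed of the two observers
proper_time = 1.0    # seconds ticked on the moving clock (T)
proper_length = 1.0  # metres of the moving rod (X)

coordinate_time = gamma(v) * proper_time      # t = T / sqrt(1 - v^2/c^2)
coordinate_length = proper_length / gamma(v)  # x = X * sqrt(1 - v^2/c^2)

print(coordinate_time)    # about 1.67 s: the moving clock is seen to run slow
print(coordinate_length)  # 0.6 m: the moving rod is seen to be contracted
```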
The last consequence is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronised so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. In other words observers who are moving relative to each other see different events as simultaneous. This effect is known as Relativistic Phase or the Relativity of Simultaneity. Relativistic phase is often overlooked by students of Special Relativity, but if it is understood then phenomena such as the twin paradox are easier to understand.
The way that clocks go out of phase along the line of travel can be calculated from the concepts of the invariance of the space-time interval and length contraction.
In the diagram above John is conventionally stationary. Distances between two points according to Bill are simple lengths in space (x) all at t=0 whereas John sees Bill's measurement of distance as a combination of a distance (X) and a time interval (T):
$x^2 = X^2 - (cT)^2$
But Bill's distance, x, is the length that he would obtain for things that John believes to be X metres in length. For Bill it is John who has rods that contract in the direction of motion so Bill's determination "x" of John's distance "X" is given from:
$x = X \sqrt{1 - v^2/c^2}$.
Thus $x^2 = X^2 - (v^2/c^2)X^2$
So: $(cT)^2 = (v^2/c^2)X^2$
And $cT = (v/c)X$
So: $T = (v/c^2)X$
Clocks that are synchronised for one observer go out of phase along the line of travel for another observer moving at $v$ metres per second by $(v/c^2)$ seconds for every metre of separation along the line of travel. This is one of the most important results of Special Relativity and should be thoroughly understood by students.
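To get a feel for the size of the phase term, the Python sketch below (with illustrative values) evaluates $vX/c^2$ for a walking-pace speed at an everyday distance and at roughly the distance of the Andromeda galaxy; the second figure anticipates the Andromeda paradox discussed later in this chapter.

```python
C = 299_792_458.0  # speed of light in metres per second

def phase_seconds(v, X):
    """Clock desynchronisation v*X/c^2 (seconds) at a distance X along the line of travel."""
    return v * X / C ** 2

# A walker at about 1 m/s and a clock 1 metre away: utterly negligible.
print(phase_seconds(1.0, 1.0))      # about 1.1e-17 seconds

# The same walker and a clock at roughly the distance of Andromeda (about 2.4e22 m):
print(phase_seconds(1.0, 2.4e22))   # about 2.7e5 seconds, roughly three days
```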
The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards, relative to the time axis in the direction of travel, akin to a rotation out of three-dimensional space.
Great care is needed when interpreting space-time diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero length space-time interval appears.
It is sometimes mistakenly held that the time dilation and length contraction results only apply for observers at x=0 and t=0. This is untrue. An inertial frame of reference is defined so that length and time comparisons can be made anywhere within a given reference frame.
Time dilation applies to time measurements taken between corresponding planes of simultaneity
Time differences in one inertial reference frame can be compared with time differences anywhere in another inertial reference frame provided it is remembered that these differences apply to corresponding pairs of lines or pairs of planes of simultaneous events.
Spacetime
Spacetime diagram showing an event, a world line, and a line of simultaneity.
In order to gain an understanding of both Galilean and Special Relativity it is important to begin thinking of space and time as being different dimensions of a four-dimensional vector space called spacetime. Actually, since we can't visualize four dimensions very well, it is easiest to start with only one space dimension and the time dimension. The figure shows a graph with time plotted on the vertical axis and the one space dimension plotted on the horizontal axis. An event is something that occurs at a particular time and a particular point in space. ("Julius X. wrecks his car in Lemitar, NM on 21 June at 6:17 PM.") A world line is a plot of the position of some object as a function of time (more properly, the time of the object as a function of position) on a spacetime diagram. Thus, a world line is really a line in spacetime, while an event is a point in spacetime. A horizontal line parallel to the position axis (x-axis) is a line of simultaneity; in Galilean Relativity all events on this line occur simultaneously for all observers. It will be seen that the line of simultaneity differs between Galilean and Special Relativity; in Special Relativity the line of simultaneity depends on the state of motion of the observer.
In a spacetime diagram the slope of a world line has a special meaning. Notice that a vertical world line means that the object it represents does not move -- the velocity is zero. If the object moves to the right, then the world line tilts to the right, and the faster it moves, the more the world line tilts. Quantitatively, we say that
$\text{velocity} = \frac{1}{\text{slope of world line}}$   (5.1)
Notice that this works for negative slopes and velocities as well as positive ones. If the object changes its velocity with time, then the world line is curved, and the instantaneous velocity at any time is the inverse of the slope of the tangent to the world line at that time.
The hardest thing to realize about spacetime diagrams is that they represent the past, present, and future all in one diagram. Thus, spacetime diagrams don't change with time -- the evolution of physical systems is represented by looking at successive horizontal slices in the diagram at successive times. Spacetime diagrams represent the evolution of events, but they don't evolve themselves.
The lightcone
Things that move at the speed of light in our four dimensional universe have surprising properties. If something travels at the speed of light along the x-axis and covers x meters from the origin in t seconds the space-time interval of its path is zero.
$s^2 = x^2 - (ct)^2$
but $x = ct$ so:
$s^2 = (ct)^2 - (ct)^2 = 0$
Extending this result to the general case, if something travels at the speed of light in any direction into or out from the origin it has a space-time interval of 0:
$0 = x^2 + y^2 + z^2 - (ct)^2$
This equation is known as the Minkowski Light Cone Equation. If light were travelling towards the origin then the Light Cone Equation would describe the position and time of emission of all those photons that could be at the origin at a particular instant. If light were travelling away from the origin the equation would describe the position of the photons emitted at a particular instant at any future time 't'.
At the superficial level the light cone is easy to interpret. Its backward surface represents the path of light rays that strike a point observer at an instant and its forward surface represents the possible paths of rays emitted from the point observer. Things that travel along the surface of the light cone are said to be light-like and the path taken by such things is known as a null geodesic.
Events that lie outside the cones are said to be space-like or, better still, space separated because their space-time interval from the observer has the same sign as space (positive according to the convention used here). Events that lie within the cones are said to be time-like or time separated because their space-time interval has the same sign as time.
However, there is more to the light cone than the propagation of light. If the added assumption is made that the speed of light is the maximum possible velocity then events that are space separated cannot affect the observer directly. Events within the backward cone can have affected the observer so the backward cone is known as the "affective past" and the observer can affect events in the forward cone hence the forward cone is known as the "affective future".
The assumption that the speed of light is the maximum velocity for all communications is neither inherent in nor required by four dimensional geometry although the speed of light is indeed the maximum velocity for objects if the principle of causality is to be preserved by physical theories (ie: that causes precede effects).
The Lorentz transformation equations
The discussion so far has involved the comparison of interval measurements (time intervals and space intervals) between two observers. The observers might also want to compare more general sorts of measurement such as the time and position of a single event that is recorded by both of them. The equations that describe how each observer describes the other's recordings in this circumstance are known as the Lorentz Transformation Equations. (Note that the symbols below signify coordinates.)
The table below shows the Lorentz Transformation Equations; on each line the left-hand equation transforms coordinates from the unprimed frame to the primed frame, and the right-hand equation is the inverse transformation.
$x^' = \frac{x - vt}{\sqrt{(1 - v^2/c^2)}}$ $x = \frac{x^' + vt^'}{\sqrt{(1 - v^2/c^2)}}$
$y^' = y$ $y = y^'$
$z^' = z$ $z = z^'$
$t^' = \frac{t - (v/c^2)x}{\sqrt{(1 - v^2/c^2)}}$ $t = \frac{t^' + (v/c^2)x^'}{\sqrt{(1 - v^2/c^2)}}$
See mathematical derivation of Lorentz transformation.
Notice how the phase term $(v/c^2)x$ is important and how these formulae for the absolute time and position of a joint event differ from the formulae for intervals.
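A minimal Python sketch of the transformation (the function name and the event coordinates are illustrative) is given below; it also confirms numerically that the space-time interval from the origin is unchanged by the transformation.

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz(x, t, v):
    """Transform the event (x, t) into the frame moving at speed v along the x-axis."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    x_prime = g * (x - v * t)
    t_prime = g * (t - v * x / C ** 2)
    return x_prime, t_prime

x, t = 4.0e8, 2.0   # an arbitrary event: 4e8 metres, 2 seconds
v = 0.6 * C         # relative speed of the primed frame

xp, tp = lorentz(x, t, v)
s2 = x ** 2 - (C * t) ** 2          # interval in the unprimed frame
s2_prime = xp ** 2 - (C * tp) ** 2  # interval in the primed frame

print(abs(s2 - s2_prime) / abs(s2)) # essentially zero: the interval is invariant
```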
A spacetime representation of the Lorentz Transformation
The Lorentz Transformation
Bill and John are moving at a relative velocity, v, and synchronise clocks when they pass each other. Both Bill and John observe an event along Bill's direction of motion. What times will Bill and John assign to the event? It was shown above that the relativistic phase was given by: $vx/c^2$. This means that Bill will observe an extra amount of time elapsing on John's time axis due to the position of the event. Taking phase into account and using the time dilation equation Bill is going to observe that the amount of time his own clocks measure can be compared with John's clocks using:
$T = \frac {t - vx/c^2}{\sqrt{1 - v^2/c^2}}$.
This relationship between the times of a common event between reference frames is known as the Lorentz Transformation Equation for time.
Simultaneity, time dilation and length contraction
More about the relativity of simultaneity
Most physical theories assume that it is possible to synchronise clocks. If you set up an array of synchronised clocks over a volume of space and take a snapshot of all of them simultaneously, you will find that the one closest to you will appear to show a later time than the others, due to the time light needs to travel from each of the distant clocks towards you. However, if the correct clock positions are known, by taking the transmission time of light into account, one can easily compensate for the differences and synchronise the clocks properly. The possibility of truly synchronising clocks exists because the speed of light is constant and this constant velocity can be used in the synchronisation process (the use of the predictable delays when light is used for synchronising clocks is known as "Einstein synchronisation").
The Lorentz transformation for time compares the readings of synchronised clocks at any instant. It compares the actual readings on clocks allowing for any time delay due to transmitting information between observers and answers the question "what does the other observer's clock actually read now, at this moment". The answer to this question is shocking. The Lorentz transformation for time shows that the clocks in any frame of reference moving relative to you cease to be synchronised!
The desynchronisation between relatively moving observers is illustrated below with a simpler diagram:
The effect of the relativity of simultaneity is for each observer to consider that a different set of events is simultaneous. Phase means that observers who are moving relative to each other have different sets of things that are simultaneous, or in their "present moment". It is this discovery that time is no longer absolute that profoundly unsettles many students of relativity.
The amount by which the clocks differ between two observers depends upon the distance of the clock from the observer ($t = xv/c^2$). Notice that if both observers are part of inertial frames of reference with clocks that are synchronised at every point in space then the phase difference can be obtained by simply reading the difference between the clocks at the distant point and clocks at the origin. This difference will have the same value for both observers.
The Andromeda Paradox
What do we mean when we say that events are occurring "now"? If we are looking out over a cityscape, watching the traffic, lots of things appear to be happening all at once. We can take a snapshot with a camera and the scene on the photo consists of all those things that happened at very nearly the same time. They are only "very nearly" at the same time because the events on the photo that were furthest away actually occurred slightly earlier than those that were nearby because of the time taken for light to reach the camera. If we want to discover the events that really happened at the same time we would need to subtract the time taken for the light to get to us. Such a correction would certainly be necessary if you were observing events on the Moon: if you were on the Earth and saw the time on a lunar clock you would know that the real time on the Moon was more than a second later. But would this be enough? What about the relativistic phase differences between clocks due to motion?
Special Relativity introduces yet another factor, in addition to the travel time of light, that upsets our knowledge of which events are simultaneous. The relativistic phase differences between clocks are tiny at the distance of the moon but have the startling consequence that at distances as large as our separation from nearby galaxies an observer who is driving on the earth can have a radically different set of events that are simultaneous with her "present moment" from another person who is standing on the earth. The classic example of this effect of relativistic phase is the "Andromeda Paradox", also known as the "Rietdijk-Putnam-Penrose" argument. Penrose described the argument:
"Two people pass each other on the street; and according to one of the two people, an Andromedean space fleet has already set off on its journey, while to the other, the decision as to whether or not the journey will actually take place has not yet been made. How can there still be some uncertainty as to the outcome of that decision? If to either person the decision has already been made, then surely there cannot be any uncertainty. The launching of the space fleet is an inevitability." (Penrose 1989).
The argument is illustrated below:
Notice that neither observer can actually "see" what is happening on Andromeda now. The argument is not about what can be "seen", it is purely about what different observers consider to be contained in their instantaneous present moment. The two observers observe the same, two million year old events in their telescopes but the moving observer must assume that events at the present moment on Andromeda are a day or two in advance of those in the present moment of the stationary observer. (Incidentally, the two observers see the same events in their telescopes because length contraction of the distance from Earth to Andromeda compensates exactly for the time difference on Andromeda.)
This "paradox" has generated considerable philosophical debate on the nature of time and free-will. The advanced text of this book provides a discussion of some of the issues surrounding this geometrical interpretation of special relativity.
A result of the relativity of simultaneity is that if the car driver launches a space rocket towards the Andromeda galaxy it might have a several days head start compared with a space rocket launched from the ground. This is because the "present moment" for the moving car driver is progressively advanced with distance compared with the present moment on the ground. The present moment for the car driver is shown in the illustration below:
The net effect of the Andromeda paradox is that when someone is moving towards a distant point there are later events at that point than for someone who is not moving towards the distant point. There is a time gap between the events in the present moment of the two people.
The nature of length contraction
According to special relativity items such as measuring rods consist of events distributed in space and time and a three dimensional rod is the events that compose the rod at a single instant. However, from the relativity of simultaneity it is evident that two observers in relative motion will have different sets of events that are present at a given instant. This means that two observers moving relative to each other will usually be observing measuring rods that are composed of different sets of events. If the word "rod" means the three dimensional form of the object called a rod then these two observers in relative motion observe different rods.
The way that measuring rods differ between observers can be seen by using a Minkowski diagram. The area of a Minkowski diagram that corresponds to all of the events that compose an object over a period of time is known as the worldtube of the object. It can be seen in the image below that length contraction is the result of individual observers having different sections of an object's worldtube in their present instant.
(It should be recalled that the longest lengths on space-time diagrams are often the shortest in reality).
It is sometimes said that length contraction occurs because objects rotate into the time axis. This is actually a half truth, there is no actual rotation of a three dimensional rod, instead the observed three dimensional slice of a four dimensional rod is changed which makes it appear as if the rod has rotated into the time axis. In special relativity it is not the rod that rotates into time, it is the observer's slice of the worldtube of the rod that rotates.
There can be no doubt that the three dimensional slice of the worldtube of a rod does indeed have different lengths for relatively moving observers so that the relativistic contraction of the rod is a real, physical phenomenon.
The issue of whether or not the events that compose the worldtube of the rod are always existent is a matter for philosophical speculation.
Further reading: Vesselin Petkov. (2005) Is There an Alternative to the Block Universe View?
More about time dilation
The term "time dilation" is applied to the way that observers who are moving relative to you record fewer clock ticks between events than you. In special relativity this is not due to properties of the clocks, such as their mechanisms getting heavier. Indeed, it should not even be said that the clocks tick faster or slower because what is truly occurring is that the clocks record shorter or longer elapsed times and this recording of elapsed time is independent of the mechanism of the clocks. The differences between clock readings are due to the clocks traversing shorter or longer distances between events along an observer's path through spacetime. This can be seen most clearly by re-examining the Andromeda Paradox.
Suppose Bill passes Jim at high velocity on the way to Mars. Jim has previously synchronised the clocks on Mars with his Earth clocks but for Bill the Martian clocks read times well in advance of Jim's. This means that Bill has a head start because his present instant contains what Jim considers to be the Martian future. Jim observes that Bill travels through both space and time and expresses this observation by saying that Bill's clocks recorded fewer ticks than his own. Bill achieves this strange time travel by having what Jim considers to be the future of distant objects in his present moment. Bill is literally travelling into future parts of Jim's frame of reference.
In special relativity time dilation and length contraction are not material effects, they are physical effects due to travel within a four dimensional spacetime. The mechanisms of the clocks and the structures of measuring rods are irrelevant.
It is important for advanced students to be aware that special relativity and General Relativity differ about the nature of spacetime. General Relativity, in the form championed by Einstein, avoids the idea of extended space and time and is what is known as a "relationalist" theory of physics. Special relativity, on the other hand, is a theory where extended spacetime is pre-eminent. The brilliant flowering of physical theory in the early twentieth century has tended to obscure this difference because, within a decade, special relativity had been subsumed within General Relativity. The interpretation of special relativity that is presented here should be learnt before advancing to more advanced interpretations.
The twin paradox
In the twin paradox there are twins, Bill and Jim. Jim is on Earth. Bill flies past Jim in a spaceship, goes to a distant point such as Mars, turns round and flies back again. It is found that Bill records fewer clock ticks over the whole journey than Jim records on earth. Why?
The twin paradox seems to cause students more problems than almost any other area of special relativity. Students sometimes reason that "all motion is relative" and time dilation applies so wonder why, if Jim records 25 seconds for the journey and sees Bill's clocks read 15 seconds, Bill doesn't reciprocally see Jim's clocks read only 9 seconds? This mistake arises for two reasons. Firstly, relativity does not hold that "all motion is relative;" this is not a postulate of the theory. Secondly, Bill moves through space, so the effects of the relativity of simultaneity must be considered as well as time dilation. The analysis given below follows Bohm's approach (see "further reading" below). It demonstrates that the twin "paradox", or more correctly, the way that the twin's clocks read different elapsed times, is due in large part to the relativity of simultaneity.
The effects of the relativity of simultaneity such as are seen in the "Andromeda paradox" are, in part, the origin of the "twin paradox". If you have not understood the Andromeda Paradox you will not understand the twin paradox because it will not be obvious that the twin who turns round has a head start. The relativity of simultaneity means that if Bill flies past Jim in the direction of Mars then Bill finds that any of Jim's clocks on Mars will already be reading a time that is in Jim's future. Bill gets a head start on the journey because for him Mars is already in Jim's future. Examine the diagram below, the x' axis connects all the events that Bill considers to be in his present moment, notice that these events get ever further into Jim's future with distance. Bill is flying to a Mars that is already in Jim's future. If you understand this then you will understand the twin paradox.
The analysis of the twin paradox begins with Jim and Bill synchronising clocks in their frames of reference. Jim synchronises his clocks on Earth with those on Mars. As Bill flies past Jim he synchronises his clock with Jim's clock on Earth. When he does this he realises that the relativity of simultaneity applies and so, for Bill, Jim's clocks on Mars are not synchronised with either his own or Jim's clocks on Earth. There is a time difference, or "gap", between Bill's clocks and those on Mars even when he passes Jim. This difference is equal to the relativistic phase at the distant point. This set of events is almost identical to the set of events that were discussed above in the Andromeda Paradox. This is the most crucial part of understanding the twin paradox: to Bill the clocks that Jim has placed on Mars are already in Jim's future even as Bill passes Jim on Earth.
Bill flies to Mars and discovers that the clocks there are reading a later time than his own clock. He turns round to fly back to Earth and realises that the relativity of simultaneity means that, for Bill, the clocks on Earth will have jumped forward and are ahead of those on Mars, yet another "time gap" appears. When Bill gets back to Earth the time gaps and time dilations mean that people on Earth have recorded more clock ticks that he did.
In essence the twin paradox is equivalent to two Andromeda paradoxes, one for the outbound journey and one for the inbound journey with the added spice of actually visiting the distant points.
For ease of calculation suppose that Bill is moving at a truly astonishing velocity of 0.8c in the direction of a distant point that is 10 light seconds away (about 3 million kilometres). The illustration below shows Jim and Bill's observations:
From Bill's viewpoint there is both a time dilation and a phase effect. It is the added factor of "phase" that explains why, although the time dilation occurs for both observers, Bill observes the same readings on Jim's clocks over the whole journey as does Jim.
To summarise the mathematics of the twin paradox using the example:
Jim observes the distance as 10 light seconds and the distant point is in his frame of reference. According to Jim it takes Bill the following time to make the journey:
Time taken = distance / velocity therefore according to Jim:
$t = 10/0.8 = 12.5$ seconds
Again according to Jim, time dilation should affect the observed time on Bill's clocks:
$T = t \times \sqrt {1 - v^2/c^2}$ so:
$T = 12.5 \times \sqrt {1 - 0.8^2} = 7.5$ seconds
So for Jim the round trip takes 25 secs and Bill's clock reads 15 secs.
Bill measures the distance as:
$X = x \times \sqrt {1 - v^2/c^2} = 10 \times \sqrt {1 - 0.8^2} = 6$ light seconds.
For Bill it takes $X/v = 6/0.8 = 7.5$ seconds.
Bill observes Jim's clocks to appear to run slow as a result of time dilation:
$t^' = T \times \sqrt {1 - v^2/c^2}$ so:
$t^' = 7.5 \times \sqrt {1 - 0.8^2} = 4.5$ seconds
But there is also a time gap of $vx/c^2 = 8$ seconds.
So for Bill, Jim's clocks register 12.5 secs have passed from the start to the distant point. This is composed of 4.5 secs elapsing on Jim's clocks at the turn round point plus an 8 sec time gap from the start of the journey. Bill sees 25 secs total time recorded on Jim's clocks over the whole journey, this is the same time as Jim observes on his own clocks.
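The bookkeeping in this example can be reproduced in a few lines. The Python sketch below works in natural units (c = 1, distances in light seconds) and follows the same accounting of time dilation plus the phase "time gap"; the variable names are illustrative.

```python
import math

# Natural units: c = 1, distances in light seconds, times in seconds.
v = 0.8        # Bill's speed as a fraction of c
x = 10.0       # distance to the turn-round point in Jim's frame (light seconds)
g = 1.0 / math.sqrt(1.0 - v ** 2)   # the Lorentz factor, here 1/0.6 (about 1.67)

# Jim's account of one leg of the journey:
t_jim = x / v               # 12.5 s on Jim's clocks
t_bill = t_jim / g          # 7.5 s on Bill's clock (time dilation)

# Bill's account of the same leg:
x_bill = x / g                   # 6 light seconds (length contraction)
t_bill_own = x_bill / v          # 7.5 s on his own clock
t_jim_dilated = t_bill_own / g   # 4.5 s elapsing on Jim's clocks (time dilation)
time_gap = v * x                 # 8 s of phase between the Earth and turn-round clocks

print(2 * t_jim)                       # 25 s recorded by Jim for the round trip
print(2 * t_bill)                      # 15 s recorded by Bill for the round trip
print(2 * (t_jim_dilated + time_gap))  # 25 s: what Bill attributes to Jim's clocks
```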
It is sometimes dubiously asserted that the twin paradox is simply about the clocks on the twin that leaves Earth running slower than those of the twin that stays at home; it is then argued that biological processes contain clocks and that therefore the twin that travelled away ages less. This is not really true because the relativistic phase plays a major role in the twin paradox and leads to Bill travelling to a remote place that, for Bill, is at a later time than Jim when Bill and Jim pass each other. A more accurate explanation is that when we travel we travel in time as well as space.
Students have difficulty with the twin paradox because they believe that the observations of the twins are symmetrical. This is not the case. As can be seen from the illustration below either twin could determine whether they had made the turn or the other twin had made the turn.
The twin paradox can also be analysed without including any turnaround by Bill. Suppose that when Bill passes Mars he meets another traveller coming towards Earth. If the two travellers synchronise clocks as they pass each other they will obtain the same elapsed times for the whole journey to Mars and back as Bill would have recorded himself. This shows that the "paradox" is independent of any acceleration effects at the turnaround point.
Jim and Bill's view of the journey
Special relativity does not postulate that all motion is 'relative'; the postulates are that the laws of physics are the same in all inertial frames and there is a constant velocity called the "speed of light". Contrary to popular myth the twins do not observe events that are a mirror image of each other. Bill observes himself leave Jim then return, Jim sees Bill leave him then return. Bill does not observe Jim turn round, he observes himself making the turn.
The following illustrations cover various views of the journey. The most important moment in the journey is the point where Bill turns round. Notice how Bill's surface of simultaneity, that includes the events that he considers to be in the present moment, swings across Jim's worldline during the turn.
As Bill travels away from Jim he considers events that are already in Jim's past to be in his own present.
After the turn Bill considers events that are in Jim's future to be in his present (although the finite speed of light prevents Bill from observing Jim's future).
The swing in Bill's surface of simultaneity at the turn-round point leads to a 'time gap'. In our example Bill might surmise that Jim's clocks jump by 16 seconds on the turn.
Notice that the term "Jim's apparent path" is used in the illustration - as was seen earlier, Bill knows that he himself has left Jim and returned so he knows that Jim's apparent path is an artefact of his own motion. If we imagine that the twin paradox is symmetrical then the illustration above shows how we might imagine Bill would view the journey. But what happens, in our example, to the 16 seconds in the time gap, does it just disappear? The twin paradox is not symmetrical and Jim does not make a sudden turn after 4.5 seconds. Bill's actual observation and the fate of the information in the time gap can be probed by supposing that Jim emits a pulse of light several times a second. The result is shown in the illustration below.
Jim has clearly but one inertial frame but does Bill represent a single inertial frame? Suppose Bill was on a planet as he passed Jim and flew back to Jim in a rocket from the turn-round point: how many inertial frames would be involved? Is Bill's view a view from a single inertial frame?
Exercise: it is interesting to calculate the observations made by an observer who continues in the direction of the outward leg of Bill's journey - note that a velocity transformation will be needed to estimate Bill's inbound velocity as measured by this third observer.
Further reading:
Bohm, D. The Special Theory of Relativity (W. A. Benjamin, 1965).
D’Inverno, R. Introducing Einstein’s Relativity (Oxford University Press, 1992).
Eagle, A. A note on Dolby and Gull on radar time and the twin "paradox". American Journal of Physics. 2005, VOL 73; NUMB 10, pages 976-978. http://arxiv.org/PS_cache/physics/pdf/0411/0411008v2.pdf
The Pole-barn paradox
The length contraction in relativity is symmetrical. When two observers in relative motion pass each other they both measure a contraction of length.
(Note that Minkowski's metric involves the subtraction of displacements in time, so what appear to be the longest lengths on a 2D sheet of paper are often the shortest lengths in a (3+1)D reality).
The symmetry of length contraction leads to two questions. Firstly, how can a succession of events be observed as simultaneous events by another observer? This question led to the concept of de Broglie waves and quantum theory. Secondly, if a rod is simultaneously between two points in one frame how can it be observed as being successively between those points in another frame? For instance, if a pole enters a building at high speed how can one observer find it is fully within the building and another find that the two ends of the rod are opposed to the two ends of the building at successive times? What happens if the rod hits the end of the building? The second question is known as the "pole-barn paradox" or "ladder paradox".
The pole-barn paradox states the following: suppose a superhero, running at 0.75c, carries a horizontal pole 15 m long towards a barn 10 m long that has front and rear doors. When the runner and the pole are inside the barn, a ground observer closes and then opens both doors (by remote control) so that the runner and pole are momentarily captured inside the barn and then proceed to exit the barn from the back door.
One may be surprised that a 15 m pole can fit inside a 10 m barn. But the pole is in motion with respect to the ground observer, who measures the pole to be contracted to a length of about 9.9 m (check this using the length contraction equation).
The “paradox” arises when we consider the runner’s point of view. The runner sees the barn contracted to 6.6 m. Because the pole is in the rest frame of the runner, the runner measures it to have its proper length of 15 m. Now, how can our superhero make it safely through the barn?
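The contracted lengths quoted above follow directly from the length contraction formula; a quick Python check (illustrative only):

```python
import math

def contracted(proper_length, v_over_c):
    """Length of an object of the given proper length when it moves at v_over_c times c."""
    return proper_length * math.sqrt(1.0 - v_over_c ** 2)

print(contracted(15.0, 0.75))  # the pole as measured by the ground observer: about 9.9 m
print(contracted(10.0, 0.75))  # the barn as measured by the runner: about 6.6 m
```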
The resolution of the “paradox” lies in the relativity of simultaneity. The closing of the two doors is measured to be simultaneous by the ground observer. However, since the doors are at different positions, the runner says that they do not close simultaneously. The rear door closes and then opens first, allowing the leading edge of the pole to exit. The front door of the barn does not close until the trailing edge of the pole passes by.
What happens if the rear door is kept closed and made out of some impenetrable material? Can we or can we not trap the rod inside the barn by closing the front door while the whole rod is inside according to a ground observer? When the front end of the rod hits the rear door, information about this impact will travel backwards along the rod in the form of a shock wave. The information cannot travel faster than c, so the rear end of the rod will continue to travel forward at its original speed until the wave reaches it. Even if the shock wave is traveling at the speed of light, it will not reach the rear end of the rod until after the rear end has passed through the front door even in the runner's frame. Therefore the whole rod (albeit quite scrunched up) will be inside the barn when the front door closes. If it is infinitely elastic, it will end up compressed and "spring loaded" against the inside of the closed barn.
Evidence for length contraction: the field of an infinite straight current
Length contraction can be directly observed in the field of an infinitely straight current. This is shown in the illustration below.
Non-relativistic electromagnetism gives the electric field at a distance $r$ from an infinite line of charge with linear charge density $\lambda$ as:
$E = \frac{\lambda}{2 \pi \epsilon_0 r}$
and describes the magnetic field due to an infinitely long straight current using the Biot Savart law:
$B = \frac{\mu_0 I}{2 \pi r}$
Or, using the linear charge density of the current carriers (from $I = \lambda v$, where $\lambda$ is the charge per unit length and $v$ is the drift velocity of the carriers):
$B = \frac{\mu_0 \lambda v}{2 \pi r}$
Using relativity it is possible to show that the formula for the magnetic field given above can be derived using the relativistic effect of length contraction on the electric field and so what we call the "magnetic" field can be understood as relativistic observations of a single phenomenon. The relativistic calculation is given below.
If Jim is moving relative to the wire at the same velocity as the negative charges, he sees the spacing of the fixed positive charges in the wire contracted relative to Bill:
$l_+ = l \sqrt{1 - v^2/c^2}$
Bill should see the space between the charges that are moving along the wire to be contracted by the same amount but the requirement for electrical neutrality means that the moving charges will be spread out to match those in the frame of the fixed charges in the wire.
This means that Jim sees the negative charges spread out so that:
$l_- = \frac{l}{\sqrt{1 - v^2/c^2}}$
The net charge density observed by Jim is:
$\lambda = \frac{q}{l_-} - \frac{q}{l_+}$
Substituting:
$\lambda = \frac{q}{l} ( \sqrt{1 - v^2/c^2} - \frac{1}{\sqrt{1 - v^2/c^2}})$
Using the binomial approximations $\sqrt{1 - v^2/c^2} \approx 1 - \frac{v^2}{2c^2}$ and $\frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2}$:
$\lambda \approx \frac{q}{l} (1 - \frac{v^2}{2c^2} - 1 - \frac{v^2}{2c^2})$
Therefore, taking the magnitude and allowing for the fact that the wire carries a net positive charge in Jim's frame (the positive charges being the ones fixed in the wire):
$\lambda \approx \frac{qv^2}{l c^2}$
The electric field at Jim's position due to this net charge density, writing $\lambda = q/l$ for the charge per unit length of either set of carriers in the wire frame, is given by:
$E = \frac{\lambda v^2}{2 \pi \epsilon_0 r c^2}$
The force due to the electrical field at Jim's position is given by $F = Eq$ which is:
$F = \frac{q \lambda v^2}{2 \pi \epsilon_0 r c^2}$
Now, from classical electromagnetism:
$c^2 = \frac{1}{\epsilon_0 \mu_0}$
So substituting this into $F = \frac{q \lambda v^2}{2 \pi \epsilon_0 r c^2}$:
(1) $F = \frac{q\mu_0 \lambda v^2}{2 \pi r}$
This is the formula for the relativistic electric force that is observed by Bill as a magnetic force. How does this compare with the non-relativistic calculation of the magnetic force? The force on a charge at Jim's position due to the magnetic field is, from the classical formula:
$F = Bqv$
Which from the Biot-Savart law is:
(2) $F = \frac{q\mu_0 \lambda v^2}{2 \pi r}$
which shows that the same formula applies for the relativistic excess electrical force experienced by Jim as the formula for the classical magnetic force.
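The agreement between expressions (1) and (2) can also be illustrated numerically. The Python sketch below uses hypothetical values for the charge density, drift velocity and distance (chosen only for illustration) and evaluates the relativistic excess electric force alongside the classical magnetic force.

```python
import math

MU0 = 4.0e-7 * math.pi     # permeability of free space (SI)
EPS0 = 8.8541878128e-12    # permittivity of free space (SI)
c = 1.0 / math.sqrt(EPS0 * MU0)

q = 1.602e-19   # charge of the test particle (C)
lam = 1.0e-5    # linear charge density of the current carriers (C/m), illustrative
v = 1.0e-3      # drift velocity (m/s), roughly a millimetre per second
r = 0.01        # distance from the wire (m)

# Relativistic excess electric force: F = q*lambda*v^2 / (2*pi*eps0*r*c^2)
f_electric = q * lam * v ** 2 / (2.0 * math.pi * EPS0 * r * c ** 2)

# Classical magnetic force: F = B*q*v with B = mu0*lambda*v / (2*pi*r)
f_magnetic = (MU0 * lam * v / (2.0 * math.pi * r)) * q * v

print(f_electric, f_magnetic)  # agree to rounding, about 3e-35 N with these values
```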
It can be seen that once the idea of space-time is understood the unification of the two fields is straightforward. Jim is moving relative to the wire at the same speed as the negatively charged current carriers so Jim only experiences an electric field. Bill is stationary relative to the wire and observes that the charges in the wire are balanced whereas Jim observes an imbalance of charge. Bill assigns the attraction between Jim and the current carriers to a "magnetic field".
It is important to notice that, in common with the explanation of length contraction given above, the events that constitute the stream of negative charges for Jim are not the same events as constitute the stream of negative charges for Bill. Bill and Jim's negative charges occupy different moments in time.
Incidentally, the drift velocity of electrons in a wire is only about a millimetre per second, but a huge amount of charge is available in a wire (see link below).
Further reading:
Purcell, E. M. Electricity and Magnetism. Berkeley Physics Course. Vol. 2. 2nd ed. New York, NY: McGraw-Hill. 1984. ISBN: 0070049084.
http://hyperphysics.phy-astr.gsu.edu/hbase/electric/ohmmic.html
http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/releng.html
De Broglie waves
De Broglie noticed that the differing three dimensional sections of the universe would cause oscillations in the rest frame of an observer to appear as wave trains in the rest frame of observers who are moving.
He combined this insight with Einstein's ideas on the quantisation of energy to create the foundations of quantum theory. De Broglie's insight is also a round-about proof of the description of length contraction given above - observers in relative motion have differing three dimensional slices of a four dimensional universe. The existence of matter waves is direct experimental evidence of the relativity of simultaneity.
Further reading: de Broglie, L. (1925) On the theory of quanta. A translation of: Recherches sur la théorie des quanta (Ann. de Phys., 10e série, t. III, Janvier-Février 1925) by A. F. Kracklauer. http://replay.web.archive.org/20090509012910/http://www.ensmp.fr/aflb/LDB-oeuvres/De_Broglie_Kracklauer.pdf
Bell's spaceship paradox
Bell devised a thought experiment called the "Spaceship Paradox" to enquire whether length contraction involved a force and whether this contraction was a contraction of space. In the Spaceship Paradox two spaceships are connected by a thin, stiff string and are both equally and linearly accelerated to a velocity $v$ relative to the ground, at which point, in the special relativity version of the paradox, the acceleration ceases. The acceleration on both spaceships is arranged to be equal according to ground observers so, according to observers on the ground, the spaceships will stay the same distance apart. It is asked whether the string would break.
It is useful when considering this problem to investigate what happens to a single spaceship. If a spaceship that has rear thrusters is accelerated linearly, according to ground observers, to a velocity $v$ then the ground observers will observe it to have contracted in the direction of motion. The acceleration experienced by the front of the spaceship will have been slightly less than the acceleration experienced by the rear of the spaceship during contraction and then would suddenly reach a high value, equalising the front and rear velocities, once the rear acceleration and increasing contraction had ceased. From the ground it would be observed that overall the acceleration at the rear could be linear but the acceleration at the front would be non-linear.
In Bell's thought experiment both spaceships are artificially constrained to have constant acceleration, according to the ground observers, until the acceleration ceases. Sudden adjustments are not allowed. Furthermore no difference between the accelerations at the front and rear of the assembly are permitted so any tendency towards contraction would need to be borne as tension and extension in the string.
The most interesting part of the paradox is what happens to the space between the ships. From the ground the spaceships will stay the same distance apart (the experiment is arranged to achieve this) whilst according to observers on the spaceships they will appear to become increasingly separated. This implies that acceleration is not invariant between reference frames (see Part II) and the force applied to the spaceships will indeed be affected by the difference in separation of the ships observed by each frame.
The section on the nature of length contraction above shows that as the string changes velocity the observers on the ground observe a changing set of events that compose the string. These new events define a string that is shorter than the original. This means that the string will indeed attempt to contract as observed from the ground and will be drawn out under tension as observed from the spaceships. If the string were unable to bear the extension and tension in the moving frame or the tension in the rest frame it would break.
Another interesting aspect of Bell's Spaceship Paradox is that in the inertial frames of the ships, owing to the relativity of simultaneity, the lead spaceship will always be moving slightly faster than the rear spaceship so the spaceship-string system does not form a true inertial frame of reference until the acceleration ceases in the frames of reference of both ships. The asynchrony of the cessation of acceleration shows that the lead ship reaches the final velocity before the rear ship in the frame of reference of either ship. However, this time difference is very slight (less than the time taken for an influence to travel down the string at the speed of light $x/c > vx/c^2$).
It is necessary at this stage to give a warning about extrapolating special relativity into the domain of general relativity (GR). SR cannot be applied with confidence to accelerating systems which is why the comments above have been confined to qualitative observations.
Further reading
Bell, J. S. (1976). Speakable and unspeakable in quantum mechanics. Cambridge University Press 1987 ISBN 0-521-52338-9
Hsu, J-P and Suzuki, N. (2005) Extended Lorentz Transformations for Accelerated Frames and the Solution of the “Two-Spaceship Paradox” AAPPS Bulletin October 2005 p.17 http://www.aapps.org/archive/bulletin/vol15/15-5/15_5_p17p21%7F.pdf
Matsuda, T. and Kinoshita, A. (2004). A Paradox of Two Space Ships in Special Relativity. AAPPS Bulletin February 2004 p3. http://www.aapps.org/archive/bulletin/vol14/14_1/14_1_p03p07.pdf
The transverse doppler effect
The existence of time dilation means that the frequency of light emitted by a moving source is observed to be red shifted when the light is received from a direction perpendicular to the source's motion. The transverse Doppler effect is given by:
$\nu = \nu^{\prime} \sqrt{ 1 - \frac{v^2}{c^2}}$
Where $\nu$ is the observed frequency and $\nu^{\prime}$ is the frequency if the source were stationary relative to the observer (the proper frequency).
This effect was first confirmed by Ives and Stilwell in 1938. The transverse Doppler effect is a purely relativistic effect and has been used as direct evidence that time dilation occurs.
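A minimal Python sketch of the formula (the proper frequency and the beam speed are purely illustrative values):

```python
import math

def transverse_doppler(nu_proper, v_over_c):
    """Observed frequency nu = nu' * sqrt(1 - v^2/c^2)."""
    return nu_proper * math.sqrt(1.0 - v_over_c ** 2)

nu_proper = 1.0e15   # an illustrative proper frequency (Hz)
v_over_c = 0.005     # an illustrative ion-beam speed as a fraction of c

nu = transverse_doppler(nu_proper, v_over_c)
print((nu_proper - nu) / nu_proper)  # fractional redshift of about 1.25e-5
```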
Relativistic transformation of angles
If a rod makes an angle with its direction of motion towards or away from an observer, the component of its length in the direction of motion will be contracted. This means that observed angles are also transformed during changes of frames of reference. Assuming that the motion occurs along the x-axis, suppose the rod has a proper length (rest length) of $L^{\prime}$ metres and makes an angle of $\theta^{\prime}$ degrees with the x'-axis in its rest frame. The tangent of the angle made with the axes is:
Tangent in rest frame of rod = $\tan \theta^{\prime} = \frac{L^{\prime}_y}{L^{\prime}_x}$
Tangent in observer's frame = $\tan \theta = \frac{L_y}{L_x}$
Therefore:
$\frac {\tan \theta}{\tan \theta^{\prime}} = \frac {L_y L^{\prime}_x} {L^{\prime}_y L_x}$
But $L_x = L^{\prime}_x \sqrt {1 - v^2/c^2}$
And $L_y = L^{\prime}_y$
So
$\tan \theta =\frac {\tan \theta^{\prime}} {\sqrt {1 - v^2/c^2}}$
Showing that angles with the direction of motion are observed to increase with velocity.
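A short Python sketch of the angle transformation (the angle and speed are illustrative):

```python
import math

def observed_angle(theta_rest_deg, v_over_c):
    """Angle the rod makes with its direction of motion, as seen by the observer.

    Uses tan(theta) = tan(theta') / sqrt(1 - v^2/c^2).
    """
    t = math.tan(math.radians(theta_rest_deg)) / math.sqrt(1.0 - v_over_c ** 2)
    return math.degrees(math.atan(t))

print(observed_angle(45.0, 0.8))  # about 59 degrees: the angle with the motion increases
```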
The angle made by a moving object with the x-axis also involves a transformation of velocities to calculate the correct angle of incidence.
Addition of velocities
How can two observers, moving at v m/sec relative to each other, compare their observations of the velocity of a third object?
Suppose one of the observers measures the velocity of the object as $u^'$ where:
$u^' = \frac{x^'}{t^'}$
The coordinates $x^'$ and $t^'$ are given by the Lorentz transformations:
$x^' = \frac{x - vt}{\sqrt{(1 - v^2/c^2)}}$
and
$t^' = \frac{t - (v/c^2)x}{\sqrt{(1 - v^2/c^2)}}$
but
$x^' = u^' t^'$
so:
$\frac{x - vt}{\sqrt{(1 - v^2/c^2)}} = u^' \frac{t - (v/c^2)x}{\sqrt{(1 - v^2/c^2)}}$
and hence:
$x - vt = u^' ( t - vx/c^2)$
Notice the role of the phase term $vx/c^2$. The equation can be rearranged as:
$x = \frac{(u^' + v)}{(1 + u^'v/c^2)} t$
given that $x = u t$:
$u = \frac{(u^' + v)}{(1 + u^'v/c^2)}$
This is known as the relativistic velocity addition theorem, it applies to velocities parallel to the direction of mutual motion.
The existence of time dilation means that even when objects are moving perpendicular to the direction of motion there is a discrepancy between the velocities reported for an object by observers who are moving relative to each other. If there is any component of velocity in the x direction (${u_x}$, ${{u^'}_x}$) then the phase affects time measurement and hence the velocities perpendicular to the x-axis. The table below summarises the relativistic addition of velocities in the various directions in space.
${u^'}_x = \frac{(u_x - v)}{(1 - u_x v/c^2)}$ $u_x = \frac{({u^'}_x + v)}{(1 + {u^'}_x v/c^2)}$
${u^'}_y = \frac{u_y \sqrt{1 - v^2/c^2}}{(1 - u_x v/c^2)}$ $u_y = \frac{{u^'}_y \sqrt{1 - v^2/c^2}}{(1 + {u^'}_x v/c^2)}$
${u^'}_z = \frac{u_z \sqrt{1 - v^2/c^2}}{(1 - u_x v/c^2)}$ $u_z = \frac{{u^'}_z \sqrt{1 - v^2/c^2}}{(1 + {u^'}_x v/c^2)}$
Notice that for an observer in another reference frame the relativistic combination of two velocities that are each no greater than the speed of light can never exceed the speed of light. This means that the speed of light is the maximum possible velocity in any frame of reference.
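A short Python sketch of the parallel addition theorem (working in units of c; the chosen speeds are illustrative) shows that combining two speeds below c never reaches c:

```python
def add_parallel(u_prime, v, c=1.0):
    """Relativistic addition of velocities parallel to the relative motion."""
    return (u_prime + v) / (1.0 + u_prime * v / c ** 2)

print(add_parallel(0.5, 0.5))  # 0.8, not 1.0
print(add_parallel(0.9, 0.9))  # about 0.9945, still below c
print(add_parallel(1.0, 0.9))  # exactly 1.0: anything moving at c is observed at c
```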
Dynamics
Introduction
The way that the velocity of a particle can differ between observers who are moving relative to each other means that momentum needs to be redefined as a result of relativity theory.
The illustration below shows a typical collision of two particles. In the right hand frame the collision is observed from the viewpoint of someone moving at the same velocity as one of the particles, in the left hand frame it is observed by someone moving at a velocity that is intermediate between those of the particles.
If momentum is redefined then all the variables that depend on it, such as force (the rate of change of momentum) and energy, become redefined, and relativity leads to an entirely new physics. The new physics has an effect at the ordinary level of experience through the relation $K = \gamma m c^2 - m c^2\,$: it is the tiny deviations of gamma from unity that are expressed as everyday kinetic energy, so that the whole of physics is related to "relativistic" reasoning rather than Newton's empirical ideas.
Momentum
In physics momentum is conserved within a closed system, the law of conservation of momentum applies. Consider the special case of identical particles colliding symmetrically as illustrated below:
The momentum change by the red ball is:
$2m\mathbf{u_{yR}}$
The momentum change by the blue ball is:
$-2m\mathbf{u_{yB}}$
The sum of the momentum changes is zero, so the Newtonian conservation of momentum law is demonstrated:
$2m\mathbf{u_{yR}}-2m\mathbf{u_{yB}}=\mathbf{0}$
Notice that this result depends upon the y components of the velocities being equal, that is, $\mathbf{u_{yR}}=\mathbf{u_{yB}}$.
The relativistic case is rather different. The collision is illustrated below, the left hand frame shows the collision as it appears for one observer and the right hand frame shows exactly the same collision as it appears for another observer moving at the same velocity as the blue ball:
The configuration shown above has been simplified because one frame contains a stationary blue ball (ie: $u_{xB}=0$) and the velocities are chosen so that the vertical velocity of the red ball is exactly reversed after the collision ie: $u_{yR}^' = -u_{yB}^'$. Both frames show exactly the same event, it is only the observers who differ between frames. The relativistic velocity transformations between frames are:
$u_{yR}^' = \frac{u_{yR} \sqrt{1 - v^2/c^2}}{1 - u_{xR}v/c^2}$
$u_{yB}^' =\frac{u_{yB} \sqrt{1 - v^2/c^2}}{1 - u_{xB}v/c^2}= u_{yB} \sqrt{1 - v^2/c^2}$ given that $u_{xB}=0\,$.
Suppose that the y components are equal in one frame, in Newtonian physics they will also be equal in the other frame. However, in relativity, if the y components are equal in one frame they are not necessarily equal in the other frame (time dilation is not directional so perpendicular velocities differ between the observers). For instance if $u_{yR}^' = u_{yB}^'$ then:
$u_{yB} = \frac{u_{yR}}{1 - u_{xR}v/c^2}$
So if $u_{yR}^' = u_{yB}^'$ then in this case $u_{yR} \ne u_{yB}$.
If the mass were constant between collisions and between frames then although $2 m\mathbf{u_{yR}^'} = 2m\mathbf{u_{yB}^'}$ it is found that:
$2 m\mathbf{u_{yR}} \ne 2m\mathbf{u_{yB}}$
So momentum defined as mass times velocity is not conserved in a collision when the collision is described in frames moving relative to each other. Notice that the discrepancy is very small if $u_{xR}$ and $v$ are small.
To preserve the principle of momentum conservation in all inertial reference frames, the definition of momentum has to be changed. The new definition must reduce to the Newtonian expression when objects move at speeds much smaller than the speed of light, so as to recover the Newtonian formulas.
The velocities in the y direction are related by the following equation when the observer is travelling at the same velocity as the blue ball ie: when $u_{xB} = 0\,$:
$u_{yB} = \frac{u_{yR}}{1 - u_{xR}v/c^2}$
If we write $m_B$ for the mass of the blue ball and $m_R$ for the mass of the red ball as observed from the frame of the blue ball then, if the principle of relativity applies:
$2 m_R u_{yR} = 2 m_B u_{yB} \,$
So:
$m_R = m_B \frac{u_{yB}}{u_{yR}}$
But:
$u_{yB} = \frac{u_{yR}}{1 - u_{xR}v/c^2}$
Therefore:
$m_R = \frac{m_B}{1 - u_{xR}v/c^2}$
This means that, if the principle of relativity is to apply then the mass must change by the amount shown in the equation above for the conservation of momentum law to be true.
The reference frame was chosen so that $u_{yR}^' = -u_{yB}^'$ and hence $u_{xR}^' = v$. This allows $v$ to be determined in terms of $u_{xR}\,$:
$u_{xR}^' = \frac{u_{xR} - v}{1 - u_{xR}v/c^2} = v$
and hence:
$v = \frac{c^2}{u_{xR}}(1 - \sqrt{1-u_{xR}^2/c^2})$
So substituting for $v$ in $m_R = \frac{m_B}{1 - u_{xR}v/c^2}$:
$m_R = \frac{m_B}{\sqrt{1 - u_{xR}^2/c^2}}$
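The final substitution can be checked numerically; the Python sketch below (with an illustrative value of $u_{xR}$, in units of c) computes $v$ from the solution above and confirms that $1 - u_{xR}v/c^2$ equals $\sqrt{1 - u_{xR}^2/c^2}$.

```python
import math

c = 1.0
u_xR = 0.6   # an illustrative x-velocity of the red ball, in units of c

v = (c ** 2 / u_xR) * (1.0 - math.sqrt(1.0 - u_xR ** 2 / c ** 2))

lhs = 1.0 - u_xR * v / c ** 2              # denominator in m_R = m_B / (1 - u_xR*v/c^2)
rhs = math.sqrt(1.0 - u_xR ** 2 / c ** 2)  # denominator in m_R = m_B / sqrt(1 - u_xR^2/c^2)
print(lhs, rhs)  # both 0.8: the two expressions for the mass ratio agree
```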
The blue ball is at rest so its mass is sometimes known as its rest mass, and is given the symbol $m$. As the balls were identical at the start of the boost the mass of the red ball is the mass that a blue ball would have if it were in motion relative to an observer; this mass is sometimes known as the relativistic mass symbolised by $M$. These terms are now infrequently used in modern physics, as will be explained at the end of this section. The discussion given above was related to the relative motions of the blue and red balls, as a result $u_{xR}$ corresponds to the speed of the moving ball relative to an observer who is stationary with respect to the blue ball. These considerations mean that the relativistic mass is given by:
$M = \frac{m}{\sqrt{1 - u^2/c^2}}$
The relativistic momentum is given by the product of the relativistic mass and the velocity $\mathbf{p}=M\mathbf{u}$.
The overall expression for momentum in terms of rest mass is:
$\mathbf{p} = \frac{m\mathbf{u}}{\sqrt{1-u^2/c^2}}$
and the components of the momentum are:
$p_x = \frac{mu_x}{\sqrt{1-u^2/c^2}}$
$p_y = \frac{mu_y}{\sqrt{1-u^2/c^2}}$
$p_z = \frac{mu_z}{\sqrt{1-u^2/c^2}}$
So the components of the momentum depend upon the appropriate velocity component and the speed.
Since the factor with the square root is cumbersome to write, the following abbreviation is often used, called the Lorentz gamma factor:
$\gamma = \frac{1}{\sqrt{1-u^2/c^2}}$
The expression for the momentum then reads $\mathbf{p} = m \gamma \mathbf{u}$.
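As a rough numerical illustration, a few lines of Python (using an illustrative rest mass of 1 kg, a handful of example speeds, and the rounded value $c = 3 \times 10^8$ m/s used elsewhere in this text) show how the relativistic momentum $p = m\gamma u$ departs from the Newtonian value $mu$ as the speed grows:

```python
import math

C = 3.0e8  # speed of light in m/s (rounded value used in this text)

def gamma(u):
    """Lorentz factor for speed u."""
    return 1.0 / math.sqrt(1.0 - (u / C) ** 2)

def relativistic_momentum(m, u):
    """p = gamma * m * u for a particle of rest mass m moving at speed u."""
    return gamma(u) * m * u

m = 1.0  # example rest mass in kg (arbitrary)
for u in (3.0e3, 3.0e7, 0.9 * C):
    p_rel = relativistic_momentum(m, u)
    p_newton = m * u
    print(f"u = {u:.3e} m/s  p_rel = {p_rel:.6e}  p_newton = {p_newton:.6e}  ratio = {p_rel / p_newton:.6f}")
```

The printed ratio is just the Lorentz factor: it is indistinguishable from 1 at everyday speeds and grows without bound as $u$ approaches $c$.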
It can be seen from the discussion above that we can write the momentum of an object moving with velocity $\mathbf{u}$ as the product of a function $M(u)$ of the speed $u$ and the velocity $\mathbf{u}$:
$M(u) \mathbf{u}$
The function $M(u)$ must reduce to the object's mass $m$ at small speeds, in particular when the object is at rest $M(0) = m$.
There is a debate about the usage of the term "mass" in relativity theory. If inertial mass is defined in terms of momentum then it does indeed vary as $M = \gamma m$ for a single particle that has rest mass, furthermore, as will be shown below the energy of a particle that has a rest mass is given by $E=Mc^2$. Prior to the debate about nomenclature the function $M(u)$, or the relation $M = \gamma m$, used to be called 'relativistic mass', and its value in the frame of the particle was referred to as the 'rest mass' or 'invariant mass'. The relativistic mass, $M = \gamma m$, would increase with velocity. Both terms are now largely obsolete: the 'rest mass' is today simply called the mass, and the 'relativistic mass' is often no longer used since, as will be seen in the discussion of energy below, it is identical to the energy but for the units.
Force
Newton's second law states that the total force acting on a particle equals the rate of change of its momentum. The same form of Newton's second law holds in relativistic mechanics. The relativistic 3-force is given by:
$\mathbf{f} = d\mathbf{p}/dt$
If the relativistic mass is used:
$\frac{d\mathbf{p}}{dt}= \frac{d(M\mathbf{u})}{dt}$
By Leibniz's law where $d(xy)=xdy+ydx$:
$\mathbf{f} = \frac{d\mathbf{p}}{dt}= M\frac{d\mathbf{u}}{dt}+\mathbf{u}\frac{dM}{dt}$
This equation for force will be used below to derive relativistic expressions for the energy of a particle in terms of the old concept of "relativistic mass".
The relativistic force can also be written in terms of acceleration. Newton's second law can be written in the familiar form
$\mathbf{F} = m \mathbf{a}$
where $\mathbf{a} = d\mathbf{v}/dt$ is the acceleration; here $m$ is not the relativistic mass but the invariant mass.
In relativistic mechanics, momentum is $\mathbf{p} = m \gamma \mathbf{v}$
again m being the invariant mass and the force is given by $\mathbf{F} = \frac{d\mathbf{p}}{dt} = m \frac{d(\gamma \mathbf{v})}{dt}$
This form of force is used in the derivation of the expression for energy without relying on relativistic mass.
It will be seen in the second section of this book that Newton's second law in terms of acceleration is given by:
$\mathbf{F} = m \gamma \mathbf{a} + \frac{m \gamma^3 v}{c^2} \frac{dv}{dt} \mathbf{v}$
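In one dimension, where $\mathbf{v}$ and $\mathbf{a}$ are parallel, this reduces to $F = m\gamma a + m\gamma^{3}(v^{2}/c^{2})a = m\gamma^{3}a$. A short numerical sketch (using an arbitrary example velocity profile $v(t)=0.5\,c\tanh t$ and an illustrative rest mass of 1 kg) confirms that a finite-difference derivative of the momentum agrees with this expression:

```python
import math

C = 3.0e8  # speed of light in m/s (rounded value used in this text)

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):
    # relativistic momentum p = m * gamma(v) * v
    return m * gamma(v) * v

def v_of_t(t):
    # arbitrary smooth example velocity profile, always below c
    return 0.5 * C * math.tanh(t)

m = 1.0    # example rest mass in kg
t = 0.3    # example instant in seconds
dt = 1e-6  # step for the central finite differences

v = v_of_t(t)
a = (v_of_t(t + dt) - v_of_t(t - dt)) / (2 * dt)                              # numerical acceleration
dp_dt = (momentum(m, v_of_t(t + dt)) - momentum(m, v_of_t(t - dt))) / (2 * dt)  # numerical dp/dt

g = gamma(v)
formula = m * g * a + m * g**3 * (v**2 / C**2) * a   # the expression above, restricted to one dimension

print(dp_dt, formula)  # the two values agree to several significant figures
```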
Energy
The debate over the use of the concept "relativistic mass" means that modern physics courses may forbid the use of this concept in the derivation of energy. The newer derivation of energy without using relativistic mass is given in the first section and the older derivation using relativistic mass is given in the second section. The two derivations can be compared to gain insight into the debate about mass, but a knowledge of 4-vectors is really required to discuss the problem in depth. In principle the first derivation is the more direct mathematically, because the "relativistic mass" is itself given by $M = \frac{m}{\sqrt{1 - u^2/c^2}}$, which already involves the invariant mass $m$ and the constant $c$.
Derivation of relativistic energy using the relativistic momentum
In the following modern derivation, $m$ means the invariant mass - what used to be called the "rest mass". Energy is defined as the work done in moving a body from one place to another. We will make use of the relativistic momentum $p=\gamma mv$. Energy is given from:
$dE = \mathbf{f}d\mathbf{x}$
so, over the whole path:
$E = \int_{0}^{x} \mathbf{f}d\mathbf{x}$
Kinetic energy (K) is the energy used to move a body from a velocity of 0 to a velocity $\mathbf{u}$. Restricting the motion to one dimension:
$K = \int_{u=0}^{u=u} \mathbf{f} dx$
Using the relativistic 3-force:
$K = \int_{u=0}^{u=u} \frac{d(m\gamma u)}{dt}dx=\int_{u=0}^{u=u}m \frac{d(\gamma u)}{dt}dx= \int_{u=0}^{u=u} m d(\gamma u)\frac{dx}{dt}$
substituting for $d(\gamma u)$ and using $dx/dt=u$:
$K = \int_{u=0}^{u=u} m (\gamma du + ud\gamma) u$
Which gives:
$K = \int_{u=0}^{u=u} m (u\gamma du + u^2 d\gamma)$
The Lorentz factor $\gamma$ is given by:
$\gamma = \frac{1}{\sqrt{1 - u^2/c^2}}$
meaning that :
$d\gamma = \frac{u}{c^2}\gamma^3du$
$du = \frac{c^2}{u\gamma^3}d\gamma$
So that
$K = \int_{\gamma=1}^{\gamma=\gamma} m (u\gamma \frac{c^2}{u\gamma^3}d\gamma + u^2 d\gamma) = \int_{\gamma=1}^{\gamma=\gamma} m (\frac{c^2}{\gamma^2} + u^2) d\gamma = \int_{\gamma=1}^{\gamma=\gamma} m c^2 d\gamma$
Alternatively, we can use the fact that:
$\gamma^2c^2 - \gamma^2u^2 = c^2\,$
Differentiating:
$2\gamma c^2\,d\gamma - 2\gamma^2 u\,du - 2u^2\gamma\,d\gamma = 0\,$
So, rearranging:
$\gamma u du + u^2 d\gamma = c^2 d\gamma\,$
In which case:
$K = \int_{u=0}^{u=u} m (u\gamma du + u^2 d\gamma) = \int_{u=0}^{u=u} m c^2 d\gamma \,$
As $u$ goes from 0 to $u$, the Lorentz factor $\gamma$ goes from 1 to $\gamma$, so:
$K = m c^2 \int_{\gamma=1}^{\gamma=\gamma} d\gamma \,$
and hence:
$K = \gamma m c^2 - m c^2\,$
The amount $\gamma mc^2$ is known as the total energy of the particle. The amount $m c^2$ is known as the rest energy of the particle. If the total energy of the particle is given the symbol $E$:
$E = \gamma m c^2 = mc^2 + K \,$
So it can be seen that $m c^2$ is the energy of a mass that is stationary. This energy is known as mass energy.
The Newtonian approximation for kinetic energy can be derived by using the binomial theorem to expand $\gamma = (1-u^2/c^2)^{-\frac{1}{2}}$.
The binomial expansion is:
$(a + x)^n = a^n + na^{n-1}x + \frac{n(n-1)}{2!}a^{n-2}x^2 ....$
So expanding $(1-u^2/c^2)^{-\frac{1}{2}}$:
$K = \frac{1}{2} m u^2 + \frac{3m u^4}{8c^2} + \frac{5m u^6}{16c^4} + ...$
So if $u$ is much less than $c$:
$K = \frac{1}{2} m u^2$
which is the Newtonian approximation for low velocities.
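A short numerical comparison (with an illustrative rest mass of 1 kg) shows how closely $\frac{1}{2}mu^2$ tracks the exact kinetic energy $K = (\gamma - 1)mc^2$ at low speed, and how far it falls behind as $u$ approaches $c$:

```python
import math

C = 3.0e8  # speed of light in m/s (rounded value used in this text)
m = 1.0    # example rest mass in kg

def kinetic_exact(u):
    # K = (gamma - 1) m c^2
    gamma = 1.0 / math.sqrt(1.0 - (u / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

def kinetic_newton(u):
    # Newtonian approximation K = (1/2) m u^2
    return 0.5 * m * u ** 2

for frac in (0.001, 0.1, 0.5, 0.9):
    u = frac * C
    print(f"u = {frac:>5}c   K_exact = {kinetic_exact(u):.4e} J   K_newton = {kinetic_newton(u):.4e} J")
```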
Derivation of relativistic energy using the concept of relativistic mass
Energy is defined as the work done in moving a body from one place to another. Energy is given from:
$dE = \mathbf{F}d\mathbf{x}$
so, over the whole path:
$E = \int_{0}^{x} \mathbf{F}d\mathbf{x}$
Kinetic energy (K) is the energy used to move a body from a velocity of 0 to a velocity $u$. So:
$K = \int_{u=0}^{u=u} F dx$
Using the relativistic force:
$K = \int_{u=0}^{u=u} \frac{d(Mu)}{dt}dx$
So:
$K = \int_{u=0}^{u=u} d(Mu)\frac{dx}{dt}$
substituting for $d(Mu)$ and using $dx/dt=u$:
$K = \int_{u=0}^{u=u} (Mdu + udM) u$
Which gives:
$K = \int_{u=0}^{u=u} (Mu du + u^2 dM)$
The relativistic mass is given by:
$M = \frac{m}{\sqrt{1 - u^2/c^2}}$
Which can be expanded as:
$M^2c^2 - M^2u^2 = m^2c^2$
Differentiating:
$2Mc^2\,dM - 2M^2u\,du - 2u^2M\,dM = 0$
So, rearranging:
$Mu du + u^2 dM = c^2 dM$
In which case:
$K = \int_{u=0}^{u=u} (Mu du + u^2 dM)$
is simplified to:
$K = \int_{u=0}^{u=u} c^2 dM$
But the mass goes from $m$ to $M$ so:
$K = c^2 \int_{M=m}^{M=M} dM$
and hence:
$K = Mc^2 - mc^2$
The amount $Mc^2$ is known as the total energy of the particle. The amount $mc^2$ is known as the rest energy of the particle. If the total energy of the particle is given the symbol $E$:
$E = Mc^2 = mc^2 + K$
So it can be seen that $mc^2$ is the energy of a mass that is stationary. This energy is known as mass energy and is the origin of the famous formula $E=mc^2$ that is iconic of the nuclear age.
The Newtonian approximation for kinetic energy can be derived by substituting the expression for the relativistic mass in terms of the rest mass ie:
$M = \frac{m}{\sqrt{1 - u^2/c^2}}$
and:
$K = Mc^2 - mc^2$
So:
$K = \frac{mc^2}{\sqrt{1-u^2/c^2}} - mc^2$
ie:
$K = mc^2 ((1-u^2/c^2)^{-\frac{1}{2}} -1)$
The binomial theorem can be used to expand $(1-u^2/c^2)^{-\frac{1}{2}}$:
The binomial theorem is:
$(a + x)^n = a^n + na^{n-1}x + \frac{n(n-1)}{2!}a^{n-2}x^2 ....$
So expanding $(1-u^2/c^2)^{-\frac{1}{2}}$:
$K = \frac{1}{2} mu^2 + \frac{3mu^4}{8c^2} + \frac{5mu^6}{16c^4} + ...$
So if $u$ is much less than $c$:
$K = \frac{1}{2} mu^2$
Which is the Newtonian approximation for low velocities.
Nuclear Energy
When protons and neutrons (nucleons) combine to form elements, the combination of particles tends to be in a lower energy state than the free neutrons and protons. Iron has the lowest energy per nucleon, and elements above and below iron in the scale of atomic masses tend to have higher energies per nucleon. This decrease in energy as neutrons and protons bind together is known as the binding energy. The atomic masses of elements are slightly different from those calculated from their constituent particles, and this difference in mass energy, calculated from $E=mc^2$, is almost exactly equal to the binding energy.
The binding energy can be released by converting elements with higher masses per nucleon to those with lower masses per nucleon. This can be done by either splitting heavy elements such as uranium into lighter elements such as barium and krypton or by joining together light elements such as hydrogen into heavier elements such as deuterium. If atoms are split the process is known as nuclear fission and if atoms are joined the process is known as nuclear fusion. Atoms that are lighter than iron can be fused to release energy and those heavier than iron can be split to release energy.
When a hydrogen nucleus (a proton) and a neutron are combined to make deuterium, the energy released can be calculated as follows:
The mass of a proton is 1.00731 amu, the mass of a neutron is 1.00867 amu and the mass of a deuterium nucleus is 2.0136 amu. The difference in mass between a deuterium nucleus and its components is 0.00238 amu. The energy of this mass difference is:
$E = mc^2 = 1.66 \times 10^{-27} \times 0.00238 \times (3 \times 10^8)^2$
So the energy released is $3.57 \times 10^{-13}$ joules or about $2 \times 10^{11}$ joules per gram of protons (ionised hydrogen).
(Assuming 1 amu = $1.66 \times 10^{-27}$ Kg, Avogadro's number = $6 \times 10^{23}$ and the speed of light is $3 \times 10^8$ metres per second)
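The arithmetic can be reproduced in a few lines of Python using the same rounded constants:

```python
amu = 1.66e-27          # kg per atomic mass unit (rounded value used in the text)
c = 3.0e8               # speed of light in m/s
avogadro = 6.0e23

mass_defect_amu = 1.00731 + 1.00867 - 2.0136   # proton + neutron - deuteron, about 0.00238 amu
energy_per_nucleus = mass_defect_amu * amu * c ** 2

print(energy_per_nucleus)              # about 3.6e-13 joules per deuterium nucleus formed
print(energy_per_nucleus * avogadro)   # about 2e11 joules per gram of protons (one mole of protons is roughly 1 g)
```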
Present day nuclear reactors use a process called nuclear fission. Rods of uranium emit neutrons, which combine with the uranium in the rod to produce uranium isotopes such as 236U; these rapidly decay into smaller nuclei such as barium and krypton plus three neutrons, which can cause further generation of 236U and further decay. The fact that each neutron can cause the generation of three more neutrons means that a self sustaining or chain reaction can occur. The generation of energy results from the equivalence of mass and energy; the decay products, barium and krypton, have a lower mass than the original 236U, the missing mass being released as 177 MeV of radiation. The nuclear equation for the decay of 236U is written as follows:
$^{236}_{92}U \rightarrow ^{144}_{56}Ba + ^{89}_{36}Kr + 3n + 177 MeV$
Nuclear explosion
If a large amount of the uranium isotope 235U (the critical mass) is confined the chain reaction can get out of control and almost instantly release a large amount of energy. A device that confines a critical mass of uranium is known as an atomic bomb or A-bomb. A bomb based on the fusion of deuterium atoms is known as a thermonuclear bomb, hydrogen bomb or H-bomb.
Aether
Introduction
Many students confuse Relativity Theory with a theory about the propagation of light. According to modern Relativity Theory the constancy of the speed of light is a consequence of the geometry of spacetime rather than something specifically due to the properties of photons; but the statement "the speed of light is constant" often distracts the student into a consideration of light propagation. This confusion is amplified by the importance assigned to interferometry experiments, such as the Michelson-Morley experiment, in most textbooks on Relativity Theory.
The history of theories of the propagation of light is an interesting topic in physics and was indeed important in the early days of Relativity Theory. In the seventeenth century two competing theories of light propagation were developed. Christiaan Huygens published a wave theory of light which was based on Huygens' principle, whereby every point in a wavelike disturbance can give rise to further disturbances that spread out spherically. In contrast Newton considered that the propagation of light was due to the passage of small particles or "corpuscles" from the source to the illuminated object. His theory is known as the corpuscular theory of light. Newton's theory was widely accepted until the nineteenth century.
In the early nineteenth century Thomas Young performed his Young's slits experiment and the interference pattern that occurred was explained in terms of diffraction due to the wave nature of light. The wave theory was accepted generally until the twentieth century, when quantum theory confirmed that light had a corpuscular nature and that Huygens' principle could not be applied.
The idea of light as a disturbance of some medium, or aether (US spelling: "ether"), that permeates the universe was problematic from its inception. The first problem that arose was that the speed of light did not change with the velocity of the observer. If light were indeed a disturbance of some stationary medium then, as the earth moves through the medium towards a light source, the speed of light should appear to increase. It was found, however, that the speed of light did not change as expected. Each experiment on the velocity of light required corrections to existing theory and led to a variety of subsidiary theories such as the "aether drag hypothesis". Ultimately it was experiments that were designed to investigate the properties of the aether that provided the first experimental evidence for Relativity Theory.
The aether drag hypothesis
The aether drag hypothesis was an early attempt to explain the way experiments such as Arago's experiment showed that the speed of light is constant. The aether drag hypothesis is now considered to be incorrect.
According to the aether drag hypothesis light propagates in a special medium, the aether, that remains attached to things as they move. If this is the case then, no matter how fast the earth moves around the sun or rotates on its axis, light on the surface of the earth would travel at a constant velocity.
Stellar Aberration. If a telescope is travelling at high speed, only light incident at a particular angle can avoid hitting the walls of the telescope tube.
The primary reason the aether drag hypothesis is considered invalid is because of the occurrence of stellar aberration. In stellar aberration the position of a star when viewed with a telescope swings each side of a central position by about 20.5 seconds of arc every six months. This amount of swing is the amount expected when considering the speed of earth's travel in its orbit. In 1871, George Biddell Airy demonstrated that stellar aberration occurs even when a telescope is filled with water. It seems that if the aether drag hypothesis were true then stellar aberration would not occur because the light would be travelling in the aether which would be moving along with the telescope.
The "train analogy" for the absence of aether drag.
If you visualize a bucket on a train about to enter a tunnel, and a drop of water drips from the tunnel entrance into the bucket at the very centre, the drop will not hit the centre at the bottom of the bucket. The bucket is the tube of a telescope, the drop is a photon and the train is the earth. If aether were dragged then the droplet would be travelling with the train when it is dropped and would hit the centre of the bucket at the bottom.
The amount of stellar aberration, α is given by:
$\tan(\alpha) = \frac{v \delta t}{c \delta t}$
So:
$\tan(\alpha) = v / c$
The speed at which the earth goes round the sun is v = 30 km/s and the speed of light is c = 300,000,000 m/s, which gives α ≈ 20.5 seconds of arc every six months. This amount of aberration is observed, and this contradicts the aether drag hypothesis.
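The quoted angle can be checked directly with the same rounded values:

```python
import math

v = 30_000.0   # orbital speed of the earth in m/s
c = 3.0e8      # speed of light in m/s

alpha_rad = math.atan(v / c)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0
print(alpha_arcsec)   # about 20.6 seconds of arc, matching the quoted 20.5"
```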
In 1818, Augustin Jean Fresnel introduced a modification to the aether drag hypothesis that only applies to the interface between media. This was accepted during much of the nineteenth century but has now been replaced by the special theory of relativity (see below).
The aether drag hypothesis is historically important because it was one of the reasons why Newton's corpuscular theory of light was replaced by the wave theory and it is used in early explanations of light propagation without relativity theory. It originated as a result of early attempts to measure the speed of light.
In 1810, François Arago realised that variations in the refractive index of a substance predicted by the corpuscular theory would provide a useful method for measuring the velocity of light. These predictions arose because the refractive index of a substance such as glass depends on the ratio of the velocities of light in air and in the glass. Arago attempted to measure the extent to which corpuscles of light would be refracted by a glass prism at the front of a telescope. He expected that there would be a range of different angles of refraction due to the variety of different velocities of the stars and the motion of the earth at different times of the day and year. Contrary to this expectation he found that there was no difference in refraction between stars, between times of day or between seasons. All Arago observed was ordinary stellar aberration.
In 1818 Fresnel examined Arago's results using a wave theory of light. He realised that even if light were transmitted as waves the refractive index of the glass-air interface should have varied as the glass moved through the aether to strike the incoming waves at different velocities when the earth rotated and the seasons changed.
Fresnel proposed that the glass prism would carry some of the aether along with it so that "...the aether is in excess inside the prism". He realised that the velocity of propagation of waves depends on the density of the medium so proposed that the velocity of light in the prism would need to be adjusted by an amount of 'drag'.
The velocity of light $v_n$ in the glass without any adjustment is given by:
$v_n = c / n$
The drag adjustment $v_d$ is given by:
$v_d = v (1 - \frac {\rho_e}{\rho_g})$
Where $\rho_e$ is the aether density in the environment, $\rho_g$ is the aether density in the glass and $v$ is the velocity of the prism with respect to the aether.
The factor $(1 - \frac {\rho_e}{\rho_g})$ can be written as $(1 - \frac{1}{n^2})$ because the refractive index, n, would be dependent on the density of the aether. This is known as the Fresnel drag coefficient.
The velocity of light in the glass is then given by:
$V = \frac {c}{n} + v (1 - \frac{1}{n^2})$
This correction was successful in explaining the null result of Arago's experiment. It introduces the concept of a largely stationary aether that is dragged by substances such as glass but not by air. Its success favoured the wave theory of light over the previous corpuscular theory.
The Fresnel drag coefficient was confirmed by an interferometer experiment performed by Fizeau. Water was passed at high speed along two glass tubes that formed the optical paths of the interferometer and it was found that the fringe shifts were as predicted by the drag coefficient.
The special theory of relativity predicts the result of the Fizeau experiment from the velocity addition theorem without any need for an aether.
If $V$ is the velocity of light relative to the Fizeau apparatus and $U$ is the velocity of light relative to the water and $v$ is the velocity of the water:
$U = \frac {c}{n}$
$V = \frac {c/n + v}{1 + v/nc}$
which, if v/c is small can be expanded using the binomial expansion to become:
$V = \frac {c}{n} + v (1 - \frac{1}{n^2})$
This is identical to Fresnel's equation.
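A quick numerical comparison, taking water with $n \approx 1.33$ and an example flow speed of 7 m/s (of the order used by Fizeau), shows how small the difference between the exact relativistic result and Fresnel's formula is:

```python
c = 3.0e8     # speed of light in m/s
n = 1.33      # refractive index of water (approximate)
v = 7.0       # example flow speed in m/s, of the order used in Fizeau's experiment

exact = (c / n + v) / (1.0 + v / (n * c))      # relativistic velocity addition
fresnel = c / n + v * (1.0 - 1.0 / n ** 2)     # Fresnel drag formula

print(exact, fresnel, exact - fresnel)   # the difference is of order v^2/(n c), far below anything measurable here
```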
It may appear as if Fresnel's analysis can be substituted for the relativistic approach, however, more recent work has shown that Fresnel's assumptions should lead to different amounts of aether drag for different frequencies of light and violate Snell's law (see Ferraro and Sforza (2005)).
The aether drag hypothesis was one of the arguments used in an attempt to explain the Michelson-Morley experiment before the widespread acceptance of the special theory of relativity.
The Fizeau experiment is consistent with relativity and approximately consistent with each individual body, such as prisms, lenses etc. dragging its own aether with it. This contradicts some modified versions of the aether drag hypothesis that argue that aether drag may happen on a global (or larger) scale and stellar aberration is merely transferred into the entrained "bubble" around the earth which then faithfully carries the modified angle of incidence directly to the observer.
The Michelson-Morley experiment
The Michelson-Morley experiment, one of the most important and famous experiments in the history of physics, was performed in 1887 by Albert Michelson and Edward Morley at what is now Case Western Reserve University, and is considered to be the first strong evidence against the theory of a luminiferous aether.
Physics theories of the late 19th century postulated that, just as water waves must have a medium to move across (water), and audible sound waves require a medium to move through (air), so also light waves require a medium, the "luminiferous aether". The speed of light being so great, designing an experiment to detect the presence and properties of this aether took considerable thought.
Measuring aether
A depiction of the concept of the “aether wind”.
Each year, the Earth travels a tremendous distance in its orbit around the sun, at a speed of around 30 km/second, over 100,000 km per hour. It was reasoned that the Earth would at all times be moving through the aether and producing a detectable "aether wind". At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analysing the effective wind at various different times, it should be possible to separate out components due to motion of the Earth relative to the Solar System from any due to the overall motion of that system.
The effect of the aether wind on light waves would be like the effect of wind on sound waves. Sound waves travel at a constant speed relative to the medium that they are travelling through (this varies depending on the pressure, temperature etc (see sound), but is typically around 340 m/s). So, if the speed of sound in our conditions is 340 m/s, when there is a 10 m/s wind relative to the ground, into the wind it will appear that sound is travelling at 330 m/s (340 - 10). Downwind, it will appear that sound is travelling at 350 m/s (340 + 10). Measuring the speed of sound compared to the ground in different directions will therefore enable us to calculate the speed of the air relative to the ground.
If the speed of the sound cannot be directly measured, an alternative method is to measure the time that the sound takes to bounce off a reflector and return to the origin. This is done parallel to the wind and perpendicular to it (since the direction of the wind is unknown beforehand, the time is simply measured for several different directions). The cumulative round-trip effect of the wind in the two orientations slightly favors the sound travelling at right angles to it. Similarly, the effect of an aether wind on a beam of light would be for the beam to take slightly longer to travel round-trip in the direction parallel to the “wind” than to travel the same round-trip distance at right angles to it.
“Slightly” is key, in that, over a distance such as a few meters, the difference in time for the two round trips would be only about a millionth of a millionth of a second. At this point the only truly accurate measurements of the speed of light were those carried out by Albert Abraham Michelson, which had resulted in measurements accurate to a few meters per second. While a stunning achievement in its own right, this was certainly not nearly enough accuracy to be able to detect the aether.
The experiments
Michelson, though, had already seen a solution to this problem. His design, later known as an interferometer, sent a single source of white light through a half-silvered mirror that was used to split it into two beams travelling at right angles to one another. After leaving the splitter, the beams travelled out to the ends of long arms where they were reflected back into the middle on small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference based on the length of the arms. Any slight change in the amount of time the beams spent in transit would then be observed as a shift in the positions of the interference fringes. If the aether were stationary relative to the sun, then the Earth's motion would produce a shift of about 0.04 fringes.
Michelson had made several measurements with an experimental device in 1881, in which he noticed that the expected shift of 0.04 was not seen, and a smaller shift of about 0.02 was. However his apparatus was a prototype, and had experimental errors far too large to say anything about the aether wind. For a measurement of the aether wind, a much more accurate and tightly controlled experiment would have to be carried out. The prototype was, however, successful in demonstrating that the basic method was feasible.
A Michelson interferometer
He then combined forces with Edward Morley and spent a considerable amount of time and money creating an improved version with more than enough accuracy to detect the drift. In their experiment the light was repeatedly reflected back and forth along the arms, increasing the path length to 11m. At this length the drift would be about .4 fringes. To make that easily detectable the apparatus was located in a closed room in the basement of a stone building, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a huge block of marble, which was then floated in a pool of mercury. They calculated that effects of about 1/100th of a fringe would be detectable.
The mercury pool allowed the device to be turned, so that it could be rotated through the entire range of possible angles to the "aether wind". Even over a short period of time some sort of effect would be noticed simply by rotating the device, such that one arm rotated into the direction of the wind and the other away. Over longer periods day/night cycles or yearly cycles would also be easily measurable.
During each full rotation of the device, each arm would be parallel to the wind twice (facing into and away from the wind) and perpendicular to the wind twice. This effect would show readings in a sine wave formation with two peaks and two troughs. Additionally if the wind was only from the earth's orbit around the sun, the wind would fully change directions east/west during a 12 hour period. In this ideal conceptualization, the sine wave of day/night readings would be in opposite phase.
Because it was assumed that the motion of the solar system would cause an additional component to the wind, the yearly cycles would be detectable as an alteration of the magnitude of the wind. An example of this effect is a helicopter flying forward. While on the ground, a helicopter's blades would be measured as travelling around at 50 km/h at the tips. However, if the helicopter is travelling forward at 50 km/h, there are points at which the tips of the blades are travelling 0 km/h and 100 km/h with respect to the air they are travelling through. This increases the magnitude of the lift on one side and decreases it on the other, just as it would increase and decrease the magnitude of an aether wind on a yearly basis.
The most famous failed experiment
Ironically, after all this thought and preparation, the experiment became what might be called the most famous failed experiment to date. Instead of providing insight into the properties of the aether, Michelson and Morley's 1887 article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was approximately one-sixth of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth". Although this small "velocity" was measured, it was considered far too small to be used as evidence of aether; it was later said to be within the range of an experimental error that would allow the speed to actually be zero.
Although Michelson and Morley went on to different experiments after their first publication in 1887, both remained active in the field. Other versions of the experiment were carried out with increasing sophistication. Kennedy and Illingworth both modified the mirrors to include a half-wave "step", eliminating the possibility of some sort of standing wave pattern within the apparatus. Illingworth could detect changes on the order of 1/300th of a fringe, Kennedy up to 1/1500th. Miller later built a non-magnetic device to eliminate magnetostriction, while Michelson built one of non-expanding invar to eliminate any remaining thermal effects. Others from around the world increased accuracy, eliminated possible side effects, or both. All of these with the exception of Dayton Miller also returned what is considered a null result.
Morley was not convinced of his own results, and went on to conduct additional experiments with Dayton Miller. Miller worked on increasingly large experiments, culminating in one with a 32m (effective) arm length at an installation at the Mount Wilson observatory. To avoid the possibility of the aether wind being blocked by solid walls, he used a special shed with thin walls, mainly of canvas. He consistently measured a small positive effect that varied, as expected, with each rotation of the device, the sidereal day and on a yearly basis. The low magnitude of the results he attributed to aether entrainment (see below). His measurements amounted to only ~10 km/s instead of the ~30 km/s expected from the earth's orbital motion alone. He remained convinced this was due to partial entrainment, though he did not attempt a detailed explanation.
Though Kennedy later also carried out an experiment at Mount Wilson, finding 1/10 the drift measured by Miller, and no seasonal effects, Miller's findings were considered important at the time, and were discussed by Michelson, Hendrik Lorentz and others at a meeting reported in 1928 (ref below). There was general agreement that more experimentation was needed to check Miller's results. Lorentz recognised that the results, whatever their cause, did not quite tally with either his or Einstein's versions of special relativity. Einstein was not present at the meeting and felt the results could be dismissed as experimental error (see Shankland ref below).
| Name | Year | Arm length (meters) | Fringe shift expected | Fringe shift measured | Experimental Resolution | Upper Limit on V_aether |
| --- | --- | --- | --- | --- | --- | --- |
| Michelson | 1881 | 1.2 | 0.04 | 0.02 | | |
| Michelson and Morley | 1887 | 11.0 | 0.4 | < 0.01 | | 8 km/s |
| Morley and Miller | 1902–1904 | 32.2 | 1.13 | 0.015 | | |
| Miller | 1921 | 32.0 | 1.12 | 0.08 | | |
| Miller | 1923–1924 | 32.0 | 1.12 | 0.03 | | |
| Miller (Sunlight) | 1924 | 32.0 | 1.12 | 0.014 | | |
| Tomascheck (Starlight) | 1924 | 8.6 | 0.3 | 0.02 | | |
| Miller | 1925–1926 | 32.0 | 1.12 | 0.088 | | |
| Kennedy (Mt Wilson) | 1926 | 2.0 | 0.07 | 0.002 | | |
| Illingworth | 1927 | 2.0 | 0.07 | 0.0002 | 0.0006 | 1 km/s |
| Piccard and Stahel (Rigi) | 1927 | 2.8 | 0.13 | 0.006 | | |
| Michelson et al. | 1929 | 25.9 | 0.9 | 0.01 | | |
| Joos | 1930 | 21.0 | 0.75 | 0.002 | | |
In recent times versions of the MM experiment have become commonplace. Lasers and masers amplify light by repeatedly bouncing it back and forth inside a carefully tuned cavity, thereby inducing high-energy atoms in the cavity to give off more light. The result is an effective path length of kilometers. Better yet, the light emitted in one cavity can be used to start the same cascade in another set at right angles, thereby creating an interferometer of extreme accuracy.
The first such experiment was led by Charles H. Townes, one of the co-creators of the first maser. Their 1958 experiment put an upper limit on drift, including any possible experimental errors, of only 30 m/s. In 1974 a repeat with accurate lasers in the triangular Trimmer experiment reduced this to 0.025 m/s, and included tests of entrainment by placing one leg in glass. In 1979 the Brillet-Hall experiment put an upper limit of 30 m/s for any one direction, but reduced this to only 0.000001 m/s for a two-direction case (ie, still or partially entrained aether). A year long repeat known as Hils and Hall, published in 1990, reduced this to $2\times10^{-13}$.
Fallout
This result was rather astounding and not explainable by the then-current theory of wave propagation in a static aether. Several explanations were attempted, among them, that the experiment had a hidden flaw (apparently Michelson's initial belief), or that the Earth's gravitational field somehow "dragged" the aether around with it in such a way as locally to eliminate its effect. Miller would have argued that, in most if not all experiments other than his own, there was little possibility of detecting an aether wind since it was almost completely blocked out by the laboratory walls or by the apparatus itself. Be this as it may, the idea of a simple aether, what became known as the First Postulate, had been dealt a serious blow.
A number of experiments were carried out to investigate the concept of aether dragging, or entrainment. The most convincing was carried out by Hamar, who placed one arm of the interferometer between two huge lead blocks. If aether were dragged by mass, the blocks would, it was theorised, have been enough to cause a visible effect. Once again, no effect was seen.
Walter Ritz's emission theory (or ballistic theory) was also consistent with the results of the experiment: it did not require an aether and was more intuitive and paradox-free. This became known as the Second Postulate. However it also led to several "obvious" optical effects that were not seen in astronomical photographs, notably in observations of binary stars in which the light from the two stars could be measured in an interferometer.
The Sagnac experiment placed the MM apparatus on a constantly rotating turntable. In doing so any ballistic theories such as Ritz's could be tested directly, as the light going one way around the device would have a different length to travel than light going the other way (the eyepiece and mirrors would be moving toward/away from the light). In Ritz's theory there would be no shift, because the net velocity between the light source and detector was zero (they were both mounted on the turntable). However in this case an effect was seen, thereby eliminating any simple ballistic theory. This fringe-shift effect is used today in laser gyroscopes.
Another possible solution was found in the Lorentz-FitzGerald contraction hypothesis. In this theory all objects physically contract along the line of motion relative to the aether, so while the light may indeed transit slower on that arm, it also ends up travelling a shorter distance that exactly cancels out the drift.
In 1932 the Kennedy-Thorndike experiment modified the Michelson-Morley experiment by making the path lengths of the split beam unequal, with one arm being very long. In this version the two ends of the experiment were at different velocities due to the rotation of the earth, so the contraction would not "work out" to exactly cancel the result. Once again, no effect was seen.
Ernst Mach was among the first physicists to suggest that the experiment actually amounted to a disproof of the aether theory. The development of what became Einstein's special theory of relativity had the Fitzgerald-Lorentz contraction derived from the invariance postulate, and was also consistent with the apparently null results of most experiments (though not, as was recognised at the 1928 meeting, with Miller's observed seasonal effects). Today relativity is generally considered the "solution" to the MM null result.
The Trouton-Noble experiment is regarded as the electrostatic equivalent of the Michelson-Morley optical experiment, though whether or not it can ever be done with the necessary sensitivity is debatable. On the other hand, the 1908 Trouton-Rankine experiment that spelled the end of the Lorentz-FitzGerald contraction hypothesis achieved an incredible sensitivity.
Mathematical analysis of the Michelson Morley Experiment
The Michelson interferometer splits light into rays that travel along two paths then recombines them. The recombined rays interfere with each other. If the path length changes in one of the arms the interference pattern will shift slightly, moving relative to the cross hairs in the telescope. The Michelson interferometer is arranged as an optical bench on a concrete block that floats on a large pool of mercury. This allows the whole apparatus to be rotated smoothly.
If the earth were moving through an aether at the same velocity as it orbits the sun (30 km/sec) then Michelson and Morley calculated that a rotation of the apparatus should cause a shift in the fringe pattern. The basis of this calculation is given below.
Consider the time taken $t_1$ for light to travel along Path 1 in the illustration:
$t_1 = \frac{L_f}{c-v} + \frac{L_f}{c+v}$
Rearranging terms:
$\frac{L_f}{c-v} + \frac{L_f}{c+v} = \frac{2L_fc}{c^2-v^2}$
further rearranging:
$\frac{2L_fc}{c^2-v^2} = \frac{2L_f}{c}\frac{1}{1-v^2/c^2}$
hence:
$t_1 = \frac{2L_f}{c}\frac{1}{1-v^2/c^2}$
Considering Path 2, the light traces out two right angled triangles so:
$ct_2 = 2 \sqrt{L_m^2 + (vt_2/2)^2}$
Rearranging:
$t_2 = \frac{2L_m}{\sqrt{c^2-v^2}}$
So:
$t_2 =\frac{2L_m}{c} \frac{1}{\sqrt{1-(v/c)^2}}$
It is now easy to calculate the difference ($\Delta t$) between the times spent by the light in Path 1 and Path 2:
$\Delta t = \frac{2}{c} \left(\frac{L_m}{\sqrt{1-v^2/c^2}}-\frac{L_f}{1-v^2/c^2}\right)$
If the apparatus is rotated by 90 degrees the new time difference is:
$\Delta t^' = \frac{2}{c} \left(\frac{L_m}{1-v^2/c^2}-\frac{L_f}{\sqrt{1-v^2/c^2}}\right)$
because $L_m$ and $L_f$ exchange roles.
The interference fringes due to the time difference between the paths will be different after rotation if $\Delta t$ and $\Delta t^'$ are different.
$\Delta t^' - \Delta t = \frac{2}{c} \left(\frac{L_m+L_f}{1-v^2/c^2}-\frac{L_f+L_m}{\sqrt{1-v^2/c^2}}\right)$
This difference between the two times can be calculated if the binomial expansions of $\frac{1}{1-v^2/c^2}$ and $\frac{1}{\sqrt{1-v^2/c^2}}$ are used:
$\frac{1}{1-v^2/c^2}= 1 + \frac{v^2}{c^2} + \left(\frac{v^2}{c^2}\right)^2 + ....$
$\frac{1}{\sqrt{1-v^2/c^2}}= 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\left(\frac{v^2}{c^2}\right)^2 + ....$
So:
$\Delta t^' - \Delta t \approx \frac{L_f + L_m}{c}\frac{v^2}{c^2}$
If the period of one vibration of the light is $T$ then the number of fringes ($n$), that will move past the cross hairs of the telescope when the apparatus is rotated will be:
$n = \frac{\Delta t^' - \Delta t}{T}$
Inserting the formula for $\Delta t^' - \Delta t$:
$n \approx \frac{L_f + L_m}{cT}\frac{v^2}{c^2}$
But $cT$ for a light wave is the wavelength of the light ie: $cT = \lambda$ so:
$n \approx \frac{L_f + L_m}{\lambda}\frac{v^2}{c^2}$
If the wavelength of the light is $5 \times 10^{-7}$ metres and the total path length is 20 metres then:
$n = \left(\frac{20}{5 \times 10^{-7}}\right)10^{-8}$
So the fringes will shift by 0.4 fringes (ie: 40%) when the apparatus is rotated.
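The same calculation can be reproduced in a few lines of Python, using the numbers quoted above:

```python
wavelength = 5e-7        # metres
total_path = 20.0        # L_f + L_m in metres
beta_squared = 1e-8      # (v/c)^2 for v = 30 km/s

n = (total_path / wavelength) * beta_squared
print(n)   # 0.4 fringes, as stated above
```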
However, no fringe shift is observed. The null result of the Michelson-Morley experiment is nowadays explained in terms of the constancy of the speed of light. The assumption that the light would have a velocity of $c-v$ and $c+v$ depending on the direction relative to the hypothetical "aether wind" is false; the light always travels at $c$ between two points in a vacuum and the speed of light is not affected by any "aether wind". This is because, in special relativity, the Lorentz transforms induce a length contraction. Redoing the above calculations we obtain:
$L_f=L_m{\sqrt{1-v^2/c^2}}$
(taking into consideration the length contraction)
It is now easy to recalculate the difference $\Delta t$ between the times spent by the light in Path 1 and Path 2:
$\Delta t = \frac{2}{c} \left(\frac{L_m}{\sqrt{1-v^2/c^2}}-\frac{L_f}{1-v^2/c^2}\right)=0$ because $L_f=L_m{\sqrt{1-v^2/c^2}}$
If the apparatus is rotated by 90 degrees the contraction now applies to the other arm, so the roles of the two lengths are exchanged and the new time difference is also zero:
$\Delta t^' = \frac{2}{c} \left(\frac{L_m}{1-v^2/c^2}-\frac{L_f}{\sqrt{1-v^2/c^2}}\right)=0$
The interference fringes due to the time difference between the paths will be different after rotation if $\Delta t$ and $\Delta t^'$ are different.
$\Delta t^' - \Delta t = \frac{2}{c} \left(\frac{L_m+L_f}{1-v^2/c^2}-\frac{L_f+L_m}{\sqrt{1-v^2/c^2}}\right)=0$
Wave propagation in moving medium
It has also been pointed out that the medium for the light in the Michelson-Morley experiment is the air, and that the velocity of this medium relative to the apparatus is zero (ie: $v=0$). Therefore,
$t_1 = \frac{L}{c-v} + \frac{L}{c+v}=\frac{2L}{c}$
$t_2=\frac{2L}{c}$
$\Delta t=0$
After the apparatus is rotated 90°, there is no movement of the interference fringes. [1]
Coherence length
The coherence length of light rays from a source that has wavelengths that differ by $\Delta \lambda$ is:
$x = \frac{\lambda^2}{2 \pi \Delta \lambda}$
If path lengths differ by more than this amount then interference fringes will not be observed. White light has a wide range of wavelengths and interferometers using white light must have paths that are equal to within a small fraction of a millimetre for interference to occur. This means that the ideal light source for a Michelson Interferometer should be monochromatic and the arms should be as near as possible equal in length.
The calculation of the coherence length is based on the fact that interference fringes become unclear when light rays are about 60 degrees (about 1 radian or one sixth of a wavelength ($\approx 1/2\pi$)) out of phase. This means that when two beams are:
$\frac{\lambda}{2 \pi}$
metres out of step they will no longer give a well defined interference pattern. Suppose a light beam contains two wavelengths of light, $\lambda$ and $\lambda + \Delta \lambda$, then in:
$\frac{\lambda}{2 \pi \Delta \lambda}$
cycles they will be $\frac{\lambda}{2 \pi}$ out of phase.
The distance required for the two different wavelengths of light to be this much out of phase is the coherence length. Coherence length = number of cycles x length of each cycle so:
coherence length = $\frac{\lambda^2}{2 \pi \Delta \lambda}$ .
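For example, for a source with a nominal wavelength of 500 nm and an example spectral spread of 1 nm (illustrative values only), the coherence length works out to a few hundredths of a millimetre:

```python
import math

wavelength = 500e-9        # metres (example value)
delta_wavelength = 1e-9    # metres (example spectral spread)

coherence_length = wavelength ** 2 / (2 * math.pi * delta_wavelength)
print(coherence_length)    # about 4e-5 metres, so the two arms must be equal to well under a millimetre
```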
Lorentz-Fitzgerald Contraction Hypothesis
After the first Michelson-Morley experiments in 1881 there were several attempts to explain the null result. The most obvious point of attack is to propose that the path that is parallel to the direction of motion is contracted by $\sqrt{1-v^2/c^2}$ in which case $\Delta t$ and $\Delta t^'$ would be identical and no fringe shift would occur. This possibility was proposed in 1892 by Fitzgerald. Lorentz produced an "electron theory of matter" that would account for such a contraction.
Students sometimes make the mistake of assuming that the Lorentz-Fitzgerald contraction is equivalent to the Lorentz transformations. However, in the absence of any treatment of the time dilation effect the Lorentz-Fitzgerald explanation would result in a fringe shift if the apparatus is moved between two different velocities. The rotation of the earth allows this effect to be tested as the earth orbits the sun. Kennedy and Thorndike (1932) performed the Michelson-Morley experiment with a highly sensitive apparatus that could detect any effect due to the rotation of the earth; they found no effect. They concluded that both time dilation and Lorentz-Fitzgerald Contraction take place, thus confirming relativity theory.
If only the Lorentz-Fitzgerald contraction applied then the fringe shifts due to changes in velocity would be: $n = (v_1^2 - v_2^2)/c^2 \times (L_f-L_m)/\lambda$. Notice how the sensitivity of the experiment is dependent on the difference in path length $L_f-L_m$ and hence a long coherence length is required.
Recent Michelson-Morley experiments
Optical tests of the isotropy of the speed of light have become commonplace. New technologies, including the use of lasers and masers, have significantly improved measurement precision.
| Author | Year | Description | Upper bounds |
| --- | --- | --- | --- |
| Essen[2] | 1955 | The frequency of a rotating microwave optical cavity resonator is compared with that of a quartz clock. | ~3 km/s |
| Jaseja et al.[3] | 1964 | The frequencies of two Helium–neon lasers, mounted on a rotating table, placed perpendicular to each other. | ~30 m/s |
| Shamir and Fox[4] | 1969 | Both arms of the interferometer were contained in a transparent solid (Poly(methyl methacrylate)). The light source was a Helium–neon laser. | ~7 km/s |
More recent experiments, using other techniques such as optical resonators (Eisele et al.[5]), have shown that the speed of light is constant to within $10^{-8}$ m/s.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1074, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468759298324585, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/157769-prove-ab-aa-b-ab.html
|
# Thread:
1. ## Prove a(AB)=(aA)B=A(aB) ..
Prove a(AB)=(aA)B=A(aB) for any scalar a. At first I started out like:
a(AB)=(aA)(aB) and quickly noticed that that was very wrong. And then I kept trying to play around with it in different ways and I seem to get tangled and I know this is such an easy concept.
2. I suppose $A$ and $B$ are matrices.
You need to apply the definition of matrix product. Note that two matrices are equal, precisely when each of the corresponding entries are equal. Let me change the name of the scalar to $c$.
Let $a_{ij}$ be the $(i,j)$-entry of $A$ and let $b_{ij}$ be the $(i,j)$-entry of $B$. Suppose also that $A$ is an $m \times n$ matrix, and that $B$ is an $n \times p$ matrix.
Then the $(i,j)$-entry of $AB$ is "the $i$'th row of $A$ times the $j$'th column of $B$". So this $(i,j)$-entry of $AB$ is:
$\sum_{k=1}^n a_{ik}b_{kj}$,
and so the $(i,j)$-entry of $c(AB)$ is
$c\sum_{k=1}^n a_{ik}b_{kj} = \sum_{k=1}^n ca_{ik}b_{kj},$
where the first expression is the entry that follows by applying the definition of the product directly, and where the equality follows from bringing the scalar $c$ into each term of the sum.
Consider then the product $(cA)B$. We need to check that the $(i,j)$-entry of this product is same expression as for $c(AB)$. But $cA$ is the matrix, where the $(i,j)$-entry is $ca_{ij}$. The product $(cA)B$ has as its $(i,j)$-entry the $i$'th row of this matrix multiplied by the $j$'th column of $B$, so
$\sum_{k=1}^n ca_{ik}b_{kj}.$
But this is exactly the same expression as above.
Similarly you may show that $A(cB)$ has the correct $(i,j)$-entries.
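(Not part of the proof, but if you want a quick numerical sanity check of the identity, you can try it on random matrices, for example with numpy:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # random 3x4 matrix
B = rng.standard_normal((4, 2))   # random 4x2 matrix
c = 2.5                           # an arbitrary scalar

print(np.allclose(c * (A @ B), (c * A) @ B))   # True
print(np.allclose(c * (A @ B), A @ (c * B)))   # True
```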
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 39, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510067105293274, "perplexity_flag": "head"}
|
http://mathhelpforum.com/algebra/150936-solving-problem-involving-nonlinear-functions-equations.html
|
# Thread:
1. ## Solving a problem involving nonlinear functions and equations
Hello, quick question. I'm pretty sure I know how to do this, but it's just slipping my mind right now. Could anyone tell me how to solve this problem?
If a≠0 and 5/x=5+a/x+a, what is the value of x?
2. I'm guessing you meant
$\dfrac{5}{x}=\dfrac{5+a}{x+a}.$
Is that correct? (You should be more careful with parentheses!)
If this is correct, then I don't think it's really nonlinear. What would you do as a first step?
3. Yup, that's what I meant. What I tried to do first was get a common denominator on both sides, so I multiplied x on the right side and x+a on the left side.
4. Originally Posted by Mariolee
Yup, that's what I meant. What I tried to do first was get a common denominator on both sides, so I multiplied x on the right side and x+a on the left side.
I think you have gone the right way! And then x = 5. So what is wrong here?
5. Dang, I did know how to do this. Thanks guys!
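(For anyone who wants to double-check the algebra symbolically, a quick sympy check confirms x = 5 whenever a ≠ 0:)

```python
import sympy as sp

x, a = sp.symbols('x a', nonzero=True)   # declaring the symbols nonzero; we only really need a != 0
solution = sp.solve(sp.Eq(5 / x, (5 + a) / (x + a)), x)
print(solution)   # [5]
```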
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9704124331474304, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/17830/how-should-i-proceed-in-proving-this-tautology/17890
|
# How should I proceed in proving this tautology?
I know that following is a tautology because I've checked its truth table. I am now attempting to prove that it is a tautology by using the rules of logic, which is more difficult. How should I proceed?
$(p\land(p\implies q))\implies q$
$(p\land(\lnot p \lor q))\implies q$
$(p\land \lnot p) \lor (p\land q) \implies q$ This step is where I'm getting stuck at. I know that $(p\land \lnot p)$ is false. So it seems to me that the truth value of everything to the left of the $\implies$ operator depends on the truth value of $(p\land q)$ So what I want to do is this:
FALSE $\lor (p\land q) \implies q$ which reduces to
$(p\land q) \implies q$
Is my thinking correct so far? If so, then I want to rewrite $(p\land q) \implies q$ as
$\lnot(p \land q) \lor (p \land q)$ by using the identity $p\implies q \equiv \lnot p \lor q$ Am I on the right track?
-
Everything looks good for your first question. I would think that $(p\land q)\implies q$ is more fundamental than the identity you used. – Jonas Meyer Jan 17 '11 at 6:52
You are correct! – lampShade Jan 17 '11 at 7:27
## 4 Answers
Use De Morgan's law on $\neg(p\land q)$ near the end. This should give you a disjunction which should easily be seen as tautological by the law of excluded middle. Also, remember that $(p\land q)\implies q\equiv \neg(p\land q)\lor q$, not $\neg(p\land q)\lor(p\land q)$ as you wrote.
-
This is the essential step. Thanks for all the help! I have it now. – lampShade Jan 17 '11 at 7:26
Your reasoning is correct so far. It seems to me that since $a\to b$ is equivalent to $\lnot a\lor b$, then $(p\land q)\to q$ is equivalent to $\lnot(p\land q)\lor q$, which is different from what you wrote in the last line. To continue, I guess you want to use that $\lnot(p\land q)$ is equivalent to $\lnot p\lor \lnot q$.
-
$$\begin{align} (p\land(p\rightarrow q))\rightarrow q &\Longleftrightarrow (p\land(\neg p\lor q))\rightarrow q\\ &\Longleftrightarrow ((p\land \neg p)\lor (p\land q))\rightarrow q\\ &\Longleftrightarrow (F\lor (p\land q))\rightarrow q & \text{Negation law}\\ &\Longleftrightarrow \neg(F\lor (p\land q))\lor q\\ &\Longleftrightarrow (T\land \neg(p\land q))\lor q \\ &\Longleftrightarrow (T\land(\neg p\lor \neg q))\lor q &\text{DeMorgan's law}\\ &\Longleftrightarrow (\neg p\lor \neg q)\lor q &\text{Domination law}\\ &\Longleftrightarrow \neg p\lor (\neg q\lor q) \\ &\Longleftrightarrow \neg p\lor T\\ &\Longleftrightarrow T\\ \end{align}$$
-
I edited your post to put it in the LaTeX form we use here. To see how I did it, click on the text right after "edited" above this comment. By the way, welcome to the site! – Rick Decker Sep 22 '12 at 19:32
Using natural deduction notation
Conditional proof (implication introduction): $$[A]$$ $$\vdots$$ $$B$$ $$\overline{\;A\to B\;}$$ If from the assumption $A$ one can deduce $B$, then $A\to B$ is deducible (and the assumption $A$ is discharged).
Law of Conjunction: $$A\wedge B$$ $$\overline{\qquad A \qquad}$$ and $$A\wedge B$$ $$\overline{\qquad B \qquad}$$ If $A\wedge B$ holds, then both $A$ and $B$ also hold.
To prove the statement using ND, start by breaking up the first implication - and you soon figure out what to do. (If there is any doubt just ask.)
-
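As a brute-force check, independent of the algebraic manipulations above, the truth table can also be enumerated programmatically:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is equivalent to (not a) or b
    return (not a) or b

# ((p and (p -> q)) -> q) should be True for every assignment of p and q
print(all(implies(p and implies(p, q), q) for p, q in product([True, False], repeat=2)))   # True
```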
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549977779388428, "perplexity_flag": "head"}
|
http://chorasimilarity.wordpress.com/2013/02/14/towards-qubits-graphic-lambda-calculus-over-conical-groups-and-the-barycentric-move/
|
# chorasimilarity
computing with space
## Towards qubits: graphic lambda calculus over conical groups and the barycentric move
February 14, 2013
In this post I want to pave the way to the application of graphic lambda calculus to the realm of quantum computation. It is not a short, nor too lengthy way, which will be explained in several posts. Also, some experimentation is to be expected.
Disclaimer: For the moment it is not very clear to me which are the exact relations between the approach I am going to explain and linear lambda calculus or the lambda calculus for quantum computation. I expect a certain overlap, but maybe not as much as expected (by the specialist in the field). The reason is that my instruments and goals come from fields apparently far away from quantum computation, such as sub-riemannian geometry, which is my main field of interest (however, for an interaction between sub-riemannian geometry and computation see L_p metrics on the Heisenberg group and the Goemans-Linial conjecture, by James R. Lee and Asaf Naor). Therefore, I feel the need to issue such a disclaimer for the narrow specialist.
Background for this post:
• The page Graphic lambda calculus
• [1] Infinitesimal affine geometry of metric spaces endowed with a dilatation structure, Houston J. Math., 36, 1 (2010), 91-136, arXiv:0804.0135.
• [2] On graphic lambda calculus and the dual of the graphic beta move, arXiv:1302.0778.
Affine conical spaces. In the article [1] they appear under the name “normed affine group spaces”, definition 3. We may use the same type of arguments as the ones from emergent algebras in order to get rid of the need to have a norm on such spaces. Instead of a norm we shall put a uniformity on such a space, such that the topology associated to the uniformity makes the space locally compact.
Theorem 2.2 [1] characterizes affine conical spaces as self-distributive emergent algebras. The relations satisfied by self-distributive emergent algebras, if graphically represented by gates in graphic lambda calculus, are the following:
• fan-out moves, pruning moves,
• emergent algebra moves
• The Reidemeister move R3a (which represents self-distributivity), described in this post.
Notice that I don’t want to use the dual of the graphic beta move ([2], section 8), which is simply too powerful in this context (see [2] section 10). That is why I use instead the move R3a (which is a composite of dual beta moves). Another instance of this choice will be explained in a future post, having to do with the distributivity of the emergent algebra operations with respect to the application and lambda gates.
The barycentric move. In order to obtain usual affine spaces instead of their more general, noncommutative versions (i.e. affine conical spaces), we have to add the barycentric condition. This condition appears as (Af3) in Theorem 2.2 [1]. I shall transform this condition into a move in graphic lambda calculus.
The barycentric move BAR is described by the following figure and explanation. We take the commutative group $\Gamma$, which is used to label the emergent algebra gates, as $\Gamma = K^{*}$, where $K$ is a field. (Therefore $K = \Gamma \cup \left\{ 0 \right\}$.) We have then two operations on the field $K$: multiplication $(\varepsilon, \mu) \mapsto \varepsilon \mu$ and addition $(\varepsilon, \mu) \mapsto \varepsilon + \mu$. Because $K$ contains also the element $0$, the neutral element for addition, we add a new gate $\bar{0}$. With these preparations, the BAR move is the following:
Notice that when $\varepsilon = 1$, the gate $\bar{0}$ appears at the left hand side of the figure. This gate corresponds, in the particular case of a vector space, to the usual dilation of coefficient $0$. We don’t need to put this in as a sort of axiom, because we can obtain it as a combination of the BAR move and ext2 moves. Indeed:
Knowing this, we can extend the emergent algebra moves R1a, R1b and R2 to the case $\varepsilon = 0$. Here is the proof. For R1a we do this:
The move R1b, for the degenerate case $\varepsilon = 0$, is this:
Finally, for the move R2 we have two cases, corresponding to $0 \, \varepsilon = 0$ and $\varepsilon \, 0 = 0$. The first case is this:
The second case is this:
Final remark: The move BAR can be seen as analogous of an infinite sequence of moves R3 (but there is no rigorous sense for this in graphic lambda calculus). Indeed, this is related to the fact that $\frac{1}{1-\varepsilon} = \sum^{\infty}_{0} \varepsilon^{k}$. See [1] section 8 “Noncommutative affine geometry” for the dilation structures correspondent of this equality and also see the post Menelaus theorem by way of Reidemeister move 3.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8922616839408875, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/75009/local-finiteness-and-coarse-bounded-geometry/75025
|
## Local finiteness and coarse bounded geometry
I've just started learning these things and so probably my questions will be very easy. Please forgive me.
A metric space $(X,d)$ is called locally finite if every bounded set is finite. A metric space is said to have coarse bounded geometry if there is $\Gamma\subseteq X$ such that
1) there exists $c>0$ such that the set of points $x\in X$ such that $d(x,\Gamma)\leq c$ is dense in $X$.
2) For all $r>0$, there exists $K_r$ such that, for all $x\in X$, $|\Gamma\cap B_r(x)|\leq K_r$, where $B_r(x)$ stands for the ball of radius $r$ about $x$.
Question 1: what is an example of metric space without coarse bounded geometry?
Well, infinite dimensional Banach spaces. But I would like something more concrete and easier to handle.
Question 2: Is it true that local finiteness implies coarse bounded geometry?
Maybe I have misunderstood, but in a published paper I have found a sentence that seems to implicitly assume that the answer is positive. It might be trivial, but I am not quite convinced.
Thanks in advance,
Valerio
-
## 3 Answers
Q2--no. Let $A_n$ have cardinality $n+1$ for $n=0,1,...$. Specify all distances between distinct points in the same $A_n$ to be one, and the distance between a point in $A_n$ and a point in $A_m$ to be $n+m$ when $n\not= m$.
This gives a simple example for Q1 as well.
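Just to make the example concrete, here is a small computational sketch (my own, not part of the answer). It builds the first few $A_n$ with the metric described and prints $|B_2(x)|$ for a point of each $A_n$; with the simplest choice $\Gamma = X$, these cardinalities grow like $n+1$, so no uniform bound $K_2$ exists.

```python
# Points are labelled (n, i) with i indexing A_n; |A_n| = n + 1.
# Distances: 1 within the same A_n, n + m across A_n and A_m (n != m).
def dist(p, q):
    (n, _), (m, _) = p, q
    if p == q:
        return 0
    return 1 if n == m else n + m

N = 8
X = [(n, i) for n in range(N) for i in range(n + 1)]

for n in range(N):
    x = (n, 0)
    ball = [p for p in X if dist(x, p) <= 2]
    print(n, len(ball))   # eventually n + 1, so |B_2(x)| is unbounded when Gamma = X
```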
-
Thank you! I was just thinking to an example of that sort, while getting back home! – Valerio Capraro Sep 9 2011 at 16:12
The answer to question 2 is negative, but if you require quasi-homogeneity (i.e. you have a group of isometries with a $c$-dense orbit for some $c$) then it becomes affirmative. You typically have this.
Also, to construct examples as in question 1 you can consider non-quasi-homogeneous spaces. Hope this helps, I can be more explicit on this point if you need clarifications.
-
thank you - also for the specification about the density. – Valerio Capraro Sep 9 2011 at 16:14
@Alessandro re ps: What the OP wrote is equivalent even if your wording is the usual way of writing the condition. – Bill Johnson Sep 9 2011 at 20:45
@Bill: I think it has been edited. In any case, I removed the "ps" part, thanks for pointing this out. – Alessandro Sisto Sep 10 2011 at 0:00
I am sorry, what is quasi-homogeneity? – Valerio Capraro Sep 10 2011 at 2:19
As I mentioned, it means that there are many isometries, meaning that there exists some $c$ such that for each $x,y$ there exists an isometry mapping $y$ to a point at distance at most $c$ from $x$. Notice that being homogeneous is the same thing as being quasi-homogeneous with constant $c=0$. – Alessandro Sisto Sep 10 2011 at 14:04
One more comment (which also implies answers to your questions): bounded geometry implies that the space has a finite exponential growth rate (defined, say, with respect to covers by balls of a fixed radius).
-
I am sorry, what do you mean by "finite exponential growth"? – Valerio Capraro Sep 9 2011 at 18:50
In the discrete case the exponential growth rate is defined as $\limsup \frac1n \log|B_n(x)|$, where $B_n(x)$ is the ball of radius $n$ centered at a point $x$. In the continuous case instead of cardinality one takes the minimal number of balls of a fixed radius necessary to cover $B_n$. – R W Sep 9 2011 at 19:03
OK, at least in the discrete case it's the classical notion. I got a bit scared because of the "balls of fixed radius". – Valerio Capraro Sep 9 2011 at 19:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315597414970398, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/130014/finding-an-equivalent-of-u-n-u-infty-where-u-n-sum-k-1n-frac
|
# Finding an equivalent of $u_{n}-u_{\infty}$ where $u_{n}= \sum_{k=1}^{n} \frac{n}{n^2+k^2}$
I would like to find an equivalent of
$$u_{n}-u_{\infty}=\sum_{k=1}^{n} \frac{n}{n^2+k^2}-u_{\infty}$$
Using Riemann sums, it is easy to show that:
$$u_{n} \sim \frac{\pi}{4}=u_{\infty}$$
Using integrals, we have:
$$\int_{1}^{n+1} \frac{n}{n^2+x^2} \mathrm dx \leq u_{n} \leq \int_{0}^{n} \frac{n}{n^2+x^2} \mathrm dx$$
$$\arctan(1+1/n)-\arctan(1/n) \leq u_{n} \leq \frac{\pi}{4}$$
$$\arctan(1+1/n)-\arctan(1/n)-\frac{\pi}{4} \leq u_{n} -\frac{\pi}{4}\leq 0$$
$$\arctan(1+1/n)-\arctan(1/n)= \frac{\pi}{4}+\frac{1}{2n}-\frac{1}{n}+o(1/n)=\frac{\pi}{4}-\frac{1}{2n}+o(1/n)$$
So:
$$-\frac{1}{2n}+o(1/n) \leq u_{n}-\frac{\pi}{4} \leq 0$$
However, the inequality prevents us from writing $$u_{n}-\frac{\pi}{4} \sim -\frac{1}{2n}$$ and numerical values seem to show that:
$$u_{n}-\frac{\pi}{4} \sim -\frac{1}{4n}$$
Where did I go wrong?
-
## 1 Answer
Your numerical work indeed leads to the right conjecture $u_{n}-\frac{\pi}{4} \sim -\frac{1}{4n}$. I am feeling lazy, so to prove the result I will appeal to a standard result about $\text{TRAP}(n)$, the Trapezoidal Rule with division into $n$ equal parts. It is known that under suitable differentiability assumptions, which are amply met here, the error in $\text{TRAP}(n)$ is $O(1/n^2)$. Note that $$\text{TRAP}(n)=\sum_{k=1}^{n-1}\frac{n}{n^2+k^2} +\frac{1}{2}\left(\frac{n}{n^2}+\frac{n}{2n^2}\right).$$ Thus $$\text{TRAP}(n)=\sum_{k=1}^{n}\frac{n}{n^2+k^2} +\frac{1}{2}\left(\frac{n}{n^2}+\frac{n}{2n^2}\right)-\frac{n}{2n^2}=\sum_{k=1}^{n}\frac{n}{n^2+k^2}+\frac{1}{4n}.$$ It follows that $$\sum_{k=1}^{n}\frac{n}{n^2+k^2}=\frac{\pi}{4}-\frac{1}{4n}+O(1/n^2).$$
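As a quick numerical sanity check of this asymptotic (my own sketch, not part of the answer), one can compute $n\,(u_n-\pi/4)$ for increasing $n$ and watch it approach $-1/4$:

```python
# Check numerically that n*(u_n - pi/4) -> -1/4.
from math import pi

def u(n):
    return sum(n / (n * n + k * k) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, n * (u(n) - pi / 4))   # tends to -0.25, consistent with -1/(4n) + O(1/n^2)
```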
-
Added some WP links. Please feel free to erase them. – Did Apr 10 '12 at 15:22
@Didier: Many thanks, I felt guilty about not doing the details of the local error estimate. – André Nicolas Apr 10 '12 at 15:34
Thank you for your answer, but is there a more analytical method which does not "trivialize" the exercise? – Chon Apr 10 '12 at 16:02
@Chon: You can calculate by approximately how much the term $\frac{n}{n^2+k^2}$ underestimates the area under $y=\frac{1}{1+x^2}$ from $\frac{k-1}{n}$ to $\frac{k}{n}$. If you want to do the details, it is very much like the proof of the estimate of the error in TRAP. That estimate first approximates the "local" error, that is, the difference between the area of the little trapezoid and the area under the curve from $\frac{k-1}{n}$ to $\frac{k}{n}$. The point is that $\frac{1}{1+x^2}$ is almost linear in the small interval. – André Nicolas Apr 10 '12 at 16:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331425428390503, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/246794/how-to-prove-that-mathrmgl-2-bbb-z-2-has-only-six-subgroups
|
How to prove that $\mathrm{GL}_2(\Bbb Z_2)$ has only six subgroups
I have found the six subgroups that I know that $\mathrm{GL}_2(\Bbb Z_2)$ has, but now I want to prove that this is all. How can I do this? I am currently thinking that I should argue using the fact that if $H \le G$ is a subgroup, then $|H|$ must divide $|G|$ which would imply that I can have no subgroup of order $4$ or $5$. However I don't think my reasoning is going down a conclusive route. Any help appreciated, thanks!
-
Josh I and @joriki: Sorry for the confusion! – amWhy Nov 28 '12 at 21:02
1 Answer
As you wrote in a comment, you're aware that $\mathrm{GL}_2(\Bbb Z_2) \cong S_3$. Subgroups of order $2$ or $3$ must be cyclic and thus generated by one non-identity element; there are five non-identity elements, two of which generate the same subgroup, so that makes $4$ subgroups. The only other possibilities are order $1$ for the trivial subgroup and order $6$ for the group itself, for a total of $6$ subgroups.
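If you want to double-check the count by brute force, here is a short Python sketch (not part of the answer) that enumerates the invertible $2\times 2$ matrices over $\Bbb Z_2$ and tests every subset for closure:

```python
# Enumerate GL_2(Z_2) and count its subgroups by brute force.
from itertools import product, combinations

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 2

matrices = [(r1, r2) for r1 in product((0, 1), repeat=2) for r2 in product((0, 1), repeat=2)]
elements = [M for M in matrices if det(M) == 1]
I = ((1, 0), (0, 1))

def is_subgroup(S):
    return I in S and all(mul(a, b) in S for a in S for b in S)

subgroups = [S for r in range(1, len(elements) + 1)
             for S in map(frozenset, combinations(elements, r)) if is_subgroup(S)]
print(len(elements), len(subgroups))   # 6 elements, 6 subgroups
```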
-
+1 I didn't intend to delete your comment, only my (misguided) answer! – amWhy Nov 28 '12 at 21:05
Aha! I knew I had forgotten that we knew that subgroups of order 2 or 3 must be cyclic. – JJR Nov 28 '12 at 21:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9685144424438477, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/9026/how-do-you-handle-the-floor-and-ceiling-function-in-an-equation?answertab=active
|
# How do you handle the floor and ceiling function in an equation?
I tried to do some math in a blog post of mine and came to an equation with a floor function. I wasn't sure how to deal with it, so I just ignored it and then added the ceiling function in my final equation, as that seemed to give me the result I wanted. I'm wondering what the correct way of handling these functions in equations is?
$$\begin{align} G(n) &= \left\lfloor n\log{\varphi}-\dfrac{\log{5}}{2}\right\rfloor+1 \\\\ n\log{\varphi} &= G(n)+\dfrac{\log{5}}{2}-1 \\\\ n &= \left\lceil\dfrac{G(n)+\dfrac{\log{5}}{2}-1}{\log\varphi}\right\rceil \end{align}$$
How should I have done this in a correct way? How do I work with the ceiling and floor functions when I shuffle around with equations?
-
3
The floor and ceiling functions are very very deep and have very interesting connections to analytic number theory and modular forms. You might be surprised to hear this, but they are not so easy to manipulate algebraically. – user126 Nov 5 '10 at 5:11
## 3 Answers
Your final expression gives you the number you want.
According to your blog post, you're looking for the smallest integer $n$ (i.e., the "first Fibonacci number with 1000 digits") that satisfies $$G(n) = \left\lfloor n \log \varphi - \frac{\log 5}{2} \right\rfloor + 1.$$ There may, of course, be more than one integer $n$ for which this is true.
By definition of the floor function, the values of $n$ that satisfy this are the values that satisfy $$G(n) - 1 \leq n \log \varphi - \frac{\log 5}{2} < G(n),$$ which, since $\log \phi > 0$, are the values that satisfy $$\frac{G(n) + \frac{\log 5}{2}}{\log \varphi} - \frac{1}{\log \varphi} \leq n < \frac{G(n) + \frac{\log 5}{2}}{\log \varphi}.$$
Since $\frac{1}{\log \varphi} \approx 4.78$, there are either four or five integers in this interval. But the smallest one is obtained by taking the ceiling of the lower endpoint of the interval; i.e., $$\left\lceil\frac{G(n) + \frac{\log 5}{2} - 1}{\log \varphi}\right\rceil.$$
Incidentally, this argument also apparently shows that there are either four or five Fibonacci numbers that have a given number of digits. (Except in the single-digit case, where there are six (not counting 0). But your formula for $G(n)$ doesn't hold when $n=1$, so we shouldn't expect this calculation to be true in the single-digit case anyway.)
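A quick numerical check of the final formula (my own sketch, not part of the answer), using the convention $F_1=F_2=1$ and excluding the degenerate single-digit case:

```python
# Verify that ceil((G + log10(5)/2 - 1)/log10(phi)) is the index of the first
# Fibonacci number with G digits, for G = 2..25.
from math import ceil, log10, sqrt

phi = (1 + sqrt(5)) / 2

def first_index(G):
    return ceil((G + log10(5) / 2 - 1) / log10(phi))

a, b, n, first = 1, 1, 2, {1: 1}          # F_1 = F_2 = 1
while len(str(b)) <= 25:
    a, b, n = b, a + b, n + 1
    first.setdefault(len(str(b)), n)      # record when a digit count first appears

assert all(first[G] == first_index(G) for G in range(2, 26))
print("formula matches brute force for 2 to 25 digits")
```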
-
Observe that \begin{eqnarray} G(n) = \left \lfloor n \log \varphi - \log \sqrt{5} \right \rfloor + 1 = \left \lceil n \log \varphi - \log \sqrt{5} \right \rceil \end{eqnarray} and write \begin{eqnarray} \left \lceil \frac{G(n)}{\log \varphi} + \log_{\varphi} \sqrt{5} \right \rceil & = & \left \lceil \tfrac{1}{\log \varphi} \left \lceil n \log \varphi - \log \sqrt{5} \right \rceil + \log_{\varphi} \sqrt{5} \right \rceil = n. \end{eqnarray}
-
You can replace $\lfloor x \rfloor$ with $x - \theta$, where $\theta \in [0,1)$ is some unknown quantity. Similarly, $\lceil x \rceil = x + \theta$ (a different $\theta$ within the same range).
Another helpful identity is $\lfloor x \rfloor + n = \lfloor x + n \rfloor$ for any integer $n$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8838496804237366, "perplexity_flag": "head"}
|
http://mathhelpforum.com/number-theory/88269-congruence.html
|
# Thread:
1. ## Congruence
Can somebody help with this question please?
Prove that
$$e^{x}\, e^{x^2/2}\, e^{x^3/3}\cdots = 1+x+x^2+\cdots \quad \text{when } |x|<1.$$
Show that the coefficient of $x^{19}$ in the power series expansion of the LHS has the form
$$\frac{1}{19!} + \frac{1}{19} + \frac{r}{s},$$
where 19 does not divide $s$.
Deduce that $18! \equiv -1 \pmod{19}$.
2. Hi
$\prod_{k=1}^{+\infty}e^{\frac{x^k}{k}} = e^{\sum_{k=1}^{+\infty}\frac{x^k}{k}}$
To find $\sum_{k=1}^{+\infty}\frac{x^k}{k}$ let's take the derivative
$\sum_{k=0}^{+\infty}x^k = \frac{1}{1-x}$
Therefore $\sum_{k=1}^{+\infty}\frac{x^k}{k}$ is the antiderivative of $\frac{1}{1-x}$ that is 0 for x=0
$\sum_{k=1}^{+\infty}\frac{x^k}{k} = -\ln|1-x|$
Therefore $\prod_{k=1}^{+\infty}e^{\frac{x^k}{k}} = e^{\sum_{k=1}^{+\infty}\frac{x^k}{k}} = e^{-\ln|1-x|} = \frac{1}{|1-x|}$
3. Thanks.
Is anybody able to help with the second part of the question please?
4. $e^{x} = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$
$e^{\frac{x^2}{2}} = 1 + \frac{x^2}{2} + \frac{x^4}{8} + \frac{x^6}{8\cdot 3!} + \cdots + \frac{x^{2n}}{2^n\cdot n!} + \cdots$
$e^{\frac{x^3}{3}} = 1 + \frac{x^3}{3} + \frac{x^6}{18} + \frac{x^9}{27\cdot 3!} + \cdots + \frac{x^{3n}}{3^n\cdot n!} + \cdots$
etc
To find the coefficient of x^19 in the power series expansion you need to multiply all these identities and find the coefficient of x^19
5. surely there must be an easier way to do this?
6. I hope so
If you consider each term of the expansion of e^x from x^19 down to 1
- x^19/19! must be multiplied by a constant which is 1 therefore 1/19!
- x^18/18! must be multiplied by something times x, but this is not possible since the other series only involve terms starting from x², therefore 0
- x^17/17! must be multiplied by something times x², which is x²/2 coming from the expansion of e^(x²/2), therefore 1/(2·17!)
- x^16/16! must be multiplied by something times x^3, which is x^3/3 coming from the expansion of e^(x^3/3), therefore 1/(3·16!)
- x^15/15! must be multiplied by something times x^4, which is x^4/4 coming from the expansion of e^(x^4/4) but also x^4/8 coming from the expansion of e^(x^2/2), therefore (1/4+1/8)/15! = 3/(8·15!)
- x^14/14! must be multiplied by something times x^5, which is x^5/5 coming from the expansion of e^(x^5/5) but also x^5/6 coming from the product of the expansions of e^(x^2/2) and e^(x^3/3), therefore (1/5+1/6)/14! = 11/(30·14!)
and so on ... but it becomes more and more difficult
7. ## You guys are so close...
Yes, this is the ticket. Running-gag has the winning strategy, but don't think for a minute you have to find the actual coefficient. All the problem calls for is showing that it is of the form $\frac{1}{19!}+\frac{1}{19}+\frac{r}{s}$ , for 19 not dividing s.
For reference, call the series expansion $e^{\frac{x^k}{k}}$ by $a_k$, and the coefficient preceding $x^{kn}$ by $a_k(n)$ . For example, $a_1(3)=\frac{1}{3!}$ and $a_3(2)=\frac{1}{18}$ .
Now the coefficient on $x^{19}$ is the sum of all possible products of terms whose exponents sum to $19$, for example, one of which is $a_1(4)*a_2(3)*a_3(3) = \frac{1}{4!}*\frac{1}{2^3 3!}*\frac{1}{3^3 3!}$
(i) $a_1(19)=\frac{1}{19!}$
(ii) $a_{19}(1)=\frac{1}{19}$
(iii) Since $19$ is prime, nowhere else in any coefficient $a_k(n)$ does $19$ appear, for $k<19$ and $n<19$.
Therefore, the sum of all such terms must be of the form $\frac{1}{19!}+\frac{1}{19}+\frac{r}{s}$ , for 19 not dividing s.
QED
EDIT: The coefficient is actually $\frac{1}{19!}+\frac{1}{19}+\frac{3154653421345768352}{15687942664565435561256}$
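A sanity check with exact rational arithmetic (my own sketch, not from the thread) confirms the structure of the argument: the $x^{19}$ coefficient of the truncated product is exactly 1, what is left after subtracting $\frac{1}{19!}+\frac{1}{19}$ has a reduced denominator not divisible by 19, and that fact is equivalent to $18!\equiv -1 \pmod{19}$:

```python
# Exact computation of the x^19 coefficient of prod_{k=1}^{19} e^(x^k/k).
from fractions import Fraction
from math import factorial

N = 19

def mul(a, b):                       # multiply two power series truncated at degree N
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j > N:
                break
            c[i + j] += ai * bj
    return c

def exp_series(k):                   # series of e^(x^k / k) truncated at degree N
    c = [Fraction(0)] * (N + 1)
    j = 0
    while k * j <= N:
        c[k * j] = Fraction(1, k**j * factorial(j))
        j += 1
    return c

series = [Fraction(1)] + [Fraction(0)] * N
for k in range(1, N + 1):
    series = mul(series, exp_series(k))

coeff = series[N]
rest = coeff - Fraction(1, factorial(19)) - Fraction(1, 19)
print(coeff == 1)                     # True: matches the x^19 coefficient of 1/(1-x)
print(rest.denominator % 19 != 0)     # True: 19 does not divide s
print(factorial(18) % 19 == 18)       # True: 18! = -1 (mod 19)
```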
8. ## Thanks
How do we deduce that 18! = -1 (mod 19) from all this?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295609593391418, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/spacetime+string-theory
|
# Tagged Questions
1answer
69 views
### Can the fuzzball conjecture be applied to microscopically explain the entropy of a region beyond the gravitational observer horizon?
In this article discussing this and related papers, it is explained among other things, how the neighborhood of an observer's worldline can be approximated by a region of Minkowsky spacetime. If I ...
1answer
205 views
### What happens to string theory if spacetime is doomed?
What is expected to happen with string theory, if physics is reformulated according the lines hinted at by the twistor-uprising business discussed in this question and its answers for example and ...
1answer
158 views
### Why does the universe exhibit three large-scale spatial dimensions? [duplicate]
Possible Duplicate: Is 3+1 spacetime as privileged as is claimed? Regardless of your favorite theory of how many dimensions the universe has in total, the universe seems to have a deep ...
1answer
494 views
### Is String Theory formulated in flat or curved spacetime?
String Theory is formulated in 10 or 11 (or 26?) dimensions where it is assumed that all of the space dimensions except for 3 (large) space dimensions and 1 time dimension are a compact manifold with ...
2answers
296 views
### Does String theory say that spacetime is not fundamental but should be considered an emergent phenomenon?
Does String theory say that spacetime is not fundamental but should be considered an emergent phenomenon? If so, can quantum mechanics describe the universe at high energies where there is no ...
4answers
833 views
### Is spacetime discrete or continuous?
Is the spacetime continuous or discrete? Or better, is the 4-dimensional spacetime of general-relativity discrete or continuous? What if we consider additional dimensions like string theory ...
2answers
587 views
### Why does string theory require 9 dimensions of space and one dimension of time?
String theorists say that there are many more dimensions out there, but they are too small to be detected. However, I do not understand why there are ten dimensions and not just any other number? ...
1answer
146 views
### If a fundamental theory exibits e.g. a mirror symmetry, in what sense it the underlying geometry real?
Are the more recently discovered symmetries in string theory such that the theories based on mirroring geometries are absolutely the same from an observable point of view? I have mirror symmetry ...
1answer
205 views
### What do scientists believe about existence in dimensions? [closed]
I couldn't really think of a suitable question title, I'm not sure if it's completely related or not. But this is as far as I know (well, I thought it all up last night and it seemed extremely ...
3answers
437 views
### Space-time in String Theory
I would like to understand how Physicists think of space-time in the context of String Theory. I understand that there are $3$ large space dimensions, a time dimension, and $6$ or $7$ (or $22$) extra ...
2answers
245 views
### How is the complexification of spacetime justified?
As always the caveat is that I am a mathematician with very little knowledge of physics. I've started my quest for knowledge in this field, but am very very far from having a good grasp. General ...
7answers
1k views
### Why are extra dimensions necessary?
Some theories have more than 4 dimensions of spacetime. But we only observe 4 spacetime dimensions in the real world, cf. e.g. this Phys.SE post. Why are the theories (e.g. string theory) that ...
3answers
518 views
### How could spacetime become discretised at the Planck scale?
I didn't have much luck getting a response to this question before so I have tried to reword and expand it a little: In early 2010 I attended this inaugural lecture by string theorist- Prof. ...
1answer
236 views
### What are the current (popular(ish)) approaches to modelling the quantum nature of spacetime at the Planck scale?
My guess at a list of them would be: spin foams, casual sets, non-commutative geometry, Machian theories, twistor theory or strings and membranes existing in some higher-dimensional geometry... ...
1answer
635 views
### How does classical GR concept of space-time emerge from string theory?
First, I'll state some background that lead me to the question. I was thinking about quantization of space-time on and off for a long time but I never really looked into it any deeper (mainly because ...
3answers
624 views
### What are some approaches to discrete space-time used in modern physics?
This thought gave rise to some new questions in my mind. What are the consequences for: How would it affect duality i.e. particle, wave property of photons? How does this statement affect the ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359846711158752, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/121543-question-semi-elliptical-arch-bridge.html
|
# Thread:
1. ## question on semi elliptical arch bridge
the arch bridge takes the shape of a circular arc. the portion of the arch bridge above the water has a span of 30m and a rise of 10m. a boat passes through the arch bridge. if the cross section of the boat is modeled by a rectangle with vertical sides, find correct to 2 decimal places the width of the boat which has the largest cross sectional area above water level. justify your answer.
2. You state semi-elliptical in the heading, but circular in the problem. It is
an ellipse with semi-major axis length 15 and semi-minor axis length 10.
The equation of said ellipse is $\frac{x^{2}}{15^{2}}+\frac{y^{2}}{10^{2}}=1$
The area of the rectangle, designating the boat, has area A=2xy
Solve the ellipse equation for y and sub into the area formula.
Differentiate w.r.t. x, set to 0 and solve for x. y will follow.
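For reference, the suggested steps can be carried out symbolically; this is a sketch (my own, not part of the thread, assuming sympy is available), and it gives a width of about 21.21 m with a maximal cross-sectional area of 150 m²:

```python
# Maximize A = 2xy subject to x^2/15^2 + y^2/10^2 = 1 (upper half of the ellipse).
import sympy as sp

x = sp.symbols('x', positive=True)
y = 10 * sp.sqrt(1 - x**2 / 15**2)
A = 2 * x * y

x_star = sp.solve(sp.Eq(sp.diff(A, x), 0), x)[0]
print(x_star)                          # 15*sqrt(2)/2
print(round(float(2 * x_star), 2))     # width of the boat: 21.21 m
print(sp.simplify(A.subs(x, x_star)))  # maximal area: 150
```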
3. thanks (:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8902360796928406, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/38123/what-do-you-feel-when-crossing-the-event-horizon
|
# What do you feel when crossing the event horizon?
I have heard the claim over and over that you won't feel anything when crossing the event horizon as the curvature is not very large. But the fundamental fact remains that information cannot pass through the event horizon so you cannot feel your feet when they have passed it.
Is there a way to cross the event horizon at a reasonable speed in the radial direction (below, say, 0.01c)? So what would it really be like?
-
1
If you could include a reference to the recent "firewall" paper, it would make the question more topical. – Ron Maimon Sep 23 '12 at 20:45
– Qmechanic♦ Sep 23 '12 at 20:52
– Luboš Motl Sep 24 '12 at 5:58
## 2 Answers
It depends on the size of the black hole. With small ones (a few solar masses) the tidal forces are strong enough to "spaghettify" your body as you approach the event horizon.
With supermassive (a million solar masses) black holes the gravity gradient is very small and the tidal forces are so low that you won't feel anything until well after you have crossed the horizon. According to this site, the tidal force on your body at the horizon is $10^6$g for a 30 solar mass hole, but only 1g for a 30,000 solar mass hole.
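For a rough sense of those numbers, here is a simple Newtonian order-of-magnitude estimate (my own sketch, not from the answer, ignoring relativistic corrections) of the head-to-feet tidal acceleration at the Schwarzschild radius, assuming a body about 2 m tall:

```python
# Tidal acceleration ~ 2GM*h/r^3 evaluated at r_s = 2GM/c^2, expressed in units of g.
G, c, g, M_sun = 6.674e-11, 2.998e8, 9.81, 1.989e30
h = 2.0   # assumed height of the infalling body, in metres

for solar_masses in (30, 3e4, 1e6):
    M = solar_masses * M_sun
    r_s = 2 * G * M / c**2
    tidal = 2 * G * M * h / r_s**3
    print(f"{solar_masses:g} solar masses: r_s = {r_s:.2e} m, tidal ~ {tidal / g:.1e} g")
```

The output reproduces the quoted ballpark: around $10^6$ g for a 30 solar mass hole and of order 1 g for a 30,000 solar mass hole.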
-
You always cross the event horizon at the speed of light. There is no way to cross it at a lower speed.
But ...
Your frame is locally just Minkowski space, and nerve impulses from your feet reach your brain just as they always have done. Other observers might argue that it's actually your brain moving towards the nerve impulses faster than light rather than the nerve impulses managing to move outwards, but this is an interpretation based on the co-ordinate system they are using. In your co-ordinate system you will notice nothing unusual.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464761018753052, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/45247/list
|
## Return to Answer
2 more details
I think it would be NP complete. One NP-complete problem is to determine whether a graph has a vertex cover of size $k$: given a graph with m vertices and e edges, is there a set of k vertices including at least one endpoint of each edge? To encode this in your problem, assign a unique prime $p_i$ to each edge $e_i$, let $P=\prod p_i$, and assign to each vertex $v$ the integer $\frac{P}{\prod_{v \in e_i}p_i}$. Then the $\gcd$ is $1$ and a subset with that $\gcd$ is a vertex cover of the edges.
I am sure that there are other more elegant covering or satisfiability problems but that will do.
I'd say leave the issue of factoring out of it by assuming that the factorizations are all known. Of course then you could replace each integer $2^a3^b5^c\cdots$ by a vector $[a,b,c,\cdots]$ and look at the entry-wise minimum over the whole set and over various subsets.
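A toy illustration of this reduction (my own sketch, not part of the answer), on a 4-cycle:

```python
# Edges get distinct primes; vertex v gets P divided by the primes of its incident edges.
# A subset of vertices has gcd 1 exactly when it covers every edge.
from math import gcd, prod
from functools import reduce
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
primes = dict(zip(edges, (2, 3, 5, 7)))
P = prod(primes.values())
vertices = sorted({v for e in edges for v in e})
number = {v: P // prod(primes[e] for e in edges if v in e) for v in vertices}

def covers(S):
    return all(e[0] in S or e[1] in S for e in edges)

for k in (1, 2, 3):
    for S in combinations(vertices, k):
        g = reduce(gcd, (number[v] for v in S))
        assert (g == 1) == covers(set(S))
print("gcd of a subset is 1 exactly for the vertex covers of the 4-cycle")
```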
1
I think it would be NP complete. One NP complete problem is to determine if a graph has an Vertex Cover of size $k$: given graph with m vertices and e edges is there a set of k vertices including at least on endpoint of each edge. TO encode this in your problem, assign a unique prime $p_i$ to each edge $e_i$, let $P=\prod p_i$, and assign to each vertex $v$ the integer $\frac{P}{\prod_{v \in e_i}p_i}$. Then the $\gcd$ is $1$ and a subset with that $\gcd$ is a vertex cover of the edges.
I am sure that there are other more elegant covering problems but that will do.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532603025436401, "perplexity_flag": "head"}
|
http://mathforum.org/mathimages/index.php?title=Logistic_Bifurcation&oldid=27246
|
# Logistic Bifurcation
### From Math Images
Logistic Bifurcation
Field: Dynamic Systems
Created By: Diana Patton
Logistic Bifurcation
This is a section of a bifurcation diagram. It shows the relationship between a population's potential for growth and its size over time.
# Basic Description
Bifurcation diagrams are key to understanding how dynamic systems change as the parameters used to model them grow and shrink. The bifurcation diagram of logistic systems shows how changes in conditions can lead population sizes toward stability or chaos.
### The Logistic Map and Logistic Systems
When there are fewer animals in a population, the population can grow at a faster rate.
When there are more animals in a population, they run out of space and resources, causing population growth rate to slow or reverse.
Consider a school of fish living in a pond. Two factors affecting the population size are fixed and easy to determine – the initial population size, which is directly measurable, and the school’s maximum rate of change. We can think of the latter as the fecundity (fertility or birth rate; this article uses "fecundity" to refer to the maximum rate of change) of the species, a constant potential for growth that is based on the specific group of animals.
A third factor must be acting on the population, though, because as the fish population expands, it will begin to run out of space and food in the pond, leading to a decrease in its rate of growth. There is, essentially, a maximum population density that the pond can sustain. When the population is low, it can operate close to its full fecundity, but as it approaches that maximum density, its rate of reproduction drops.
This sort of constraint exists for any population, and it causes some distinctive behaviors that we can model using the logistic map. The logistic map is not a "map" in the way that we usually use the term – it is not an image that conveys many pieces of information at once – but rather a function that takes the population size at the current time and returns what the population size will be after one time interval (usually a year) has passed.
Image 1. The path shown here is a logistic system over 50 years. Each green point is generated by applying the logistic map to the previous green point. For example, point B is generated by applying the logistic map to point A, F is generated by applying the logistic map to E, and Q comes from the logistic map when applied to P.
In this sense, the logistic map is not so different from the paper maps we use to follow paths through the woods. The main difference is that the logistic map only defines one step of a path; to see the whole path, we apply the logistic map to our starting point (the initial population size), then observe where that step has taken us, and apply the logistic map again to our new location (the current population size), and repeat the process. The path that appears as we do this is called a logistic system – a population, considered over an extended time period, whose size after every successive time interval is determined by applying the logistic map to its previous size. A logistic system is shown in Image 1. Each point in the system is the result of inputting the previous point into the logistic map.
It is important to note that the logistic map is not just any function that models population change, but is in fact a very specific function. It multiplies three factors to calculate where a population size will be after the passage of one time interval:
• The current population size.
• The fecundity of the population. Recall that this is the maximum rate of change – the speed at which the species can reproduce under ideal conditions.
• The difference between the current population size and the maximum population size.
Think about this third factor for a moment. Notice that it is smaller when the population is larger and larger when the population is smaller. In this way, it allows us to mathematically represent the manner in which a population’s rate of change is fast when it has more room to grow, but slows or reverses as it approaches its maximum sustainable value.
This model, of course, is highly simplified – in a real-world animal population, myriad other variables affect population changes. But the logistic map and the logistic systems it generates provide a general framework for considering the overall shape or movement of population growth in most animal communities. To read more about the relationship between the logistic map and real animal populations, jump to The Issue of Real-World Applications.
Image 2. Logistic systems bifurcate as their rates of change increase.
### Bifurcation and Chaos
Bifurcation occurs when changing a parameter causes a dynamic system to "branch" into multiple values. In the case of logistic bifurcation, we are considering the limits or end behaviors of logistic systems. Recall that a logistic system is different from the logistic map – the logistic map only describes the relationship between the current population size and the subsequent one, and therefore has no meaningful "end behavior." A logistic system, on the other hand, changes over time to approach either a steady value, a stable oscillation (such as the one we saw in Image 1), or chaos.
In order to better understand this idea, let us consider how a bifurcation diagram is plotted. To create a bifurcation diagram, we generate a logistic system for every value of a range of fecundities, let those systems run over many iterations of the logistic map to see their end behaviors, then plot their limits as a set of points over the axis of their fecundities. The method of plotting is similar to that for creating a scatter plot, but what we observe is far from scattered.
Consider Image 2, where I used the fecundity range 2.5 to 4 and displayed the limits of the logistic systems generated by those parameters. The branching visible in the result indicates that, as fecundity increases, the end behaviors of the population sizes of these systems cease to be constant and begin fluctuating between multiple values – first two values, then four, then eight, etc. As fecundity continues to grow, the diagram appears grey as the "branches" fill the range of possible values, showing that the system has become chaotic.
However, to read this diagram, do not think of the branching action as a continuous motion. Instead consider a single vertical line through the image. Such a line captures exactly one system – that is, one animal community – with one fecundity rate, where each black point that the vertical line intersects is a population size that the system yields after an infinite passage of time intervals. Thus the portion of the diagram with two "branches" appears over the range of fecundities that create systems that oscillate between two population sizes; the portion with eight "branches" shows the range of systems that oscillate among eight population sizes; and the grey portions show the ranges of systems that are chaotic and oscillate among all possible values.
Here we see how the branches and grey areas of the logistic bifurcation diagram correspond to the development of actual systems generated by many iterations – in this case 100 – of the logistic map. Note that while the systems do not start on the points shown on the diagram, they quickly approach them, or in the case of the chaotic system, move among all of them. Key points on the diagram appear where the value of fecundity is 3 – where bifurcation begins – and where fecundity is 4 – where chaos becomes continuous.
This indicates an interesting property of logistic systems: While both the initial population size and the fecundity of that population are variables in the logistic map, the fecundity is more mathematically powerful when the map is iterated to generate a system; except in very special circumstances, it is this value that determines whether the size of the population settles to a specific value, oscillates between two or more values, or becomes chaotic.
# A More Mathematical Explanation
### Deriving the Logistic Map
The logistic map is a function that defines the amount of change a system goes through in exactly one time interval. When we iterate it, we generate a logistic system that models population growth as discussed above. So to find the logistic map, let us simply start with a function we can iterate to generate basic, unrestricted population growth:
$x_{n+1}=Px_n$
Where $x_{n+1}$ is the population size or density at time $n+1$. If we iterate this function to generate a system, we will see a pattern of unbounded growth (geometric growth when $P>1$). But, as discussed above, indefinite and steady growth is not a realistic model of population growth in the real, ecological world. To account for the changing rate of change, $P$, of an actual population, we construct a new $P$:
$P=\mathbf{r}(1-x_n)$
Where $\mathbf{r}$ is a parameter for the fecundity or maximum rate of change of the population. Here we have set the maximum carrying capacity for the population (discussed above in terms of maximum population density) to 1. We can think of $x_n$ as a percent population density or a population size set to a scale where the unit measure is the carrying capacity of the environment. Either way, we set $0 < x_0 < 1$.
In this way, the $(1 - x_n)$ factor means that the overall rate of change, $P$, is higher when $x_n$ is lower and lower when $x_n$ is higher. This fits our earlier discussion of population change; fluctuations in rate of change are directly and inversely related to population size, because as the population grows, it runs out of space and resources, and its growth decreases.
Re-inserting this P in our initial representation of growth, we have:
$x_{n+1}=\mathbf{r}x_n(1-x_n)$
This is the logistic map. When we iterate it, we generate a logistic system that simulates changes in population size over time.
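To make the iteration concrete, here is a minimal Python sketch (not part of the original page) that iterates the logistic map for a few fecundity values and prints the values each system settles on; the output matches the behaviors discussed below – a single value near 0.6552 for r = 2.9, a two-value oscillation for r = 3.4, and chaotic wandering for r = 3.9.

```python
# Iterate the logistic map and look at the end behavior for a few fecundities r.
def logistic_step(r, x):
    return r * x * (1 - x)

for r in (2.9, 3.4, 3.9):
    x = 0.5                    # an arbitrary initial population density
    for _ in range(1000):      # let transients die out
        x = logistic_step(r, x)
    tail = []
    for _ in range(6):         # record a few subsequent values
        x = logistic_step(r, x)
        tail.append(round(x, 4))
    print(r, tail)
```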
The heavy lifting of creating a logistic bifurcation diagram is all done by the logistic map. All that is needed to reveal the patterns of bifurcation is iteration. This is a basic outline of the process, with arrows indicating assignment of value:
r ← set of r values over the range you wish to observe. The smaller the step between values, the better quality image you will produce.
a ← empty or zero set of a size equal to the number of iterations of the logistic map you would like to run to observe end behavior.
b ← empty or zero set of a size equal to the maximum number of points you would like to plot over each r value.
For i ← 1 to (length of r)
    For j ← 1 to (length of b)
        a_1 ← randomly generated value between 0 and 1.
        For k ← 1 to ((length of a) - 1)
            a_(k+1) ← r_i * a_k * (1 - a_k)
        b_j ← a_(length of a)
        plot b_j over r_i
% Bifurcation diagram: for each fecundity r, iterate the logistic map from 100
% random starting densities and plot the value each system settles on.
r_vec = linspace(2.5,4,1000);    % fecundity values to scan
x_vec = zeros(1,1000);           % one trajectory of the logistic map
x_temp = zeros(1,100);           % long-run values recorded for the current r
for j = 1:1000
    for k = 1:100
        x_vec(1) = rand;                                   % random initial density in (0,1)
        for i = 1:999
            x_vec(i+1) = r_vec(j)*x_vec(i)*(1-x_vec(i));   % one step of the logistic map
        end
        x_temp(k) = x_vec(1000);                           % keep the final value
        hold on;
        plot(r_vec(j),x_temp(k),'k');                      % plot it above r
    end
end
### A Mathematical View of Bifurcation
What occurs, mathematically, in logistic systems to cause bifurcation? To understand, let us consider first what condition is necessary to prevent bifurcation; what occurs in the range 0 < r < 3, where systems yield a single value?
Since a logistic system is created by iterating the logistic map, we can answer that question easily. In order for any iterated function to yield only one value, it must be generating that value over and over at every step. That is, it must have the form:
$x_{n+1}=x_n$
Inputting the logistic map, this means that, where the bifurcation diagram has a single line showing the range of logistic systems that yield a single value, we have:
$x_{n+1}=\mathbf{r}x_n(1-x_n)=x_n$
This gives us a system of equations,
$x_{n+1}=\mathbf{r}x_n(1-x_n)$
$x_{n+1}=x_n$
This system of equations is represented graphically in Image 2 for the system at r = 2.9, where the intersection of the black line and the red curve is the solution to the system. (The intersection at (0,0) is a trivial solution.) We can use a web diagram to see that this intersection is indeed the final xn value for the system.
Image 2. Eq. 6 is shown in red with Eq. 7 in black. Here we introduce the notation $x_{n+k}$ on the vertical axis, where $k$ is a placeholder for the number of iterations of the logistic map represented by the graph.
Image 3. A web diagram showing the iterations of the logistic map for r = 2.9. The blue lines converging on the intersection at ~0.6552 show that this logistic system approaches that $x_n$ value and stays there.
Web diagrams, such as the one in Image 3 (also for r = 2.9), will be integral to our discussion of bifurcation, so let us take a moment to consider how they work. Such diagrams trace the evolution of iterated functions by representing each iteration as a pair of lines on a graph. They are laid out on the framework of the functions
$x_{n+1}=x_n$
and
$x_{n+1}=f(x_n)$
where f is the iterated function’s relationship between each step.
The "web" portion of the diagram, shown in blue in Image 3, begins at some point, (x0, 0), then proceeds in a series of perpendicular lines that move between the graphed curve and graphed line so that the lines have consecutive endpoints, $(x_0, 0)\rightarrow (x_0, f(x_0))\rightarrow (f(x_0), f(x_0))\rightarrow (f(x_0), f(f(x_0)))\rightarrow \dots$
That is, starting at a point $x_0$ on the horizontal $x_n$ axis, we move vertically until we meet the curve of $f(x_n)$. In this way, we apply the function $f$ to our original $x_0$. To set this new value as our new $x_n$, as we do when iterating a function, we must move horizontally until we meet the line $x_{n+1} = x_n$. From that point we can repeat the process – move vertically to the curve, move horizontally to the line – to find the values of further iterations.
Thus every two consecutive vertices of the web diagram show the progression through one iteration of a function. In the case of the r = 2.9 diagram here, the lines of the web spiral in to the intersection of the line and the curve, showing us that the system converges to a single point – the same intersection that we found as the solution to the system of equations above. (In this case, as both the system of equations and web diagram show, that value is ~0.6552.)
At this point, we have found, analytically, the point at which logistic systems approach a single value. Using a web diagram, we confirmed that, for the example of r = 2.9, the point we can find analytically is in fact the point the system approaches. This is called the period-one fixed point (a fixed point of a function is a point that is mapped to itself by the function). The bifurcation diagram supports what we found here: r values less than 3 create logistic systems that converge to single values.
Image 4. The blue lines in this web diagram do not converge to a single point, but rather converge to a box in which they orbit infinitely. This shows that the system oscillates between the two points at which this box intersects the $x_{n+1} = x_n$ line.
What about r values greater than 3? The bifurcation diagram shows us that such values generate systems that either oscillate among many values or are chaotic. However, if we examine systems with r values greater than 3, we will find that they, too, have period-one fixed points where the line $x_{n+1} = x_n$ intersects the curve $x_{n+1} = f(x_n)$. But as we can see in the web diagram for r = 3.4 in Image 4, these systems yield oscillations that never reach the systems' period-one fixed points.
What happens at bifurcation points of the logistic map to stop systems from converging to their period-one fixed points? The phenomenon is clearer if we think of bifurcation not as "branching" but as period doubling – a doubling of the number of iterations necessary to return the system to any previous value. Systems that yield a single value have a period of 1, as we have seen above represented by $x_{n+1} = x_n$. And systems that yield an oscillation between two values – such as the one we observe in the web diagram for r = 3.4 – have a period of 2, where
$x_{n+2}=x_n$
To input the logistic map in this expression, we must find an expression for $x_{n+2}$ in terms of the logistic map. We can do this following the rules of iteration, where
$x_{n+1}=f(x_n)~~\rightarrow~~x_{n+2}=f(f(x_n))$
If we express the logistic map as a function l where
$x_{n+1}=l(x_n)=\mathbf{r}x_n(1-x_n)$
then we have
$x_{n+2}=l(l(x_n))=\mathbf{r}l(x_n)(1-l(x_n))$
$x_{n+2}=\mathbf{r}(\mathbf{r}x_n(1-x_n))(1-\mathbf{r}x_n(1-x_n))$
$x_{n+2}=\mathbf{r}^2x_n(1-x_n)(1-\mathbf{r}x_n(1-x_n))$
Image 5. Eq. 10 is shown in red, with Eq. 11 (the line) shown in black. The curve of the first iteration of the logistic map is included in black to show the location of the period-one fixed point. Notice that, while the red curve also intersects that point, it is the other two non-zero intersections of the red curve and the black line that correspond to the oscillation points we see in the web diagram in Image 4.
Returning to Eq. 6, we find that this, like a system of period 1, creates a system of equations:
$x_{n+2}=\mathbf{r}^2x_n(1-x_n)(1-\mathbf{r}x_n(1-x_n))=x_n$
$x_{n+2}=x_n$
This system is shown in Image 5 for r = 3.4. Here we see the system has two solutions other than the trivial point (0, 0) and the fixed point. Comparing the web diagram in Image 4 and the system in Image 5, we see that these solutions lie at points corresponding to the two values between which the system oscillates. We can call these solutions period-two fixed points.
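These fixed points can also be found symbolically. The short sketch below (not part of the original page, and assuming sympy is available) solves $l(x)=x$ and $l(l(x))=x$ for r = 3.4, recovering the period-one fixed point 12/17 ≈ 0.7059 together with the two oscillation values near 0.4520 and 0.8422 seen in the web diagram.

```python
# Period-one and period-two fixed points of the logistic map at r = 3.4.
import sympy as sp

x = sp.symbols('x')
r = sp.Rational(17, 5)                       # r = 3.4
l = lambda t: r * t * (1 - t)                # one application of the logistic map

period_one = sp.solve(sp.Eq(l(x), x), x)     # solutions of x_{n+1} = x_n
period_two = sp.solve(sp.Eq(l(l(x)), x), x)  # solutions of x_{n+2} = x_n
print(period_one)                            # [0, 12/17]
print(sorted(float(s) for s in period_two))  # 0.0, ~0.4520, ~0.7059, ~0.8422
```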
Earlier, we found that period-one fixed points exist in all logistic systems, whether or not they approach only one point. Now, we will see that the condition x(n+2) = xn also has a solution for logistic systems with r < 3. This would seem to indicate that there are period-two fixed points as well as the period-one fixed point that we found as the limit to such systems in Eq. 5, so why do these systems not oscillate as well? The reason is clear if we graph the situation as we have in Image 6 for r = 2.9. We can see by the graph that, while both x(n+1) = xn and x(n+2) = xn have solutions, the two solutions are equal, so the system does not orbit to any higher-periodic fixed points.
Image 6. Here we see the logistic map for r = 2.9 plotted for a single iteration (in red) and two iterations (in green) along with the line xn + 1 = xn (in black). Note that the red and green curves both intersect the black line in the same places, showing that two iterations do not produce any period-two fixed points that are distinct from the period-one fixed point. Image 7. Here we see the logistic map for r = 3.4 plotted for two iterations (in red) and three iterations (in green) along with the line xn + 1 = xn (in black). The curve for one iteration is also provided in black for reference. Note that, while two iterations show distinct period-two fixed points, three iterations do not.
Similarly, if we graph x(n+3) = xn for r = 3.4, as we have in Image 7, we do not find any new solutions that we did not see in Image 5. But for r = 3.54, shown as a system of equations in Image 8 and a web diagram in Image 9, we see that where there are solutions to higher-order iterations – that is, higher-periodic fixed points – the system will yield values corresponding to those solutions.
Image 8. Here the logistic map at r = 3.54 is plotted for four iterations in red, along with black curves showing one and two iterations, and the black line xn + 1 = xn. In this case, higher iterations do yield higher-periodic fixed points. In the web diagram for this situation in Image 9, we see that the logistic system has limits at those higher-periodic fixed points. Image 9. In this web diagram for the logistic system at r = 3.54, the blue lines converge to an orbit that resembles two boxes. Each point where these boxes cross the black line xn + 1 = xn represents one of the limits of the system. Note that these points are the same as the highest-order fixed points in Image 8.
So what are we seeing here? There seems to be a pattern: logistic systems have limits located at their highest-periodic distinct fixed points. To form a general rule, we will bring back the term l(xn) from Eq. 9. Based on the pattern we have seen, at the highest value of k for which
$x_{n+k}=l^k(x_n)$
has new solutions (solutions of the resulting system of equations that do not appear at any lower value of k), those new solutions are the end behaviors of the logistic system in question. The logistic bifurcation diagram shows us that, as r values increase, the number of distinct higher-periodic fixed points also tends to increase. In fact, at r = 4, the number of distinct fixed points has increased to the point that it is infinite, and we observe chaos.
You can find the limits of any logistic system yourself, with no graphs or diagrams, by simply solving the system of equations produced by Eq. 12. Start at k = 1, and continue until you reach a k value that yields only solutions that appear at lower k values. The system of equations before that point contains the solutions you are looking for; all solutions to those equations that do not exist at lower k values are limits of the logistic system in question. Of course, if you find web diagrams more appealing, you will find the same results by directly diagramming the iterations of the logistic map.
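If you would rather let a computer do the iterating, the short sketch below (mine, plain Python) runs the logistic map past its transient behavior and reports the distinct values the orbit keeps returning to, for several of the r values discussed above.

```python
def logistic_limits(r, x0=0.2, burn_in=1000, sample=64):
    """Iterate x_{n+1} = r*x_n*(1 - x_n), discard the transient,
    then collect the values the orbit keeps visiting."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

for r in (0.8, 2.9, 3.4, 3.54):
    print(r, logistic_limits(r))   # one value, one value, two values, four values
```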
### Special Cases
#### r < 1
We have not paid much attention to those logistic systems in the range 0 < r < 1, because these systems all converge to xn = 0. The animation to the left should help illustrate why: no matter what x0 begins the system, the values inevitably move toward zero.
To see the same result obtained mathematically,
First, let us take as given the properties of the logistic map that 0 < x0 < 1 and that such an x0 will never produce xn values outside the range 0 < xn < 1 as long as r is between 0 and 4. (The former assertion is part of the definition of the logistic map; the latter is a basic property that you can verify for yourself.)
Now let us assume that there is, in fact, some non-zero solution to the system of equations
$x_{n+1}=\mathbf{r}x_n(1-x_n)=x_n$
where 0 < r < 1.
Based on the properties of systems of equations, any solution to this system must satisfy
$\mathbf{r}x_n(1-x_n)-x_n=0$
or
$x_n(\mathbf{r}-\mathbf{r}x_n-1)=0$
The root xn = 0 does not meet our criterion of a non-zero solution, leaving us with
$\mathbf{r}-\mathbf{r}x_n-1=0$
or
$\mathbf{r}-\mathbf{r}x_n=1$
However, with the condition that 0 < r < 1 and the previously established property of the logistic map that 0 < xn < 1, the left-hand side r - rxn = r(1 - xn) is a product of two positive numbers that are each less than 1, so it can never equal 1; the equation has no solution.
Thus, our initial assumption cannot hold and we see that, after sufficient iterations, a logistic system with r < 1 cannot yield any limit other than zero.
#### r ≥ 4
As mentioned earlier, r = 4 marks the beginning of continuous chaos – any r value of 4 or greater yields a chaotic logistic system. To the left is a web diagram showing only 250 iterations of the logistic map at r = 4. It is clear that no pattern is emerging; if more iterations were added to the web diagram, it would begin to look completely "filled-in" with blue, but the iterations would continue to move to new points, distinct (by infinitesimal values) from any previous points.
If we were to attempt to analyze the system at r = 4 using a system of equations, as we did for previous systems, we would find that there is an infinite number of higher-order period functions that have intersections with the line x(n+1) = xn. We found above that the logistic map settles on its highest-order periodic oscillation with distinct solutions. But there is no such highest-order oscillation for a system with r ≥ 4, so such systems are non-periodic; they never repeat, instead moving chaotically.
There is only one blue line in this web diagram, despite the fact that at r = 4 we would generally observe chaos. This occurs because the line begins on the horizontal axis precisely below the intersection of the line and curve and moves directly to the system's period-one fixed point. From that point, because the web is in contact with both the line and the curve, it does not move; the system will never yield any other values.
#### x0 at a Lower-Periodic Fixed Point
Thus far, we have operated on the claim laid forth in the "Basic Description" that the r value has a much greater impact on the outcome of the logistic map than the x0 value does. In general, this is true. We can see the validity of the claim in the web diagrams; given an r value, almost any valid (that is, 0 < xn < 1) starting point on the xn axis will move to the same point, oscillation, or chaotic motion, determined by that r value. The only exception is for x0 equal to one of the lower-periodic fixed points of the system. In that case, as shown on the left, the system will never yield any value other than the one or ones represented by the fixed point or points. This is the only case in which the value of x0 impacts the values generated by the logistic map.
# Why It's Interesting
### The Issue of Real-World Applications
Does the type of chaotic growth predicted by the logistic map ever actually occur in the natural world? Though the logistic map was created to mimic real population development, this question is still highly controversial among biologists. Several laboratory experiments on microscopic organisms and insects seem to indicate that chaotic population patterns are possible, but none have been conclusively proven to arise spontaneously, outside the lab.
This is in large part due to the high number of variables, many of them difficult to measure, that contribute to population changes when the population in question is not in a controlled setting. These make it almost impossible to determine whether populations that appear to behave chaotically are doing so because of fecundity rates predicted to cause chaos or because of other factors such as disease, drought, or famine. Coral reefs, for example, seem to exhibit chaos in many of the populations they support, but they are also very delicate systems. It is unclear whether this delicacy is a sign of logistic chaos or simply a complex web of other variables making stability look like chaos.
Many scientists argue that, if chaos is possible in nature, it must be incredibly rare and short-lived. Recall that "chaos" in this case means that the population size moves, at some point, to every possible value, and does this in a manner that has no recognizable pattern. A population that has such frequent and erratic dips to low sizes would be particularly susceptible to any natural disasters or drops in levels of resources, most likely quickly leading to its extinction. In this way, some scientists say, natural selection would eliminate any traits that lead to levels of fecundity that cause chaos. This, however, is only a theory; the debate is ongoing.
### Fractal Properties
The logistic system, like many other dynamic systems, is created by applying a single process over and over to one initial condition. Because of this, any one step in a logistic system, no matter how far it is from x0, will look the same, mathematically, as any other single step. In this way, logistic systems share an important characteristic with fractals, and visual representations of them often have fractal dimensions of self-similarity. The logistic bifurcation diagram is one such representation; many parts of the diagram, taken by themselves, resemble the whole. You can use the applet below to explore the diagram by zooming in on different sections. Can you find the sections that exhibit self-similarity?
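If the applet is unavailable, a rough version of the diagram can be generated with a few lines of numpy and matplotlib; this sketch and its parameter choices are mine, not part of the original page.

```python
import numpy as np
import matplotlib.pyplot as plt

def bifurcation(r_min=2.8, r_max=4.0, points=2000, burn_in=500, keep=200):
    """Plot the values that x_{n+1} = r*x_n*(1 - x_n) settles on, for a range of r."""
    r = np.linspace(r_min, r_max, points)
    x = np.full_like(r, 0.5)
    for _ in range(burn_in):        # discard the transient behavior
        x = r * x * (1 - x)
    for _ in range(keep):           # plot the values the orbit settles on
        x = r * x * (1 - x)
        plt.plot(r, x, ',k', alpha=0.25)
    plt.xlabel('r'); plt.ylabel('x')
    plt.show()

bifurcation()               # the whole diagram
# bifurcation(3.82, 3.87)   # zoom near the period-3 window, which contains
#                           # miniature copies of the whole diagram
```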
Applet created by Professors [Takashi Kanamaru] and [J. Michael T. Thompson].
|
http://nrich.maths.org/543
|
# System Speak
##### Stage: 4 and 5 Challenge Level:
Solve the system of equations:
$ab = 1$
$bc = 2$
$cd = 3$
$de = 4$
$ea = 6$
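One way to check an answer (outside the spirit of the challenge, so look away if you would rather solve it by hand) is to hand the system to a computer algebra package; this short sketch assumes sympy is available.

```python
from sympy import symbols, solve

a, b, c, d, e = symbols('a b c d e')
equations = [a*b - 1, b*c - 2, c*d - 3, d*e - 4, e*a - 6]
print(solve(equations, [a, b, c, d, e]))   # two solution tuples, one the negative of the other
```

Multiplying all five equations together gives $(abcde)^2 = 1\cdot 2\cdot 3\cdot 4\cdot 6 = 144$, which is why the solver returns exactly two solutions.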
|
http://www.cfd-online.com/W/index.php?title=Baldwin-Lomax_model&diff=13274&oldid=13267
|
# Baldwin-Lomax model
### From CFD-Wiki
The Baldwin-Lomax model [Baldwin and Lomax (1978)] is a two-layer algebraic 0-equation model which gives the eddy viscosity, $\mu_t$, as a function of the local boundary layer velocity profile. The model is suitable for high-speed flows with thin attached boundary-layers, typically present in aerospace and turbomachinery applications. It is commonly used in quick design iterations where robustness is more important than capturing all details of the flow physics. The Baldwin-Lomax model is not suitable for cases with large separated regions and significant curvature/rotation effects (see below).
## Equations
$\mu_t = \begin{cases} {\mu_t}_{inner} & \mbox{if } y \le y_{crossover} \\ {\mu_t}_{outer} & \mbox{if } y > y_{crossover} \end{cases}$ (1)
Where $y_{crossover}$ is the smallest distance from the surface where ${\mu_t}_{inner}$ is equal to ${\mu_t}_{outer}$:
$y_{crossover} = MIN(y) \ : \ {\mu_t}_{inner} = {\mu_t}_{outer}$ (2)
The inner region is given by the Prandtl - Van Driest formula:
${\mu_t}_{inner} = \rho l^2 \left| \Omega \right|$ (3)
Where
$l = k y \left( 1 - e^{\frac{-y^+}{A^+}} \right)$ (4)
$\left| \Omega \right| = \sqrt{2 \Omega_{ij} \Omega_{ij}}$ (5)
$\Omega_{ij} = \frac{1}{2} \left( \frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i} \right)$ (6)
The outer region is given by:
${\mu_t}_{outer} = \rho \, K \, C_{CP} \, F_{WAKE} \, F_{KLEB}(y)$ (7)
Where
$F_{WAKE} = MIN \left( y_{MAX} \, F_{MAX} \,\,;\,\, C_{WK} \, y_{MAX} \, \frac{u^2_{DIF}}{F_{MAX}} \right)$ (8)
$y_{MAX}$ and $F_{MAX}$ are determined from the maximum of the function:
$F(y) = y \left| \Omega \right| \left(1-e^{\frac{-y^+}{A^+}} \right)$ (9)
$F_{KLEB}$ is the intermittency factor given by:
$F_{KLEB}(y) = \left[1 + 5.5 \left( \frac{y \, C_{KLEB}}{y_{MAX}} \right)^6 \right]^{-1}$ (10)
$u_{DIF}$ is the difference between maximum and minimum speed in the profile. For boundary layers the minimum is always set to zero.
$u_{DIF} = MAX(\sqrt{u_i u_i}) - MIN(\sqrt{u_i u_i})$ (11)
## Model constants
The table below gives the model constants present in the formulas above. Note that $k$ is a constant, and not the turbulence energy, as in other sections. It should also be pointed out that when using the Baldwin-Lomax model the turbulence energy, $k$, present in the governing equations, is set to zero.
| $A^+$ | $C_{CP}$ | $C_{KLEB}$ | $C_{WK}$ | $k$ | $K$ |
|-------|----------|------------|----------|-----|--------|
| 26 | 1.6 | 0.3 | 0.25 | 0.4 | 0.0168 |
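As a rough illustration of how equations (1)-(11) fit together, here is a one-dimensional Python/numpy sketch for an attached boundary-layer profile u(y). It is not part of this wiki page: the test profile, fluid properties and variable names are illustrative assumptions, and a production implementation would also deal with the issues discussed under "Implementation issues" below.

```python
import numpy as np

# Model constants from the table above (K_CLAUSER is my name for the constant K)
A_PLUS, C_CP, C_KLEB, C_WK, KAPPA, K_CLAUSER = 26.0, 1.6, 0.3, 0.25, 0.4, 0.0168

def baldwin_lomax(y, u, rho, mu):
    """Return mu_t(y) for a simple attached boundary layer (u_min taken as zero)."""
    dudy = np.gradient(u, y)                  # in 1-D, |Omega| reduces to |du/dy|
    omega = np.abs(dudy)
    u_tau = np.sqrt(mu / rho * omega[0])      # friction velocity from the wall gradient
    y_plus = rho * u_tau * y / mu
    damping = 1.0 - np.exp(-y_plus / A_PLUS)  # Van Driest damping

    mut_inner = rho * (KAPPA * y * damping)**2 * omega              # Eqs. (3)-(4)

    F = y * omega * damping                                         # Eq. (9)
    i_max = np.argmax(F)
    y_max, F_max = y[i_max], F[i_max]
    u_dif = u.max()                                                 # Eq. (11)
    F_wake = min(y_max * F_max, C_WK * y_max * u_dif**2 / F_max)    # Eq. (8)
    F_kleb = 1.0 / (1.0 + 5.5 * (C_KLEB * y / y_max)**6)            # Eq. (10)
    mut_outer = rho * K_CLAUSER * C_CP * F_wake * F_kleb            # Eq. (7)

    # Eqs. (1)-(2): inner value up to the first crossover, outer value beyond it
    crossings = np.nonzero(mut_inner >= mut_outer)[0]
    cross = crossings[0] if crossings.size else y.size
    mu_t = mut_inner.copy()
    mu_t[cross:] = mut_outer[cross:]
    return mu_t

# Purely illustrative profile: linear up to y = 0.01 m, then uniform at 30 m/s
y = np.linspace(1e-6, 0.05, 400)
u = 30.0 * np.minimum(y / 0.01, 1.0)
mu_t = baldwin_lomax(y, u, rho=1.2, mu=1.8e-5)
```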
## Model variants
The Baldwin-Lomax model can be improved by modifying the model constants to account for the effect of adverse pressure gradients. This has been done by Granville, and by Turner and Jennions. For further information see the references below.
## Performance, applicability and limitations
The Baldwin-Lomax model is suitable for high-speed flows with thin attached boundary layers, typically found in aerospace and turbomachinery applications. It is a low-Re model and as such it requires a fairly well-resolved grid near the walls, with the first cell located at $y^+ < 1$.
The model is popular in quick design-iterations due to its robustness and reliability. It seldom leads to any convergence problems and it seldom gives completely unphysical results.
The Baldwin-Lomax model should be used with great care in cases with large separations. It has been shown by several researchers that the Baldwin-Lomax model tends to overpredict separated regions (see for example the comments made by David Wilcox [Wilcox (1998)]). However, there are ad-hoc modifications which reduce this problem. For instance, prediction of separation is sensitive to the value of the $C_{WK}$ coefficient and higher values than the original value tend to reduce the problems with too early separation. Also note that the Granville correction mentioned above, which attempts to account for adverse pressure gradient effects, increases the problem with too large separations.
The Baldwin-Lomax model does not account for the effect of a high free-stream turbulence level. Hence, it cannot be used reliably when the free-stream turbulence has a significant effect on the boundary layer development.
## Implementation issues
The computation of most of the model looks to be relatively straightforward, but upon further examination, a few issues crop up. First, the model is nonlocal in nature due to the presence of the damping function. This means that for any location in the flow interior, we need a wall (or other suitable location) to compute a $y^+$ from. Further, the calculation of $y_{MAX}$ and $F_{MAX}$ is best suited to a structured grid in which grid lines emanate outward from a wall (or wakeline, etc.). The model is thus best used in a structured grid setting, but has been used with unstructured grids via background grids [Mavriplis (1991)]. Second, the determination of $y_{MAX}$ and $F_{MAX}$ is sensitive to gridpoint location, as the vorticity magnitude is typically only available pointwise. One solution (perhaps with limited justification) is to do a fit of $F$ to reduce any problems. Finally, it is tempting to use the minimum of the two (inner and outer) eddy viscosity results instead of the correct crossover formula. This simplifies the programming, but is not justifiable on any other grounds (and can lead to the use of the wrong eddy viscosity). The (minimal) additional programming is required for correct model implementation.
We need some further information here about what to think about when implementing this model in a CFD code. For example, there are some issues when computing the max and min values in the formulas - in complex 3D cases you can sometimes find several local mins/maxs. Can anyone add something about this?
## References
• Baldwin, B. S. and Lomax, H. (1978), "Thin Layer Approximation and Algebraic Model for Separated Turbulent Flows", AIAA Paper 78-257.
• Granville, P. S. (1987), "Baldwin-Lomax Factors for Turbulent Boundary Layers in Pressure Gradients", AIAA Journal, Vol. 25, No. 12, pp. 1624-1627.
• Mavriplis, D. J. (1991), "Algebraic turbulence modeling for unstructured and adaptive meshes", AIAA Journal, Vol. 29, pp. 2086-2093.
• Turner, M. G. and Jennions, I. K. (1993), "An Investigation of Turbulence Modeling in Transonic Fans Including a Novel Implementation of an Implicit $k-\epsilon$ Turbulence Model", Journal of Turbomachinery, Vol. 115, April, pp. 249-260.
• Wilcox, D.C. (1998), Turbulence Modeling for CFD, ISBN 1-928729-10-X, 2nd Ed., DCW Industries, Inc..
|
http://mathoverflow.net/questions/29161?sort=newest
|
## Does the concept of a basis for a topology on a category exist?
If we want to define a sheaf F on a topological space X and we have a basis B for the topology of X, what we can do is to define objects and restrictions for guys in B, check that they satisfy the "B-sheaf axioms" and then use
Theorem 1: the B-sheaf extends uniquely to the whole of X.
I was wondering if there's a similar thing for more general sites, and actually not just for sheaves on a given site but for stacks.
The question I'm really interested in is the following:
If one has a fibred category over Schemes (say Schemes over some fixed field with the fppf topology) and one wants to check that descent is effective, would it be sufficient to check it on some subcategory of schemes (using perhaps some vague analogue of Theorem 1)?
Thanks.
EDIT
For example one might want to construct the stack M of coherent sheaves on some scheme X. One way to do it is to define the functor which associates with each scheme S the groupoid of coherent sheaves on $S\times X$ flat over S $$M(S) = \{ E \in \operatorname{Coh}(S\times X) : E \text{ flat over } S \}.$$
Let's say I want to use a different characterization of $M(S)$, perhaps using Lemma 3.31 on page 82 of Huybrechts' Fourier-Mukai book.
My ignorance prevents me from knowing if that lemma is valid for a general scheme S (no matter how nice my X might be). This is why I'd like to work over some nice subcategory of schemes (where the lemma is valid) and then extend.
The stack I'd be interested in defining would be a stack of perverse sheaves on X, where matters would be a bit worse.
-
2
Let's address the question you say is of real interest. If your fibered category satisfies the limit criterion for locally finite presentation then for effective descent it suffices to check over base schemes of finite presentation over (or ground field, or whatever). I am vague since you were vague about finiteness hypotheses on your fibered category, so it isn't a precise answer. This principle is implemented in a precise way and is used all the time by those who look under the hood. It is an important application of the massive theory of limits of schemes in EGA IV3, sections 8--12, 17. – Boyarsky Jun 23 2010 at 0:23
Would it be too much to ask for an example (an easy one?) where the technique you describe is applied? – babubba Jun 23 2010 at 8:35
@angoleirovero: please name a fibered category for which you want to know effective descent for the fppf topology, such as a stack of interest to you which is not a scheme. – Boyarsky Jun 23 2010 at 8:57
@Boyarksy: I'll edit my question. – babubba Jun 29 2010 at 18:44
1
@angoleirovero: you mean "quasi-coherent and finitely presented", not "coherent". The general limit formalism implies that if $\{S_i\}$ is an inverse system of affine schemes with limit $S$ then $\varinjlim M(S_i) \rightarrow M(S)$ is an equivalence in a sense I hope is evident. If $S' \rightarrow S$ is an fppf cover then it arises from an fppf cover of some $S_i$ via base change. So if effective descent holds with $M$ for $S$ of finite type over $\mathbb{Z}$, it holds in general. (This example is crazy, since fppf descent for all quasi-coherent sheaves works directly on arbitrary schemes.) – Boyarsky Jun 30 2010 at 5:03
## 1 Answer
Let $S$ be your Grothendieck site. What you want is a subcategory $j:B \hookrightarrow S$ such that the Grothendieck topology of $S$ restricts to $B$ in the sense that every covering sieve of $b \in B$ can be refined by one coming from a family $(U_i \to b)_i$ with each $U_i \in B$, AND such that $j^*:Sh(B) \to Sh(S)$ is an equivalence of topoi. This is exactly what makes Theorem 1 work.
Concretely, you want the topology to restrict and every $s$ in $S$ to have a covering family $(b_i \to s)$ with $b_i \in B$, so that you can then say:
$$F(s):=\varprojlim \left(\prod_{i}F(b_i)\rightrightarrows \prod_{i,j}F(b_i \times_{s} b_j)\right).$$
However, you need to make sure this doesn't depend on the covering family you chose.
Suppose only that every $s$ in $S$ has a covering family $(b_i \to s)$ with $b_i \in B$ and that the Grothendieck topology on $S$ restricts to $B$ in the sense described above. Then, since the Grothendieck topology is subcanonical, $s$ is a colimit of $b_i$s, hence $s \mapsto Hom(blank,s)$ embeds $S$ into $Sh(B)$ (note that the left-Kan extension of this embedding is precisely $j^*$, which is literally restriction).
I claim $j^*$ is fully-faithful. This is essentially because $Hom(j^*F,j^*G)$ for two sheaves on $S$ determines $Hom(F,G)$ since the value of $F(s)$ is determined by the value of $F$ on $b_i$s by the cover $(b_i \to s)$, by descent.
Now, if $F$ is a $B$-sheaf (i.e. an element of $Sh(B)$), then $j_*F(s)=Hom(j^*s,F).$ Hence, $$j_*j^*(F)(s)\cong Hom(s,j^*j_*F)\cong Hom(j^*s,j^*F)\cong Hom(s,F)\cong F(s),$$
by Yoneda, adjointness, and full faithfulness.
Note also that $j^*j_*(G) \cong G$ for all $G \in Sh(B)$ pretty much by definition. Hence the adjoint pair $j_*,j^*$ is an equivalence.
So, what does this mean concretely? You need to find a subcategory $B$ of schemes such that
1.) every cover in the fppf topology of an element $b \in B$ can be refined by one with domains in $B$ (at least you need to be able to find a family of morphisms whose SIEVE is in the topology GENERATED by the fppf pretopology)
2.) Every scheme can be covered by elements of $B$.
I'll leave it to you to find such a subcategory, as I don't know much AG.
P.S., everything I said will hold for stacks as well.
EDIT: Condition 2.) implies condition 1.), so this becomes simpler:
You just need a subcategory $B$ of schemes such that every scheme can be covered by elements of $B$.
-
I can't, for the life of me, figure out why this bit of Latex won't compile. I DIDN'T use "underleftarrow" anywhere, and I used backticks.... anyway, the thing that's not displaying is essentially THIS upload.wikimedia.org/math/c/d/6/… (but with the correct variables and intersection replaced with fibred-product). – David Carchedi Jun 23 2010 at 1:01
I assume you meant "...can be refined by one with \emph{domains} in $B$..." in your condition 1 above, anyway it seems to me that condition 2 implies 1: if you have a covering of $b$ by arbitrary objects of $S$, then by 2 you can find a covering of each one of those by elements of $B$, and putting these together will give you a refinement with domains in $B$... right? – Mattia Talpo Jun 23 2010 at 22:39
@Mattia: The domain/codomain thing was of course a typo due to the fact that I answered this at 3 in the morning. I'll fix it. As to your other comment, I totally agree. I should've made this simplification. Please let me also blame this on it having been 3 in the morning. – David Carchedi Jun 23 2010 at 23:00
Ok thanks, I only asked because I thought I was missing something.. Oh, and I totally understand about late hours :) – Mattia Talpo Jun 23 2010 at 23:19
|
http://mathoverflow.net/questions/24270?sort=oldest
|
## A number encoding all primes
This may be a soft question, but it's just something I thought of one night before sleeping. It's not my field at all, so I am just asking out of curiosity. Has anyone studied the number which is the sum over primes $\sum 2^{-p}$? Its binary expansion (clearly) has a 1 in each prime-numbered "decimal place", and a zero everywhere else, so, it should be important in number theory I would guess.
-
5
No, unlikely to be of interest in number theory. – Gerald Edgar May 11 2010 at 18:20
9
Why not use $\sum 10^{-p}$ instead, so you can remove the quotes around decimal place? Anyway, numbers of this form are nothing else than a curiosity but of little use. Even though they encode information about all primes, you need to input all the same information in the definition of the number. – Álvaro Lozano-Robledo May 11 2010 at 18:20
@Alvaro: That's fair enough. I used 2 because what made me think of this was an old homework problem from a number theory course I took in Russia: Write a "formula" for the n^th prime. You can use a similar "trick" as above to do so. – David Carchedi May 11 2010 at 18:39
3
Compare with the product formula for the Riemann zeta function: tinyurl.com/29bythb With the zeta function you elegantly get prime numbers combining together in all possible ways to form all natural numbers raised to the power of $s$. The whole thing is very natural. Your sum doesn't really have such properties. If you start trying to form powers of it (say) you get all kinds of "cross" terms that make it hard to assign meaning to the expansion. – Dan Piponi May 11 2010 at 19:55
1
there are numbers that also encode this thread; check them: they also include a lot of variations that we are ashamed to try here ;) – Pietro Majer Oct 28 at 21:16
## 3 Answers
Here is Hardy & Wright's answer from "An Introduction to the Theory of Numbers", (5th ed, p344), where they discuss a similar number:
"Although ... gives a 'formula' for the nth prime, it is not a very useful one. To calculate $p_n$ from this formula, it is necessary to know the value of $a$ correct to $2^n$ decimal places; and to do this, it is necessary to know the values of $p_1$, $p_2$, ..., $p_n$ ... There are a number of similar formulae which suffer from the same defect ... Any one of these formulae (or any similar one) would attain a different status if the exact value of the number $a$ which occurs in it could be expressed independently of the primes. There seems no likelihood of this, but it cannot be ruled out as entirely impossible."
-
You might take a look at the paper by Ferenc Adorjan, "Binary Mappings of monotonic sequences and the Aronson function". It specifically discusses the number you describe.
-
See http://oeis.org/A051006 and http://mathworld.wolfram.com/PrimeConstant.html which cover this particular sequence.
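For the curious, the constant itself is easy to approximate; a quick sketch of mine using mpmath and sympy, with an arbitrary cutoff on the primes:

```python
from mpmath import mp, mpf
from sympy import primerange

mp.dps = 60                    # decimal digits of working precision
cutoff = 250                   # primes above this change nothing in the first 60 digits
prime_constant = sum(mpf(2)**(-p) for p in primerange(2, cutoff))
print(prime_constant)          # approximately 0.4146825... (see OEIS A051006 for more digits)
```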
-
|
http://math.stackexchange.com/questions/315050/using-pigeonhole-principle-to-prove-two-numbers-in-a-subset-of-2n-divide-eac
|
# Using Pigeonhole Principle to prove two numbers in a subset of $[2n]$ divide each other
Let $n$ be greater or equal to $1$, and let $S$ be an $(n+1)$-subset of $[2n]$. Prove that there exist two numbers in $S$ such that one divides the other.
Any help is appreciated!
-
## 1 Answer
HINT: Create a pigeonhole for each odd positive integer $2k+1<2n$, and put into it all numbers in $[2n]$ of the form $(2k+1)2^r$ for some $r\ge 0$.
-
Ok, so for any set S=(1, 2, ..., 2n), we choose all odd numbers from that S, so So=(1, 3,...,2k+1) such 2k+1 < 2n. For each element in S choose those that satisfy (2k+1)2^r, which are multiples of each element in So. Let this set be S2 = (2, 6,...,(2k+1)2^r), as long as (2k+1)2^r < 2n. S2 is necessarily bigger than So, and thus, each element in So can be 'mapped' to multiple multiples of itself. Consequently, any set bigger than S, of size n+1, must also have this property. Is this reasoning correct? – user64093 Feb 26 at 18:38
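The hint amounts to mapping each element to its odd part: there are only $n$ odd numbers in $[2n]$, so two of the $n+1$ chosen numbers share an odd part and therefore differ by a factor that is a power of $2$, meaning one divides the other. A brute-force check of the statement for small $n$ (a quick sketch, standard library only):

```python
from itertools import combinations

def has_dividing_pair(subset):
    s = sorted(subset)
    return any(b % a == 0 for a, b in combinations(s, 2))

def odd_part(m):
    while m % 2 == 0:
        m //= 2
    return m

for n in range(1, 8):
    for s in combinations(range(1, 2 * n + 1), n + 1):
        assert has_dividing_pair(s)
        # only n odd parts are available, so two of the n+1 elements share one
        assert len({odd_part(m) for m in s}) <= n
print("verified for n = 1..7")
```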
|
http://mathoverflow.net/revisions/100391/list
|
Here are a few others:
1. Let $H_n=\sum_{j=1}^n 1/j$. Then for all $n\geq 1$, $$\sum_{d|n}d\leq H_n+(\log H_n)e^{H_n}.$$ Jeff Lagarias showed that this is equivalent to the Riemann hypothesis!
2. Let $x_0=2$, $x_{n+1}=x_n-\frac{1}{x_n}$ for $n\geq 0$. Then $x_n$ is unbounded.
3. The largest integer that cannot be written in the form $xy+xz+yz$, where $x,y,z$ are positive integers, is 462. It is known that there exists at most one such integer $n>462$, which must be greater than $2\cdot 10^{11}$. See J. Borwein and K.-K. S. Choi, On the representations of $xy+yz+xz$, Experiment. Math. 9 (2000), 153-158; http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.em/1046889597.
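None of these items can be settled by computation, but quick numerical sanity checks are easy to run (a sketch of mine, assuming sympy for the arithmetic functions):

```python
from math import exp, log
from sympy import divisor_sigma

# Item 1: the inequality sigma(n) <= H_n + exp(H_n)*log(H_n), checked for small n
# (which of course says nothing about the Riemann hypothesis).
H = 0.0
for n in range(1, 10001):
    H += 1.0 / n
    assert int(divisor_sigma(n)) <= H + exp(H) * log(H) + 1e-9

# Item 2: the orbit x_{n+1} = x_n - 1/x_n wanders for a long time before any large
# excursion; floating-point drift means this is only an illustration, not evidence.
x, steps = 2.0, 0
while 1e-12 < abs(x) < 10 and steps < 10**6:
    x, steps = x - 1.0 / x, steps + 1
print("item 2:", x, "after", steps, "steps")

# Item 3: 462 has no representation xy + xz + yz in positive integers, while 463 does.
def representable(n):
    return any((n - x * y) % (x + y) == 0 and (n - x * y) // (x + y) >= y
               for x in range(1, n) for y in range(x, n) if x * y < n)
print("item 3:", representable(462), representable(463))   # False, True
```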
|
http://mathhelpforum.com/advanced-algebra/177767-division-algorithm-polynomials-print.html
|
# Division algorithm for polynomials
• April 15th 2011, 06:39 AM
worc3247
Division algorithm for polynomials
Let M and N be positive integers with M > N. The division algorithm for Z implies that
there exist integers Q and R such that M = QN + R, where 0 <= R < N. The division algorithm for $\mathbb{R}[x]$ tells us that there exist polynomials q and r such that
x^M -1=q(x^N-1)+r, where r = 0 or deg r < N.
Find q and r.
I understand the division algorithm, but am not quite sure how to go about finding q and r. Help please!
• April 15th 2011, 11:53 AM
Deveno
what happens when you try to carry out the long division of x^M - 1 by x^N - 1?
here is the first step:
x^M - 1 = x^(M-N)(x^N - 1) + ????
• April 15th 2011, 12:09 PM
worc3247
That gives r=x^M-N -1, but then the degree of r is not necessarily less than n.
• April 15th 2011, 01:15 PM
emakarov
Quote:
That gives r=x^M-N -1
No, this does not necessarily give the remainder x^M-N - 1. (This should be written as x^(M-N) - 1 or, in LaTeX style, as x^{M-N} - 1.) I suggest doing several examples, such as (x^10 - 1) / (x^6 - 1), (x^10 - 1) / (x^4 - 1) and (x^10 - 1) / (x^3 - 1).
• April 15th 2011, 01:31 PM
Deveno
right. remember M = QN + R. M-N is not necessarily < N, you have to keep going until it is.
• April 15th 2011, 02:53 PM
worc3247
Ok, but for this general case where you have M-N how are you supposed to know when to stop so that you can get a value for r?
• April 15th 2011, 04:26 PM
emakarov
Your question is a little vague. There is the long division algorithm. Applying it to x^M - 1 and x^N - 1 gives the quotient and the remainder. If you understand the algorithm, you understand when to stop. Basically, when, after subtraction and bringing down the next term from the numerator, you get a polynomial whose degree is smaller than that of the denominator, that is the remainder.
I suggest doing several examples and then forming a hypothesis about the quotient and the remainder in the general case.
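For anyone who wants to generate more examples quickly before forming that hypothesis, sympy's polynomial division does the bookkeeping (a sketch of mine):

```python
from sympy import symbols, div

x = symbols('x')
for M, N in [(10, 6), (10, 4), (10, 3), (17, 5)]:
    q, r = div(x**M - 1, x**N - 1, x)
    print(f"M={M}, N={N}:  q = {q},  r = {r}")
# Compare the exponents that appear in q and r with Q and R from M = QN + R.
```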
|
http://mathoverflow.net/questions/111890?sort=newest
|
## Geometrically connected curve
What is the definition of a geometrically connected curve?
-
6
Seriously, the first hit on google gives you the answer... google.fr/search?q=geometrically+connected – Lierre Nov 9 at 13:10
## 1 Answer
For a variety over a non-algebraically closed field, "geometrically connected" means connected over the algebraic closure.
As an example where this fails, note that the curve $x^2+1=0$ in $\mathbb{A}^2$ is connected over $\mathbb{Q}$, but not over $\mathbb{Q}[i]$, where it becomes $(x+i)(x-i)=0$, which is a union of two lines. Hence this curve is connected but not geometrically connected.
You can also use the same adjective for many other properties, so that you can talk about something being geoemtrically integral, geometrically rational, etc...
-
I think your example doesn't work, at least in the projective plane, where any two lines meet in a point (in this case in the point $[0:0:1]\in\mathbb{P}^1(\mathbb{Q}[i])$). – Qfwfq Nov 9 at 13:01
Projective or not, the two lines in the example meet at `$x=y=0$`. For an example of a connected, non geometrically connected curve, better consider the affine curve over `$\mathbb{Q}$` with affine ring `$\mathbb{Q}(\sqrt{2})[x]$`. – Matthieu Romagny Nov 9 at 13:32
Yes I realise now that I do not properly think through my example as I wrote it in a rush... I have edited it accordingly. – Daniel Loughran Nov 9 at 14:10
|
http://jwbales.us/precal/part5/part5.2.html
|
## Discovering other identities
Other trigonometric identities can be derived from the elementary identities.
For example, an identity for $$\cot A\sin A$$ can be found by replacing $$\cot A$$ with $$\dfrac{\cos A}{\sin A}$$ and simplifying to $$\cos A$$.
Thus $$\cot A\sin A = \cos A$$ is an identity.
## A caveat
Never begin a proof by assuming the truth of that which you are attempting to prove.
The following is an invalid proof of the identity above.
$$\begin{eqnarray*} \cot A\sin A &=& \cos A\\[12pt] \dfrac{\cot A\sin A}{\sin A}&=&\dfrac{\cos A}{\sin A}\\[12pt] \cot A &=& \cot A \end{eqnarray*}$$
This so-called ‘proof’ begins by using the very identity it seeks to prove. The presumption is that if we begin with some statement and go through a sequence of logical inferences and arrive at a true statement, then the original statement must have been true. But this presumption is false. It is possible to begin with a false statement and yet arrive at a true statement by a series of logical inferences. The fact that the final statement is true implies nothing about whether the original statement is true or false. It is a common logical fallacy that only true statements imply true statements. But false statements can imply true statements.
For example, consider the following invalid proof that $$0=1$$.
$$0 = 1$$
Multiplying both sides by $$-1$$ yields
$$( -1 )( 0) = ( -1 )( 1 )$$ thus
$$0 = - 1$$
Since $$0 = 1$$ and $$0 = -1$$, add the two equations to get
$$0 + 0 = 1 + ( -1 )$$, thus
$$0 = 0$$ which is true.
Thus the original statement $$0 = 1$$ must be true.
This is an example of a false statement implying a true statement. These two fallacious ‘proofs’ illustrate why you cannot prove an identity if you begin by using the identity.
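A computer-algebra analogue of the correct approach is to transform one side on its own, or to simplify the difference of the two sides, rather than to manipulate the unproved equation itself; a small sympy sketch of mine:

```python
from sympy import symbols, cot, sin, cos, trigsimp, simplify

A = symbols('A')
lhs = cot(A) * sin(A)          # start from one side only
print(trigsimp(lhs))           # cos(A)
print(simplify(lhs - cos(A)))  # 0, so the two sides agree
```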
## Exercise 5.2.1
Verify that $$\sin^2 A = ( 1 - \cos A )( 1 + \cos A )$$.
See Solution
## Exercise 5.2.2
Verify that $$\dfrac{1+\tan A}{\sec A} = \cos A + \sin A$$
See Solution
## Exercise 5.2.3
Verify that $$\dfrac{1}{\sec A + \tan A}= \sec A - \tan A$$
See Solution
## Exercise 5.2.4
Verify that $$\cos ( \frac{\pi}{2} - A ) \sec A = \cot ( \frac{\pi}{2} - A )$$
See Solution
|
http://unapologetic.wordpress.com/2007/02/24/generators-and-relations/?like=1&source=post_flair&_wpnonce=c1c9c39881
|
# The Unapologetic Mathematician
## Generators and Relations
Now it’s time for the reason why free groups are so amazingly useful. Let $X$ be any set, $F(X)$ be the free group on $X$, and $G$ be any other group. Now, every function $f$ from $X$ into $G$ extends to a unique homomorphism $f: F(X)\rightarrow G$. Just write down any word in $F(X)$, send each letter into $G$ like the function tells you, and multiply together the result!
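As a concrete (and entirely optional) illustration of this extension property, here is a small Python sketch of mine: a function from the two-element set {a, b} into the symmetric group on three letters extends to every word by multiplying the images, with capital letters standing for inverse letters.

```python
def compose(p, q):                  # permutations as tuples: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def invert(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

identity = (0, 1, 2)
images = {'a': (1, 0, 2),           # f(a): a transposition
          'b': (1, 2, 0)}           # f(b): a 3-cycle

def extend(word):
    """The unique homomorphic extension of f to words in the free group on {a, b};
    'A' stands for the inverse of 'a', and similarly for 'B'."""
    g = identity
    for letter in word:
        p = images[letter.lower()]
        g = compose(g, p if letter.islower() else invert(p))
    return g

print(extend('ab'), extend('aBa'), extend('aa'))   # 'aa' lands on the identity
```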
So what does this get us? Well, for one thing every group $G$ is (isomorphic to) a quotient of a free group. If nothing else, consider the free group $F(G)$ on the set of $G$ itself. Then send each element to itself. This extends to a homomorphism $f$ from $F(G)$ to $G$ whose image is clearly all of $G$. Then the First Isomorphism Theorem tells us that $G$ is isomorphic to $F(G)/{\rm Ker}(f)$. That’s pretty inefficient, but it shows that we can write $G$ like that if we want to. How can we do better?
In a moment we’ll need this little technical construction. Remember that not every subgroup is suitable as the kernel of a homomorphism — it needs to be normal. We can beef up any subgroup to a normal one in a straightforward way, though. First notice that the intersection of any collection of normal subgroups is a normal subgroup again (check it). Now if $G$ is a group with subgroup $H$, consider the collection of all normal subgroups of $G$ that contain $H$. There’s always at least one, since $G$ itself is an example. Now take their intersection and call it $N$. This is the smallest normal subgroup of $G$ containing $H$, since it’s contained in every other one. We call it the “normal closure” of $H$ in $G$.
Okay, so what we want is a free group $F$ and a normal subgroup $N$ of $F$ so that $G$ is isomorphic to $F/N$. By the previous paragraph, we can settle for any subgroup and take its normal closure to get our $N$. But this subgroup is a group in its own right, and is itself the image of some homomorphism from another free group $F'$.
Now we’re getting somewhere. Let $X$ and $Y$ be any two sets, so we have the free groups $F(X)$ and $F(Y)$. Now take a function from $Y$ to $F(X)$. This extends to a homomorphism, whose image is some subgroup of $F(X)$. Take the normal closure $N$ of this subgroup and get the quotient $F(X)/N$. We can get any group at all like this! We call the elements of $X$ “generators” and the words in the image of $Y$ “relations”.
The best situation is when a group is “finitely presented” — $X$ and $Y$ are finite sets. In this case we just have to write down the names of elements of $X$, the words on $X$ in the image of $Y$, and the machinery above gives us back a group. Quite a lot of groups arise like this. We write such a presentation as $<x_1,...,x_n|w_1,...,w_r>$ for the $n$ generators $x_i$ and $r$ relations $w_j$.
• The cyclic group $\mathbb{Z}_n$ is $<x|x^n>$.
• The symmetric group $S_n$ is $<s_1,...,s_{n-1}|s_i^2 (1\leq i\leq n-1),s_is_{i+1}s_is_{i+1}s_is_{i+1} (1\leq i\leq n-2),s_is_js_i^{-1}s_j^{-1} (|i-j|\geq 2)>$.
• The free group $F_n$ is $<x_1,...,x_n|>$.
So this should provide a way to tell if groups are isomorphic, right? Wrong. You might think that you should be able to tell when two presentations give isomorphic groups, but in fact it’s known that there’s no way to tell in general. The hangup is in what’s called the “word problem” for a group: given a presentation of a group and a word on the generators, do the relations make the word correspond to the trivial element of the group? It’s known that there is no method that solves this problem for all groups, and that many groups have no method for solving it at all.
Still, presentations by generators and relations are extremely useful for understanding the structure of a given group. As for free groups we can specify a homomorphism from $G$ by defining it on a generating set, though now we have to check that the relations are respected so the image of an element of $G$ doesn’t depend on the word we use to represent it. We can also prove facts about elements of $G$ “by induction”: show a statement holds for a generator and that composition and inversion preserve the truth of the statement. We can’t do everything we’d like with presentations, but they’re still one of the most concrete ways to actually get our hands on a group.
## Comments
1. The whole presentation business only gets more interesting when you look away from groups as well. You can play the same game with algebras, and you’ll get different results for different conditions on the algebras; but with the common denominator that you end up with some sort of Gröbner basis type theory each time.
So, the classical Gröbner bases are basically an answer to the problem “If I have a set X, and consider the algebra FX of all commutative polynomials with the elements in X as variables, then how do I work with the quotient ring FX/I – for some I given by a set of generators for the relations?”
We get, in the end, a highly algorithmic method where we complete the generating set of I until it represents enough of the ideal, and then almost any task is reduced to the Euclidean algorithm for division (with some modifications).
And once you take a good look at it, if all your polynomials are linear – then all this is nothing else than classic first term linear algebra.
Comment by | February 25, 2007 | Reply
3. Maybe I’m missing something, but the development here seems more indirect than necessary. The inefficiency of the F(G)/Ker F construction lies in listing the entire kernel, bit the normal closure construction works just as well an arbitrary subsets of F(G) as on subgroups of it, so, if we’re lucky, we might only need to specify a small or otherwise intelligible subset of Ker F. So why introduce the extraneous set Y and then F(Y)?
I really like this ‘blath’, and the general idea of ‘math for outsiders’ (such as myself).
Comment by Avery Andrews | September 19, 2007 | Reply
|
http://math.stackexchange.com/questions/217291/figure-out-a-function-expression-from-graph-sine-and-cosine?answertab=votes
|
# Figure out a function expression from graph (sine and cosine)
I am trying to recreate the following image in LaTeX (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions.
So far I am sure that the gray line is $\sin x$, and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.
Anyone know a good way to find these functions?
-
## 2 Answers
$$f(x)=\frac{\sin\left(\frac{x-x_0}{k}\right)}{\frac{x-x_0}{k}}\cos\left(\frac{x-x_0}{h}\right)$$
with $x_0=100$, $k=10$, $h=2$
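Before transcribing this into pgfplots, one quick way to check the proposed formula is the small Python/matplotlib sketch below (my own illustration; the assignment of the gray and red envelopes is a guess, since the original image is not reproduced here).

```python
# Sketch: plot the proposed f(x) together with its two factors.
import numpy as np
import matplotlib.pyplot as plt

x0, k, h = 100.0, 10.0, 2.0
x = np.linspace(0.1, 200, 2000)

envelope = np.sinc((x - x0) / (k * np.pi))   # np.sinc(t) = sin(pi t)/(pi t), so this is sin(u)/u with u=(x-x0)/k
carrier = np.cos((x - x0) / h)
f = envelope * carrier

plt.plot(x, envelope, color='red', label='sin(u)/u envelope, u = (x - x0)/k')
plt.plot(x, carrier, color='gray', label='cos((x - x0)/h)')
plt.plot(x, f, color='green', label='product f(x)')
plt.legend()
plt.show()
```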
-
The gray curve has a maximum at $x=0$, so I'd use a cosine. All you need to do to write its function is determine the frequency. It completes one cycle ($2\pi$ radians) at the next peak.
The red line does indeed look like the form $\sin(x)/x$, but note that the peak is at $x=100$, and that the zero crossings occur three times for every $100$ units of $x$ (pretending there's a zero crossing at the peak; the sine has one even if the whole function doesn't). So you need to shift the position of the function and set its frequency: $\frac{\sin(a(x-100))}{a(x-100)}$. All you have to do is figure out $a$ such that you get $\pi$ radians when $x$ changes by $33.333...$
The green curve looks like the product of the gray and red curves.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469491243362427, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/116313?sort=votes
|
## Homotopy Transfer Theorem for Differential Graded Associative Algebras
As in Algebra+Homotopy=Operad by Bruno Vallette, let $A$ with multiplication $\nu$ be a differential graded associative algebra equipped with a degree $+1$ map $h$, and let $H$ be a chain complex such that there exist chain maps $i$ and $p$ exhibiting $H$ as a homotopy retract of $A$, with $h$ a contracting homotopy between $ip$ and the identity (so that $\mathrm{id}_A - ip = dh + hd$),
and I work in characteristic 2 to make everything easier. Define
$$\mu_2=p\circ\nu\circ(i\otimes i):H\otimes H\to H,$$
and in general, $\mu_n$ is the sum, over the trees in $PBT_n$, of the composites obtained by decorating the leaves with $i$, the internal vertices with $\nu$, the internal edges with $h$, and the root with $p$; it
is a degree $+(n-2)$ map, where $PBT_n$ means binary trees with $n$ nodes and Vallette's summand on the right is an example of a summand for $n=5.$ For $f\in\hom(H^{\otimes n},H),$ define $\partial f=d\circ f + f\circ d_{H^{\otimes n}}$; remember that we are in characteristic 2.
One way to visualize $\partial\mu_n$ is that the one term decorates the leaves with $d$'s, as $d$ is a derivation for the raw tensor product, and the other puts a $d$ at the root, which then propagates upwards, as $d$ is a derivation for $\nu.$
The Homotopy Transfer Theorem for Differential Graded Associative Algebras is that $H$ equipped with the $\mu_n$ is an $A_\infty$ algebra, which means precisely that
$$\sum_{j+k=n+1}\mu_j\circ\mu_k=0 \quad\text{for every } n,$$
with no signs since we are in characteristic 2.
All images have been directly screencapped from Vallette's paper. He writes that it should be an "easy and pedagogical" exercise to prove this theorem, but I'm getting caught in the thicket even in this characteristic 2 case where there are far fewer $\pm$'s to keep track of. I was wondering if anyone could provide me with any insights as to how to proceed without trees popping up all over the place occluding the forest.
-
## 2 Answers
There is a systematic graphical notation that allows the tracking of signs, which I will mention at the end of this answer. But before doing so, let me outline the situation when $2 = 0$.
Note first that since $(A,\nu)$ is strictly associative, $\mu_0 = 0$ and $\mu_1 = d_H = d : H \to H$. By convention, if I have multilinear maps $f: H^{\otimes k}\to H$ and $g: H^{\otimes l}\to H$, then I will write $f\circ g : H^{\otimes(k+l -1)} \to H$ for: $$(f\circ g) (x_1\otimes \cdots \otimes x_{k+l-1}) = \sum_{i=1}^k f\bigl(x_1\otimes \dots \otimes x_{i-1} \otimes g(x_i\otimes \dots \otimes x_{i+l-1}) \otimes x_{i+l} \otimes \cdots \otimes x_{k+l-1}\bigr)$$ This has a useful graphical notation, wherein the composition is the sum over all rooted planar trees with an $f$ at the bottom node and precisely one $g$ at one of the upper nodes.
Then the axiom to be an $A_\infty$-algebra in characteristic $2$ is $0 = \sum_{j=0}^{n+1} \mu_j\circ \mu_{n+1-j}$, or, since $\mu_0 = 0$ and $\mu_1 = d$: $$[d,\mu_n] = \sum_{j=2}^{n-1} \mu_j \circ \mu_{n+1-j}$$ The right-hand side is a sum over all rooted planar trees with $n$ leaves and precisely two nodes, each of which has at least two branches from it.
To check this, the first thing to convince yourself is that the operator $[d,-] : f \mapsto d\circ f + f\circ d$ is a derivation of composition and tensor, so that to apply $[d,-]$ to some large diagram, you sum all diagrams you get by replacing one component of your original diagram by $[d,-]$ of it. Note also that $[d,-]$ commutes with (i.e. annihilates) $i$, $p$, and $\nu$. So when you work out $[d,\mu_n]$, you get a sum over diagrams that look like $\mu_n$ (i.e. planar rooted trees with $n$ leaves, each node has two branches, and interior edges labeled by $h$), except one of the interior edges has been replaced by $[d,h] = \mathrm{id}_A + ip$.
Now, it should be completely clear that the diagrams where the $h$ is replaced by an $ip$ are precisely the diagrams appearing in $\sum_{j=2}^{n-1} \mu_j \circ \mu_{n+1-j}$. (If this is not clear, let me know, and I will try to make it clearer.)
Finally, we must dispense with the diagrams in which an $h$ is replaced by an $\mathrm{id}$. For any such diagram, consider contracting it along the offending $\mathrm{id}$ vertex, to produce a node with three branches. Except the resulting diagram with the trivalent vertex can be produced in two ways, corresponding to the two planar ways of blowing up a rooted node with three branches into two two-branch nodes. So, after sorting all of your offending diagrams into such pairs, you get a sum of diagrams that looks like a $\mu_n$-type sum, except one vertex has three branches. What is this vertex labeled by? Why, $\nu \circ \nu$, of course, which is a sum of two terms. On the other hand, $\nu \circ \nu = 0$ by the associativity law for $(A,\nu)$.
In characteristic not equal to $2$, the exact same argument works, but you must find a good convention / notation for signs. The best notation that I know is as follows. It should, of course, already be understood that the solid "$H$" or "$A$" edges extend to "infinity" at the top and bottom of the page. You should additionally draw diagrams with some other color of edge (I usually used "dashed") that records the degrees of operators — so this "dashed" edge should carry an arrow denoting its direction. A vertex that raises homological degree by $n$ is required to receive $n$ dashed edges, and a vertex that lowers homological degree by $n$ is required to emit $n$ dashed edges. Free dashed edges are sent off to "infinity" at (say) the left-hand side of the page, and the order from top to bottom that the free dashed edges arrive is important. Just as you cannot add diagrams whose numbers of input and output "$H$" strands mismatch, you similarly must have the same sequence of dashed edges. (In categorical language, what I'm saying is that you only work with "global" elements of endomorphism spaces, which is to say actual morphisms in the category of homologically-graded abelian groups, but that you give yourself access to the objects of this category which are lines in degree $\pm 1$.)
Now whenever two edges cross, something happens with signs. The notation basically takes care of this, but if you ever insist on working with "homogeneous elements" (which is a bad habit — it's better to work more categorically) the convention is that as an element runs down the "wire" of a solid edge, when it passes through a dashed edge it remains unchanged if it is of even degree and changes sign if it is of odd degree. The notations that do matter are:
1. A closed dashed circle can be removed for a factor of $-1$.
2. A dashed crossing can be resolved for a factor of $-1$. (Since dashed edges are directed, any dashed crossing has a unique resolution.)
For example, the operator $d$ emits a dashed edge, and the operator $h$ receives a dashed edge. Thus an equation like "$\mathrm{id} - ip = dh + hd$" is nonsense: the left-hand side has no dashed edges running to infinity, whereas on the right-hand side the first summand emits an edge and then receives one, and the second summand does those in the opposite order. To sum the two terms on the right-hand side you have to at least get their edges-at-infinity into the same order, which you can do by adding a crossing (but remember that resolving that crossing changes a sign). To make the two sides agree, you connect up the two dashed edges, and again you should think a moment about signs. The correct right-hand side to "$\mathrm{id} - ip = dh + hd$" is:
Yes, the sign is correct. (Incidentally, I'm lifting these images from my thesis, which works out a slightly different question, and so the colors and numbers are not for this post.)
Anyway, I'll leave it as an exercise to write out diagrams for $A_\infty$ algebras in this notation, and to get all the signs right. (Hint: there should be no "weird" signs.) Part of the reason that I'll leave it as an exercise is that there's not really a unique correct answer: you make a sign convention, and work with it.
-
If your tree had no $h$ in it, then when $d$ propagated upward, it would pass unchanged through $i$ and $p$, which are chain maps, and pass as a derivation through the product so you'd get a sum of terms which would cancel with the terms that have $d$ on top. Something similar will happen when you have $h$ on the edges, but you will get extra terms.
When you have $d$ below an $h$, that is equal to a sum of three terms. One is $d$ above an $h$, one is $ip$, and one is $\mathrm{Id}$. This is a local phenomenon that happens within the tree. We can take the term with $d$ above the $h$ and continue to allow the $d$ to propagate upward. At the end, the $d$s reach the top and cancel with the terms that have $d$ on top, so the sum turns out to be made up of two kinds of terms:
Terms where one $h$ has been replaced by $ip$
Terms where one $h$ has been replaced by $\mathrm{Id}$
Each $h$ is passed through precisely once by a $d$ as it propagates upward, so we have this sum over all internal edges of our tree.
The sum of terms of the first type is exactly the right side of your screen capture. The terms of the second type will cancel in pairs by the associativity of $\nu$. That is, there will be precisely two trees in the overall sum over $PBT_n$ that give rise to each term of the second type.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 100, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9589459896087646, "perplexity_flag": "head"}
|
http://citizendia.org/Diffraction
|
The intensity pattern formed on a screen by diffraction from a square aperture
Colors seen in a spider web are partially due to diffraction, according to some analyses. [1]
Diffraction is normally taken to refer to various phenomena which occur when a wave encounters an obstacle. Very similar effects are observed when there is an alteration in the properties of the medium in which the wave is travelling, for example a variation in refractive index for light waves or in acoustic impedance for sound waves, and these can also be referred to as diffraction effects. Diffraction occurs with all waves, including sound waves, water waves, and electromagnetic waves such as visible light, x-rays and radio waves. As physical objects have wave-like properties, diffraction also occurs with matter and can be studied according to the principles of quantum mechanics.
While diffraction occurs whenever propagating waves encounter such changes, its effects are generally most pronounced for waves where the wavelength is on the order of the size of the diffracting objects. The complex patterns resulting from the intensity of a diffracted wave are a result of the superposition, or interference, of different parts of a wave that traveled to the observer by different paths.
The formalism of diffraction can also describe the way in which waves of finite extent propagate in free space. For example, the expanding profile of a laser beam, the beam shape of a radar antenna and the field of view of an ultrasonic transducer are all explained by diffraction theory.
## Examples of diffraction in everyday life
The effects of diffraction can be readily seen in everyday life. The most colorful examples of diffraction are those involving light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern we see when looking at a disk. This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example. Diffraction in the atmosphere by small particles can cause a bright ring to be visible around a bright light source like the sun or the moon. A shadow of a solid object, using light from a compact source, shows small fringes near its edges. The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. All these effects are a consequence of the fact that light is a wave.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects; this is the reason we can still hear someone calling us even if we are hiding behind a tree. Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
## History
Thomas Young's sketch of two-slit diffraction, which he presented to the Royal Society in 1803
The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. [2][3] Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating. In 1803 Thomas Young did his famous experiment observing interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, published in 1815 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's particle theory.
## The mechanism of diffraction
Photograph of single-slit diffraction in a circular ripple tank
Diffraction arises because of the way in which waves propagate; this is described by the Huygens–Fresnel principle. The propagation of a wave can be visualized by considering every point on a wavefront as a point source for a secondary radial wave. The subsequent propagation and addition of all these radial waves form the new wavefront. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves, an effect which is often known as wave interference. The summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima.
To determine the form of a diffraction pattern, we must determine the phase and amplitude of each of the Huygens wavelets at each point in space and then find the sum of these waves. There are various analytical models which can be used to do this, including the Fraunhofer diffraction equation for the far field and the Fresnel diffraction equation for the near field. Most configurations cannot be solved analytically; solutions can be found using various numerical methods, including finite element and boundary element methods.
## Diffraction systems
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and in particular, the conditions in which the phase difference equals half a cycle in which case waves will cancel one another out.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two dimensional problem. For water waves, this is already the case, water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes we will have to take into account the full three dimensional nature of the problem.
Some of the simpler cases of diffraction are considered below.
### Single-slit diffraction
Main article: Diffraction formalism
Numerical approximation of diffraction pattern from a slit of width four wavelengths with an incident plane wave. The main central beam, nulls, and phase reversals are apparent.
Graph and image of single-slit diffraction
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves, and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity.
A slit which is wider than a wavelength has a large number of point sources spaced evenly across the width of the slit. The light at a given angle is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by more than 2π, we expect to find minima and maxima in the diffracted light.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit when the path difference between them is equal to λ/2. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is given by (d sinθ)/2 so that the minimum intensity occurs at an angle θmin given by
$d \sin \theta_{min} = \lambda \,$
where d is the width of the slit.
A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles θn given by
$d \sin \theta_{n} = n\lambda \,$
where n is an integer greater than zero.
There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction integral as
$I(\theta) = I_0 \left[ \mathrm{sinc} \left( \frac{\pi d}{\lambda} \sin \theta \right) \right]^2$
where the sinc function is given by sinc(x) = sin(x)/x.
It should be noted that this analysis applies only to the far field, i.e. a significant distance from the diffracting slit.
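As a quick numerical illustration (not part of the original article; the 500 nm wavelength is an arbitrary choice), the far-field intensity above can be evaluated directly, and the minima fall exactly where d sin θ = nλ:

```python
# Sketch: single-slit far-field intensity and the angles of its first minima.
import numpy as np

wavelength = 500e-9                # illustrative: 500 nm light
d = 4 * wavelength                 # slit four wavelengths wide, as in the figure above
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)

arg = np.pi * d * np.sin(theta) / wavelength
intensity = np.sinc(arg / np.pi) ** 2      # np.sinc(x) = sin(pi x)/(pi x)

# Minima where d*sin(theta) = n*lambda, n = 1, 2, 3:
n = np.arange(1, 4)
print(np.degrees(np.arcsin(n * wavelength / d)))   # about 14.5, 30.0, 48.6 degrees
```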
2-slit and 5-slit diffraction of red laser light
### Diffraction grating
Main article: Diffraction grating
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles θm which are given by the grating equation
$d \left( \sin{\theta_m} + \sin{\theta_i} \right) = m \lambda.$
where θi is the angle at which the light is incident, d is the separation of grating elements and m is an integer which can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns.
The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
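As a small worked example (my own, with illustrative numbers rather than values from the article), the grating equation can be solved for the propagating orders of a common laboratory setup:

```python
# Sketch: diffraction orders of a 600 lines/mm grating at normal incidence.
import numpy as np

wavelength = 633e-9                 # He-Ne red laser line
d = 1e-3 / 600                      # grating spacing for 600 lines per mm
theta_i = 0.0                       # normal incidence

for m in range(-4, 5):
    s = m * wavelength / d - np.sin(theta_i)   # sin(theta_m) from d(sin θm + sin θi) = m λ
    if abs(s) <= 1:                             # only these orders propagate
        print(m, round(np.degrees(np.arcsin(s)), 2))
```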
A computer-generated image of an Airy disk
### Diffraction by a circular aperture
Main article: Airy disk
The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy Disk. The variation in intensity with angle is given by
$I(\theta) = I_0 \left ( \frac{2 J_1(ka \sin \theta)}{ka \sin \theta} \right )^2$
where a is the radius of the circular aperture, k is equal to 2π/λ and J1 is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
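The Airy pattern is straightforward to evaluate numerically; the sketch below (not from the article, parameter values chosen only for illustration) uses SciPy's Bessel function J1 and checks the familiar position of the first dark ring:

```python
# Sketch: Airy pattern I(theta) = I0 * (2 J1(k a sin θ) / (k a sin θ))^2.
import numpy as np
from scipy.special import j1

wavelength = 550e-9
a = 1e-3                               # aperture radius: 1 mm
k = 2 * np.pi / wavelength

theta = np.linspace(1e-7, 2e-3, 2000)  # small angles; avoid the removable 0/0 at theta = 0
x = k * a * np.sin(theta)
intensity = (2 * j1(x) / x) ** 2

# First dark ring: k a sin(theta) ≈ 3.8317, i.e. sin(theta) ≈ 1.22 λ / (2a)
print(1.22 * wavelength / (2 * a))     # ≈ 3.4e-4 rad
```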
### Propagation of a laser beam
The way in which the profile of a laser beam changes as it propagates is determined by diffraction. The output mirror of the laser is an aperture, and the subsequent beam shape is determined by that aperture. Hence, the smaller the output beam, the quicker it diverges. Diode lasers have much greater divergence than He-Ne lasers for this reason.
Paradoxically, it is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is co-incident with that of the first lens. The resulting beam has a larger aperture, and hence a lower divergence.
### Diffraction limited imaging
Main article: Diffraction-limited system
The Airy disc around each of the stars from the 2.56 m telescope aperture can be seen in this lucky image of the binary star zeta Boötis.
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy pattern with a central spot of diameter
$d = 1.22 \lambda \frac{f}{a},\,$
where λ is the wavelength of the light, f is the focal length of the lens, and a is the diameter of the beam of light, or (if the beam is filling the lens) the diameter of the lens.
This is why telescopes have very large lenses or mirrors, and why optical microscopes are limited in the detail which they can see.
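A quick plug-in of representative numbers (my own choices, not values from the article) shows the scale of this limit:

```python
# Sketch: diffraction-limited spot diameter d = 1.22 * lambda * f / a.
wavelength = 550e-9    # green light
f = 0.1                # 10 cm focal length
a = 0.025              # 25 mm beam (lens) diameter

d_spot = 1.22 * wavelength * f / a
print(d_spot)          # ≈ 2.7e-6 m, i.e. a few microns
```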
### Speckle patterns
Main article: speckle pattern
The speckle pattern which is seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
## Common features of diffraction patterns
Several qualitative observations can be made of diffraction in general:
• The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction, in other words: the smaller the diffracting object the 'wider' the resulting diffraction pattern and vice versa. (More precisely, this is true of the sines of the angles.)
• The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
• When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing between the center of one slit and the next.
## Particle diffraction
See also: neutron diffraction and electron diffraction
Quantum theory tells us that every particle exhibits wave properties. In particular, massive particles can interfere and therefore diffract. Diffraction of electrons and neutrons stood as one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a particle is the de Broglie wavelength
$\lambda=\frac{h}{p} \,$
where h is Planck's constant and p is the momentum of the particle (mass × velocity for slow-moving particles). For most macroscopic objects, this wavelength is so short that it is not meaningful to assign a wavelength to them. A sodium atom traveling at about 3000 m/s would have a de Broglie wavelength of about 5 picometers.
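The sodium-atom figure quoted above is easy to verify (a quick check of my own, using standard constants):

```python
# Sketch: de Broglie wavelength lambda = h / (m v) for a sodium atom at 3000 m/s.
h = 6.626e-34          # Planck's constant, J*s
m_Na = 23 * 1.66e-27   # mass of a sodium atom (23 u), kg
v = 3000.0             # m/s

print(h / (m_Na * v))  # ≈ 5.8e-12 m, i.e. a few picometers
```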
Because the wavelength for even the smallest of macroscopic objects is extremely small, diffraction of matter waves is only visible for small particles, like electrons, neutrons, atoms and small molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic crystal structure of solids and large molecules like proteins.
Relatively recently, larger molecules like buckyballs,[4] have been shown to diffract. Currently, research is underway into the diffraction of viruses, which, being huge relative to electrons and other more commonly diffracted particles, have tiny wavelengths so must be made to travel very slowly through an extremely narrow slit in order to diffract.
## Bragg diffraction
Following Bragg's law, each dot (or reflection), in this diffraction pattern forms from the constructive interference of X-rays passing through a crystal. The data can be used to determine the crystal's atomic structure.
For more details on this topic, see Bragg diffraction.
Diffraction from a three dimensional periodic structure such as atoms in a crystal is called Bragg diffraction. It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from different crystal planes. The condition of constructive interference is given by Bragg's law:
$m \lambda = 2 d \sin \theta \,$
where
λ is the wavelength,
d is the distance between crystal planes,
θ is the angle of the diffracted wave,
and m is an integer known as the order of the diffracted beam.
Bragg diffraction may be carried out using either light of very short wavelength like x-rays or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing[5]. The pattern produced gives information of the separations of crystallographic planes d, allowing one to deduce the crystal structure. Diffraction contrast, in electron microscopes and x-topography devices in particular, is also a powerful tool for examining individual defects and local strain fields in crystals.
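As a small illustration of using Bragg's law in practice (values are my own choices, roughly typical of Cu K-alpha X-rays and a 2 Å plane spacing), one can solve for the angles of the diffracted orders:

```python
# Sketch: solve m*lambda = 2 d sin(theta) for the Bragg angles of each order.
import numpy as np

wavelength = 1.54e-10   # ~Cu K-alpha X-rays
d = 2.0e-10             # example plane spacing, 2 Angstrom

for m in (1, 2):
    s = m * wavelength / (2 * d)
    if s <= 1:
        print(m, round(np.degrees(np.arcsin(s)), 2))   # theta for each order
```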
## Coherence
Main article: Coherence (physics)
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent.
The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
If waves are emitted from an extended source this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double slit experiment this would mean that if the transverse coherence length is smaller than the spacing between the two slits the resulting pattern on a screen would look like two single slit diffraction patterns.
In the case of particles like electrons, neutrons and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
## References
1. ^ Dietrich Zawischa. Optical effects on spider webs. Retrieved on 2007-09-21.
2. ^ Jean Louis Aubert (1760). Memoires pour l'histoire des sciences et des beaux arts. Paris: Impr. de S. A. S. ; Chez E. Ganeau, 149.
3. ^ Sir David Brewster (1831). A Treatise on Optics. London: Longman, Rees, Orme, Brown & Green and John Taylor, 95.
4. ^ Brezger, B.; Hackermüller, L.; Uttenthaler, S.; Petschinka, J.; Arndt, M.; Zeilinger, A. (February 2002). "Matter-Wave Interferometer for Large Molecules" (reprint). Physical Review Letters 88 (10): 100404.
5. ^ John M. Cowley (1975) Diffraction physics (North-Holland, Amsterdam) ISBN 0 444 10791 6
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330891370773315, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/17305?sort=oldest
|
## ray class field of rational function field
Let $f \in \mathbf{F}_q[T]$ be irreducible. I know that the ray class field for $\mathrm{Cl}((f)) \cong (\mathbf{F}_q[T]/(f))^\times$ can be constructed by adjoining torsion points of a Carlitz module. Is there an easy explicit minimal polynomial for a generator of this extension?
-
## 1 Answer
The minimal polynomial is $\phi_f(X)/X$, where $\phi_g$ (the Carlitz module) is defined by being $\mathbb{F}_q$-linear in $g$ and satisfying
$\phi_{T^{n+1}} = \phi_T(\phi_{T^n})$ and $\phi_T =X^q+TX$.
It even has the bonus of being an Eisenstein polynomial at $f$.
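A concrete sanity check of this recipe, in the smallest case I could pick ($q=2$ and $f = T^2+T+1$, which is my own choice of irreducible and not from the answer), can be done in SymPy; reducing mod $q$ at the end is equivalent to working in characteristic $q$ throughout:

```python
# Sketch: build the Carlitz module recursively and read off phi_f(X)/X for q = 2.
from sympy import symbols, Poly, expand

T, X = symbols('T X')
q = 2

def phi_T(p):
    # phi_T sends a polynomial p(X) to p**q + T*p (composition with X**q + T*X)
    return expand(p**q + T * p)

def phi_power(n):
    # phi_{T^n}(X), via phi_{T^{n+1}} = phi_T(phi_{T^n}) and phi_1 = X
    p = X
    for _ in range(n):
        p = phi_T(p)
    return p

# f = T^2 + T + 1 is irreducible over F_2; phi is F_q-linear in its argument,
# so phi_f = phi_{T^2} + phi_T + phi_1.
phi_f = phi_power(2) + phi_power(1) + phi_power(0)
minimal_poly = Poly(expand(phi_f / X), X, T, modulus=q)
print(minimal_poly)   # equals X**3 + (T**2+T+1)*X + (T**2+T+1): Eisenstein at f, as claimed
```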
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8816692233085632, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/Modern_Physics/Characteristics_of_Relativistic_Waves
|
# Modern Physics/Characteristics of Relativistic Waves
Special Relativity
1 - 2 - 3 - 4 - 5 - 6
In classical physics, ω and k for light are related by
$\omega = c k \,$
In relativistic physics, we've seen that for waves with no special reference frame, such as light, ω and k are related by
$\omega^2 = c^2 k^2 + \mu^2 \,$
If μ=0 then the relativistic equation reduces to the classical, so we can assume that, for light, μ does equal zero.
This means that light does not have a minimum frequency.
If μ is not zero then the waves being described are dispersive. The phase speed is
$u_p = \frac{\omega}{k} = \sqrt{c^2 + \frac{\mu^2}{k^2}}$
This phase speed always exceeds c, which at first may seem like an unphysical conclusion. However, the group velocity of the wave is
$u_g = \frac{d\omega}{dk} = \frac{kc^2}{\sqrt{k^2c^2+\mu^2}} = \frac{kc^2}{\omega} = \frac{c^2}{u_p}$
which is always less than c. Since wave packets and hence signals propagate at the group velocity, waves of this type are physically reasonable even though the phase speed exceeds the speed of light.
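The relation between the two speeds is easy to check numerically; the short sketch below (my own illustration, with an arbitrary rest frequency μ) confirms that the phase and group speeds defined above always multiply to c², with u_p > c and u_g < c:

```python
# Sketch: verify u_p * u_g = c^2 for the dispersion relation omega^2 = c^2 k^2 + mu^2.
import numpy as np

c, mu = 3.0e8, 5.0e10        # illustrative rest frequency mu (rad/s)
k = np.linspace(1.0, 1000.0, 5)

omega = np.sqrt(c**2 * k**2 + mu**2)
u_p = omega / k
u_g = k * c**2 / omega

print(np.allclose(u_p * u_g, c**2))   # True
print(np.all(u_p > c), np.all(u_g < c))
```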
Another interesting property of such waves is that the wave four-vector is parallel to the world line of a wave packet in spacetime. This is easily shown by the following argument.
The spacelike component of a wave four-vector is k, while the timelike component is ω/c . The slope of the four-vector on a spacetime diagram is therefore ω/kc. However, the slope of the world line of a wave packet moving with group velocity is c/ug, which is also ω/kc .
Note that when k is zero we have ω=μ. In this case the group velocity of the wave is zero. For this reason we sometimes call μ the rest frequency of the wave.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393036365509033, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/139995/representation-of-compactly-supported-distribution
|
# Representation of compactly supported distribution
Is this true?
Any compactly supported distribution $T\in \cal D'$ can be represented as a finite sum of partial derivatives of functions.
-
If $T$ is compactly supported, then $T$ can be extended to the space of infinitely differentiable functions, and hence has finite order. – Davide Giraudo May 2 '12 at 17:50
What do you mean exactly by sum of partial derivatives of functions? – Davide Giraudo May 2 '12 at 19:18
– Yimin May 2 '12 at 19:25
The link doesn't work. – Davide Giraudo May 2 '12 at 20:59
Eh... maybe you need to edit the link, just retype the ".pdf"; it is strange because the link is correct. people.oregonstate.edu/~peterseb/mth627/docs/… – Yimin May 3 '12 at 1:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9074539542198181, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/90794/is-a-semicontinuous-real-function-borel-measurable/91669
|
## Is a semicontinuous real function Borel measurable?
Let $f(x,u): [0,1]^2 \mapsto \mathbb{R}$ be a continuous function.
[Q] Is $g(x) = \inf_{u\in [0,1]} f(x,u)$ always Borel measurable? If not, can one find a counter-example?
Note that, for any $c$, we have $$(x: g(x) < c) = \text{Proj}_x ((x,u): f(x,u) < c),$$ where $\text{Proj}_x$ is a projection operator to $x$-axis. In the context of measurable selection theorem, the projection of Borel set $((x,u): f(x,u) < c)$ of $\mathbb{R}^2$ is not necessarily a Borel set of $\mathbb{R}$. But, I can not find a counter-example.
If there exists a proper counter-example, then it also implies that a semicontinuous real function is not necessarily Borel measurable.
Thanks.
-
Do you have an example where $g$ is not continuous? – Rami Mar 10 2012 at 5:27
If you think I answered your question, please accept it as the answer officially. Thanks! – GH Mar 11 2012 at 8:01
## 3 Answers
We have that $g(x) = \inf_{u\in [0,1]\cap\mathbb{Q}} f(x,u)$, because $f(x,u)$ is continuous. This shows immediately that $g(x)$ is Borel, in fact Baire-1 because it is the pointwise limit of continuous functions (since $\mathbb{Q}$ is countable).
In general, any upper semi-continuous function $g(x)$ is Borel, in fact Baire-1. To see this, note first that each level set `$\{x:g(x)\geq c\}$` is closed, hence `$\{x:g(x)>c\}$` is an $F_\sigma$-set, `$\{x:a<g(x)<b\}$` is the intersection of two $F_\sigma$'s which is $F_\sigma$, hence the inverse image of any open set is a countable union of $F_\sigma$'s which is $F_\sigma$.
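The first point is also easy to see numerically; the toy sketch below (my own, with an arbitrary continuous f) illustrates, without of course proving anything, how the infimum over finer and finer rational grids in $u$ converges to $g(x)$:

```python
# Sketch: inf over rationals k/n approximates g(x) = inf_u f(x, u) for continuous f.
import numpy as np

f = lambda x, u: np.cos(5 * x * u) + (u - x) ** 2   # any continuous f on [0,1]^2
x = 0.37

for n in (10, 100, 1000, 10000):
    u = np.arange(n + 1) / n                        # the rationals k/n in [0, 1]
    print(n, f(x, u).min())                         # non-increasing, tends to g(x)
```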
-
GH. thanks for your answer. – kenneth Mar 11 2012 at 4:32
I think that the answer is positive:
It is enough to show that the set $( x | g(x) < c )$ is Borel. As you said, it is an image under $Proj_x$ of an open set $U$. Divide $[0,1]^2$ into a union of its interior $(0,1)^2$ and the boundary. Correspondingly divide $U$ into $U_0:= (0,1)^2 \cap U$ and its complement $Z$. It is enough to show that the image of each of them under $Proj_x$ is Borel, which is evident.
-
In fact the division of $U$ into 2 sets is unnecessary and the image of $U$ is just open. This does not prove continuity yet since it is not enough to check continuity on sets like this. – Rami Mar 10 2012 at 5:23
I did not explain why it is enough to show that $(x|g(x) < c)$ is Borel. I now understand that this is probably the point that you were interested in. But it is explained in the answer of GH so there is no point repeating it – Rami Mar 10 2012 at 5:32
I think every (lower) semicontinuous function $f:X \to \mathbb{R}$ is Borel measurable, since you have the following characterization: for every $a \in \mathbb{R}$ the set $$f^{-1}((-\infty, a])$$ is closed in the topology that you are considering in $X$.
Since you only have to check the measurability property for a generating class of the Borelians in $\mathbb{R}$ you are done.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.959087610244751, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/97259/differential-equations-exact-differential-equations
|
# Differential equations - exact differential equations
I am self-studying ODEs from the Boyce and DiPrima book and am working through the exercises in chapter 2.6.
I couldn't understand how question 17 is derived. I checked the solution manual and found the same answer that I had already found by myself, but what the question asks is different.
The integral in the solution manual is not definite, whereas the question asks me to show how a definite integral is derived, and I am not sure exactly how that definite integral is obtained.
M(s,y0): this expression in the question makes me suspicious.
I uploaded the question to scribd because it is too long to write together with the theorem they refer to.
http://www.scribd.com/fullscreen/77494675?access_key=key-1fqwhkmcex23ufgyz0sh
Can anyone show me how the definite integral the question asks about can be derived? I also put the solution manual's answer in a pdf.
-
## 1 Answer
I'm pretty shaky with DEs myself, and the notation always confuses the heck out of me, but nobody else has answered this yet, so I thought I'd take a stab at it. Hopefully my answer will at least give you something to work with (or encourage better answers):
The $\int_{x_0}^{x} M(s, y_0) \, ds$ is probably just changing the name of the first variable inside $M$'s parentheses. It is used to get the result in the correct letter, $x$, after integrating.
Note also that the book probably uses the notation that $x$ is a variable, and $x_0$ is a fixed unknown (a.k.a. constant).
Now let me change notations here for a minute. Have you seen where sometimes they represent a function with a lowercase letter and the function's integral with an uppercase letter (e.g., $M'(x, y) = m(x, y)$)? I'm going to use that notation and try to find $\int_{x_0}^{x} m(s, y_0) \, ds$.
If the integral of $m(s, y_0)$ is $M(s, y_0)$ (that is, $\int m(s, y_0) \, ds = M(s, y_0)$), then
$$\int_{x_0}^{x} m(s, y_0) \, ds = \left. M(s, y_0) \right|_{x_0}^{x} = M(x, y_0) - M(x_0, y_0)$$
Since the last term, $M(x_0, y_0)$, is a number (aka constant), taking the derivative of this with respect to (wrt) $x$ yields $m(x, y_0)$.
Next, in the textbook's answer they used the definite integral of $N$ to get the value for $\psi$. That is if the partial derivative of $\psi$ wrt $y$ equals $N$ (e.g. $\frac{\partial \psi}{\partial y} = N$), then
$$\psi(x, y) = \int_{y_0}^{y} \frac{\partial \psi}{\partial y} \, dy = \int_{y_0}^{y} N(x, t) \, dt = \int N(x, y) \, dy - \int N(x, y_0) \, dy$$
The first term on the right hand side (rhs), $\int N(x, y) \, dy$, is written that way because we don't know the integral of $N$ (it's the capital of capital $n$, or a really big $N$ :/ ). It's not an indefinite integral in the usual sense, it's just a notation to indicate "the function that you'd get when you integrate $N(x, t)$ with respect to $t$ and then evaluate that function at $t = y$".
The second term, $\int N(x, y_0) \, dy$, is obtained the same way and it means "the function you get when you integrate $N(x, t)$ wrt $t$ and then evaluate the function at $t = y_0$". It is a function of $x$, and I think they just renamed it $h$, so that $h(x) = \int N(x, y_0) \, dy$.
Next, they took the partial derivative of $\psi$ wrt $x$ and set it equal to $M$ and solved for $h'(x)$. I don't understand why they reversed the integral and partial derivative signs, but I guess that's allowed, and I don't understand the rest of the book's answer from this point on. But I hope I at least clarified why the answer looked like it was using indefinite integrals. Maybe someone can now chip in with an explanation of the rest of the text's answer.
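For what it's worth, the definite-integral construction itself can be checked symbolically. The sketch below (my own example M and N, not the ones from question 17) verifies that $\psi(x,y) = \int_{x_0}^{x} M(s, y_0)\,ds + \int_{y_0}^{y} N(x, t)\,dt$ has the right partial derivatives whenever the exactness condition $M_y = N_x$ holds:

```python
# Sketch: check psi_x = M and psi_y = N for an exact pair (M, N).
import sympy as sp

x, y, s, t = sp.symbols('x y s t')
x0, y0 = sp.symbols('x0 y0')

M = 2*x*y + sp.cos(x)        # exact pair: M_y = 2x = N_x
N = x**2 + 3*y**2
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

psi = sp.integrate(M.subs({x: s, y: y0}), (s, x0, x)) + \
      sp.integrate(N.subs(y, t), (t, y0, y))

print(sp.simplify(sp.diff(psi, x) - M))   # 0
print(sp.simplify(sp.diff(psi, y) - N))   # 0
```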
-
Welcome to math.SE. You can typeset mathematics using MathJax by enclosing LaTeX code in `$` or `$$`. – Zhen Lin Jan 8 '12 at 3:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499186277389526, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/53073/diffusion-sample-paths-as-deformed-brownian-sample-paths/69299
|
## Diffusion sample paths as deformed Brownian sample paths
Suppose $X$ is a non-explosive diffusion with dynamics
$dX_t = \mu(X_t)dt + \sigma(X_t)dW_t$,
where $W$ is a standard Brownian motion. My intuition about $X$ is that if $\mu$ and $\sigma$ are sufficiently nice, then the sample paths of $X$ are in some sense "deformed" sample paths of $W$. Is there any way to formalise this idea? For example, is it possible to define a suitable topology on sample paths of $W$ and construct diffusion sample paths $X(\omega)$ as homeomorphisms of $W(\omega)$?
Part of the motivation for this question comes from the observation that it's possible to do something very similar in the discrete-time case. Given the Euler approximation
$\Delta X_{t+1} = \mu(X_t)\Delta t + \sigma(X_t)\sqrt{\Delta t} W_t$
with $W_t \sim N(0,1)$: if one knows the values of $\Delta X_t$, one can unambiguously recover the driving noise $W$. In that sense, one can view $X$ as a transformed version of $W$.
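For concreteness, here is a minimal R sketch of this recovery (the choices of $\mu$ and $\sigma$ below are arbitrary illustrations, with $\sigma$ bounded away from zero):
```
set.seed(1)
n <- 1000; dt <- 1e-3
mu    <- function(x) -x              # illustrative drift
sigma <- function(x) sqrt(1 + x^2)   # illustrative, non-vanishing volatility
W.inc <- rnorm(n)                    # i.i.d. N(0,1) driving noise
X <- numeric(n + 1)                  # X[1] = 0
for(t in 1:n)
  X[t+1] <- X[t] + mu(X[t]) * dt + sigma(X[t]) * sqrt(dt) * W.inc[t]
## recover the driving noise from the increments of X
W.rec <- (diff(X) - mu(X[1:n]) * dt) / (sigma(X[1:n]) * sqrt(dt))
max(abs(W.rec - W.inc))              # ~ 0, up to floating-point error
```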
-
1
A (one-dimensional) diffusion is expressed fairly explicitly in terms of so-called "scale measure" and "speed measure", you can easily find the formula in old classical textbooks. – zhoraster Jan 24 2011 at 18:30
@Zhoraster Thanks - I'll work through the material in Karatzas & Shreve. – Simon Lyons Jan 24 2011 at 18:32
I mean, expressed as a transform of a Wiener process path. – zhoraster Jan 24 2011 at 18:33
Just give a shout if you don't find the formula, I'll find a reference. – zhoraster Jan 24 2011 at 18:34
## 6 Answers
Hi,
This is an interesting question.
I don't have a complete answer to your question, rather some leads, but it seems to me that what you need to show for solutions of your SDE is that the natural filtration of $X$ is the same as the natural filtration of $W$.
Once you have this, then by some kind of Doob's lemma you should be able to write your Brownian path as a "measurable" functional of the path of $X$ (i.e. $W_t=f((X_s,s\le t))$ for some functional $f$). This is not a constructive way of showing the result though (i.e. you only get existence).
Anyway, I think this is not the case for a very broad class of SDEs, even if I don't have a counterexample at hand, but there must be some literature about this (maybe in Revuz and Yor's book).
You can also look at the Lamperti transform (beware, I think there are two Lamperti transforms in the literature), which says that under some conditions you can transform an SDE of the form $dX_t=\sigma(X_t,t)dW_t$ into an SDE of the form $dX_t=\mu_{\sigma}(X_t,t)dt+dB_t$, but I can't remember if this is done in a path-by-path way (i.e. if $B_t = f(W_\cdot)$ where $f$ is a function over the path space). You should have a look at the proof yourself. Here is a paper where the Lamperti transform is discussed, "Moller, Madsen - From State Dependent Diffusion to Constant Diffusion in SDEs by the Lamperti Transformation", and the references therein.
Best Regards and let us know if you find something interesting
-
Thanks, this is the type of idea I was looking for. Yes, I can see that this kind of strategy will not work for many SDEs. Perhaps one could construct a counterexample based on Tanaka's formula. I'll take a look at the Lamperti transform. – Simon Lyons Jan 25 2011 at 15:56
The pleasure is mine if I was of any help – The Bridge Jan 25 2011 at 16:10
Simon, you are right: for $X_t=|B_t|$ it seems difficult to rebuild $B_t$ from the paths of $X_t$. – The Bridge Jan 25 2011 at 17:49
As other have said, in the one dimensional case at least, you can suppose that the volatility is constant. Then the solution of the SDE is nothing else than a solution of the integral equation $$X(t) = \int_0^t \mu(X_s) ds + \sigma W_t \qquad \forall t \in [0,T].$$ You can then check that if $\mu(\cdot)$ is a Lipschitz function, say, then the function $\Psi$ that maps $(W_t)_{t \in [0,T]}$ to the solution of the above integral equation is continuous (Gronwall Lemma) on $C([0,T],\mathbb{R})$ with the supremum norm. Hence you can indeed write $X = \Psi(W)$ and see the path $(X_t)_{t \in [0,T]}$ as a 'deformation' of the Brownian path $(W_t)_{t \in [0,T]}$. The function $\Psi: C([0,T],\mathbb{R}) \to C([0,T],\mathbb{R})$ is sometimes called the 'Ito map' in the literature.
-
You should probably look at the Girsanov's theorem http://en.wikipedia.org/wiki/Girsanov_theorem
The process $X$ induces a probability distribution on the space of continuous functions, and so does the Wiener process $W$. Girsanov's theorem states that the distributions of $X$ and $W$ are equivalent, and gives the Radon-Nikodym derivative explicitly.
-
Hm, Girsanov's theorem isn't really what I'm looking for. It tells you how to transform the laws of $X$ and $W$, whereas I'm looking for a transform on the paths themselves, if such a thing exists. That said, it wasn't me who voted your answer down. – Simon Lyons Jan 24 2011 at 20:15
1. If you want to have $X_t$ as a "deformed" $W_t$, I first advise assuming $\sigma\neq 0$ a.s. Otherwise you will have some problems (at such points the dynamics may be almost deterministic).
2. If $\mu = 0$ then you can just change the time, since all continuous martingales are time-changed Brownian motions (it seems to me that zhoraster was talking about something similar).
3. If $\mu \neq 0$ then you can apply a change of measure plus a change of time; but since you do not want to apply a change of measure, please make your question more precise. What do you want? If we have the functions $t^2$ and $t$, are their paths "homeomorphic"? I would ask the same question about all functions of bounded variation.
-
Another way to approach the problem is as follows.
One can notice that $X$ is a semimartingale (probably under some mild assumptions on $\sigma,\mu$). The martingale part $M$ of $X$ can be represented as
$$M_t = B_{\langle M,M\rangle_t},$$ where $B$ is some Brownian motion. This is known as the "Dambis, Dubins-Schwarz theorem" (see e.g. Chapter V of Revuz, Yor - "Continuous martingales ...").
-
Suppose $\mu$ and $\sigma$ are sufficiently well-behaved so that we may define $$B_t := - \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\,ds + \int_0^t \frac1{\sigma(X_s)}\,dX_s.$$ This should be the case, for example, if $\mu$ and $\sigma$ are globally Lipschitz with linear growth and $|\sigma|$ is bounded below. We then obtain $$B_t = - \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\,ds + \int_0^t\frac{\mu(X_s)}{\sigma(X_s)}\,ds + \int_0^t \,dW_s = W_t.$$ If $\sigma$ is continuous, then the stochastic integral in the definition of $B$ can be realized as a limit in probability of left-endpoint Riemann sums, and so it is evident that $B$ (i.e. $W$) is adapted to the natural filtration of $X$. Since $X$ is also adapted to the natural filtration of $W$, these two filtrations must be the same.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435127973556519, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/dipole+coordinate-systems
|
# Tagged Questions
### Force from point charge on perfect dipole
Have a point charge and a perfect dipole $\vec{p}$ a distance $r$ away. Angle between $\vec{p}$ and $\hat{r}$ is $\theta$. Want to find force on dipole. I'm having more than a little difficulty ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442182779312134, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/45858/root-mean-square-value-for-dc/45859
|
Root mean square value for DC
The mean value of an alternating current comes out to be zero because half of the cycle is positive while the other half is negative. So we take the root mean square value of the alternating current, given by:
$I_{rms} = 0.707\times I_{max}$
But why do we take the "rms" value for direct current? If we take the simple average of a direct current, we come up with a value which is not zero (whereas it is zero for AC).
-
3 Answers
The notion of RMS voltage originated with electrical engineers trying to calculate the power dissipated in a resistive element. Copying from the wiki:
Let the average power dissipated in a resistor $R$ be $P_{avg}$. Then,
$$P_{avg} = \langle \frac{v(t)^2}{R} \rangle$$, where $\langle f \rangle$ is the average value of the function $f$.
$$P_{avg}=\frac{1}{R} \times \langle v(t)^2\rangle$$
But we also have $V_{rms}^2 = \langle v(t)^2\rangle$ by the definition of the RMS value. Hence,
$$P_{avg}=\frac{V_{rms}^2}{R}$$
The RMS voltage is just a representative value of the voltage which gives you the average power in a resistive load when plugged into the familiar DC formula.
-
I'm trying very hard to understand your answer, but I still don't get why we take the RMS value for DC current. – Muhammad Rafique Dec 4 '12 at 15:08
The point I'm making is that electrical engineers need some representative value for AC current/voltage in a resistive load. It's just a coincidence that this representative value is the same as the root mean square value of the voltage/current. – Vineet Menon Dec 5 '12 at 9:00
Keep in mind that the root mean square is slightly different than simply the arithmetic mean value (and in fact will always be equal to or greater than the mean value).
The reason that the RMS value is calculated for direct current so often is that it is the quantity used in calculations such as the average power dissipated by a time-varying current or voltage. It works generally: it correctly gives the power dissipated by an alternating current as well, where a standard arithmetic mean value would not give an accurate answer.
For a bit more clarification, see what Wikipedia has to say on the matter.
Hope this helps!
-
AC performs work when the voltage is negative, as well as when it is positive. This means that the "average" is an invalid measure for current or power. Instead, RMS is used.
With RMS the quantity is first squared, to "flip" the negative values to positive. It is then averaged (integrated over a period and divided by that period), and finally the square root is taken to remove the "error" caused by squaring.
If you perform this operation with a DC voltage, the result is simply the DC voltage. If you do it with a sinusoidal voltage, you end up with 0.707 times the peak value (amplitude) of the sinewave. With other waveforms the result will differ. For some examples, see the Wikipedia entry.
EDIT To clarify the mysterious conversion factor for sinewaves, $\frac{1}{\sqrt2} = 0.707$. This is also the factor you'll notice in the Wikipedia link.
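To make the procedure concrete, here is the calculation for a constant (DC) current $I$ and for a sinusoid, taking the average over a whole number of periods $T$:
$$I_{rms,DC} = \sqrt{\frac{1}{T}\int_0^T I^2\,dt} = I \qquad I_{rms,AC} = \sqrt{\frac{1}{T}\int_0^T I_{max}^2\sin^2(\omega t)\,dt} = \frac{I_{max}}{\sqrt{2}} \approx 0.707\,I_{max}$$
using $\langle \sin^2 \rangle = \frac{1}{2}$ over a full period.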
-
as per your argument, wouldn't it be better to take a modulo average?? $|v|$ – Vineet Menon Dec 4 '12 at 5:20
It is actually $\sqrt{2}/2=0.707$ – Jaime Dec 4 '12 at 6:28
The problem with that is that AC is not steady like DC. RMS allows for the constantly changing voltage. You need to integrate the waveform rather than average it. @Jaime You're right of course, but then, so am I. Remember that √2/2 = 1/√2 – hdhondt Dec 5 '12 at 22:40
1
Must have read your answer too fast, I recall having read $\sqrt{2}=0.707$... – Jaime Dec 5 '12 at 22:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409341216087341, "perplexity_flag": "head"}
|
http://openwetware.org/index.php?title=User:Timothee_Flutre/Notebook/Postdoc/2011/11/10&diff=672771&oldid=565816
|
# User:Timothee Flutre/Notebook/Postdoc/2011/11/10
## Revision as of 18:41, 3 February 2013
## Bayesian model of univariate linear regression for QTL detection
This page aims at helping people like me, interested in quantitative genetics, to get a better understanding of some Bayesian models, most importantly the impact of the modeling assumptions as well as the underlying maths. It starts with a simple model, and gradually increases the scope to relax assumptions. See references to scientific articles at the end.
• Data: let's assume that we obtained data from N individuals. We note $y_1,\ldots,y_N$ the (quantitative) phenotypes (e.g. expression levels at a given gene), and $g_1,\ldots,g_N$ the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).
• Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.
• Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there is no hidden confounding factors in the phenotypes.
• Likelihood: we start by writing the usual linear regression for one individual
$\forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \; \text{ with } \; \epsilon_i \; \overset{i.i.d}{\sim} \; \mathcal{N}(0,\tau^{-1})$
where $\beta_1$ is in fact the additive effect of the SNP, denoted $a$ from now on, and $\beta_2$ is the dominance effect of the SNP, $d = a k$.
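To see what $a$ and $d$ mean, note that under this parametrization the expected phenotypes of the three genotype classes are $\mathrm{E}[y_i|g_i=0] = \mu$, $\mathrm{E}[y_i|g_i=1] = \mu + a + d$ and $\mathrm{E}[y_i|g_i=2] = \mu + 2a$: $a$ is half the difference between the two homozygote means, and $d$ is the deviation of the heterozygote mean from their midpoint.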
Let's now write the model in matrix notation:
$Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T$
This gives the following multivariate Normal distribution for the phenotypes:
$Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N)$
Even though we can write the likelihood as a multivariate Normal, I still keep the term "univariate" in the title because the regression has a single response, Y. It is usual to keep the term "multivariate" for the case where there is a matrix of responses (i.e. multiple phenotypes).
The likelihood of the parameters given the data is therefore:
$\mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B)$
$\mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right)$
• Priors: we use the usual conjugate prior
$\mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau)$
A Gamma distribution for τ:
$\tau \sim \Gamma(\kappa/2, \, \lambda/2)$
which means:
$\mathsf{P}(\tau) = \frac{\frac{\lambda}{2}^{\kappa/2}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau}$
And a multivariate Normal distribution for B:
$B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = diag(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2)$
which means:
$\mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right)$
• Joint posterior (1):
$\mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau)$
• Conditional posterior of B:
$\mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)}$
Let's neglect the normalization constant for now:
$\mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)$
Similarly, let's keep only the terms in $B$ for the moment (to lighten notation, the factor $-\frac{\tau}{2}$ inside the exponentials is also dropped until we reassemble the kernel below):
$\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Sigma_B^{-1} B) exp((Y-XB)^T(Y-XB))$
We expand:
$\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Sigma_B^{-1} B - Y^TXB -B^TX^TY + B^TX^TXB)$
We factorize some terms:
$\mathsf{P}(B | Y, X, \tau) \propto exp(B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB -B^TX^TY)$
Importantly, let's define:
$\Omega = (\Sigma_B^{-1} + X^TX)^{-1}$
We can see that $\Omega^T = \Omega$, which means that $\Omega$ is a symmetric matrix. This is particularly useful here because we can use the following equality: $\Omega^{-1}\Omega^T = I$.
$\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Omega^{-1} B - (X^TY)^T\Omega^{-1}\Omega^TB -B^T\Omega^{-1}\Omega^TX^TY)$
This now becomes easy to factorize completely:
$\mathsf{P}(B | Y, X, \tau) \propto exp((B - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY))$
We recognize the kernel of a Normal distribution, allowing us to write the conditional posterior as:
$B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega)$
• Posterior of τ:
Similarly to the equations above:
$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau)$
But now, to handle the second term, we need to integrate over B, thus effectively taking into account the uncertainty in B:
$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B$
Again, we use the priors and likelihoods specified above (but everything inside the integral is kept inside it, even if it doesn't depend on B!):
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} exp(-\frac{\tau}{2} B^T \Sigma_B^{-1} B) exp(-\frac{\tau}{2} (Y - XB)^T (Y - XB)) \mathsf{d}B$
As we used a conjugate prior for $\tau$, we know that we expect a Gamma distribution for the posterior. Therefore, we can take $\tau^{N/2}$ out of the integral and start guessing what looks like a Gamma distribution. We also factorize inside the exponential:
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} exp \left[-\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B$
We recognize the conditional posterior of B. This allows us to use the fact that the pdf of the Normal distribution integrates to one:
$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} exp\left[-\frac{\tau}{2} (Y^T Y - Y^T X \Omega X^T Y) \right]$
We finally recognize a Gamma distribution, allowing us to write the posterior as:
$\tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} (Y^T Y - Y^T X \Omega X^T Y + \lambda) \right)$
• Joint posterior (2): sometimes it is said that the joint posterior follows a Normal Inverse Gamma distribution:
$B, \tau | Y, X \sim \mathcal{N}IG(\Omega X^TY, \; \tau^{-1}\Omega, \; \frac{N+\kappa}{2}, \; \frac{\lambda^\ast}{2})$
where $\lambda^\ast = Y^T Y - Y^T X \Omega X^T Y + \lambda$
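As a small illustration (my own sketch, not from Servin & Stephens; the hyperparameter values are arbitrary and the MASS package is assumed for the multivariate Normal draws), one can draw exact samples from this joint posterior by first drawing $\tau$ from its Gamma posterior and then $B | \tau$ from its Normal conditional posterior:
```
post.sample <- function(G, Y, kappa=1, lambda=1, sigma.mu=10, sigma.a=0.5,
                        sigma.d=0.125, n.samples=1000){
  N <- length(Y)
  X <- cbind(rep(1,N), G, G == 1)
  inv.Sigma.B <- diag(1 / c(sigma.mu^2, sigma.a^2, sigma.d^2))
  Omega <- solve(inv.Sigma.B + t(X) %*% X)
  post.B.mean <- Omega %*% t(X) %*% Y            # posterior mean of B given tau
  lambda.star <- as.numeric(t(Y) %*% Y - t(Y) %*% X %*% Omega %*% t(X) %*% Y + lambda)
  tau <- rgamma(n.samples, shape=(N + kappa)/2, rate=lambda.star/2)
  B <- t(sapply(tau, function(tau.s)
    MASS::mvrnorm(n=1, mu=post.B.mean, Sigma=Omega / tau.s)))
  colnames(B) <- c("mu", "a", "d")
  list(tau=tau, B=B)
}
```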
• Marginal posterior of B: we can now integrate out τ:
$\mathsf{P}(B | Y, X) = \int \mathsf{P}(\tau) \mathsf{P}(B | Y, X, \tau) \mathsf{d}\tau$
$\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}}}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \int \tau^{\frac{N+\kappa+3}{2}-1} exp \left[-\tau \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right) \right] \mathsf{d}\tau$
Here we recognize the formula to integrate the Gamma function:
$\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}} \Gamma(\frac{N+\kappa+3}{2})}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right)^{-\frac{N+\kappa+3}{2}}$
And we now recognize a multivariate Student's t-distribution:
$\mathsf{P}(B | Y, X) = \frac{\Gamma(\frac{N+\kappa+3}{2})}{\Gamma(\frac{N+\kappa}{2}) \pi^\frac{3}{2} |\lambda^\ast \Omega|^{\frac{1}{2}} } \left( 1 + \frac{(B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{\lambda^\ast} \right)^{-\frac{N+\kappa+3}{2}}$
We hence can write:
$B | Y, X \sim \mathcal{S}_{N+\kappa}(\Omega X^TY, \; (Y^T Y - Y^T X \Omega X^T Y + \lambda) \Omega)$
• Bayes Factor: one way to answer our goal above ("is there an effect of the genotype on the phenotype?") is to do hypothesis testing.
We want to test the following null hypothesis:
$H_0: \; a = d = 0$
In Bayesian modeling, hypothesis testing is performed with a Bayes factor, which in our case can be written as:
$\mathrm{BF} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a = 0, d = 0)}$
We can shorten this into:
$\mathrm{BF} = \frac{\mathsf{P}(Y | X)}{\mathsf{P}_0(Y)}$
Note that, compared to frequentist hypothesis testing, which focuses on the null, the Bayes factor requires us to explicitly model the data under the alternative. This makes a big difference when interpreting the results (see below).
$\mathsf{P}(Y | X) = \int \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) \mathsf{d}\tau$
First, let's calculate what is inside the integral:
$\mathsf{P}(Y | X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)}{\mathsf{P}(B | Y, X, \tau)}$
Using the formula obtained previously and doing some algebra gives:
$\mathsf{P}(Y | X, \tau) = \left( \frac{\tau}{2 \pi} \right)^{\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} exp\left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY) \right)$
Now we can integrate out τ (note the small typo in equation 9 of supplementary text S1 of Servin & Stephens):
$\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \frac{\frac{\lambda}{2}^{\frac{\kappa}{2}}}{\Gamma(\frac{\kappa}{2})} \int \tau^{\frac{N+\kappa}{2}-1} exp \left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY + \lambda) \right)$
Inside the integral, we recognize the almost-complete pdf of a Gamma distribution. As it has to integrate to one, we get:
$\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$
We can also use this expression under the null. In this case, as we need neither $a$ nor $d$, $B$ is simply $\mu$, $\Sigma_B$ is $\sigma_{\mu}^2$ and $X$ is a vector of 1's. We can also define $\Omega_0 = ((\sigma_{\mu}^2)^{-1} + N)^{-1}$. In the end, this gives:
$\mathsf{P}_0(Y) = (2\pi)^{-\frac{N}{2}} \frac{|\Omega_0|^{\frac{1}{2}}}{\sigma_{\mu}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$
We can therefore write the Bayes factor:
$\mathrm{BF} = \left( \frac{|\Omega|}{\Omega_0} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda} \right)^{-\frac{N+\kappa}{2}}$
When the Bayes factor is large, we say that there is enough evidence in the data to support the alternative. Indeed, the Bayesian testing procedure corresponds to measuring support for the specific alternative hypothesis compared to the null hypothesis. Importantly, note that, for a frequentist testing procedure, we would say that there is enough evidence in the data to reject the null. However we wouldn't say anything about the alternative as we don't model it.
The threshold to say that a Bayes factor is large depends on the field. It is possible to use the Bayes factor as a test statistic when doing permutation testing, and then control the false discovery rate. This can give an idea of a reasonable threshold.
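For instance, a sketch of this permutation procedure, reusing the BF() function defined below with a single, arbitrary $(\sigma_a, \sigma_d)$ pair:
```
perm.pvalue <- function(G, Y, sigma.a=0.5, sigma.d=0.125, n.perms=1000){
  obs <- BF(G, Y, sigma.a, sigma.d)                  # observed log10(BF)
  perms <- replicate(n.perms, BF(G, sample(Y), sigma.a, sigma.d))
  (1 + sum(perms >= obs)) / (1 + n.perms)            # permutation P-value
}
```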
• Hyperparameters: the model has 5 hyperparameters, $\{\kappa, \, \lambda, \, \sigma_{\mu}, \, \sigma_a, \, \sigma_d\}$. How should we choose them?
Such a question is never easy to answer. But note that not all hyperparameters are equally important, especially in typical quantitative genetics applications. We are mostly interested in those that determine the magnitude of the effects, $\sigma_a$ and $\sigma_d$, so let's deal with the others first.
As explained in Servin & Stephens, the posteriors for $\tau$ and $B$ change appropriately with shifts ($y + c$) and scalings ($y \times c$) of the phenotype when taking the limits $\sigma_{\mu} \rightarrow \infty$, $\lambda \rightarrow 0$ and $\kappa \rightarrow 0$. This also gives us a new Bayes factor, the one used in practice (see Guan & Stephens, 2008):
$\mathrm{lim}_{\sigma_{\mu} \rightarrow \infty \; ; \; \lambda \rightarrow 0 \; ; \; \kappa \rightarrow 0 } \; \mathrm{BF} = \left( \frac{N}{|\Sigma_B^{-1} + X^TX|} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX (\Sigma_B^{-1} + X^TX)^{-1} X^TY}{Y^TY - N \bar{Y}^2} \right)^{-\frac{N}{2}}$
Now, for the important hyperparameters, σa and σd, it is usual to specify a grid of values, i.e. M pairs (σa,σd). For instance, Guan & Stephens used the following grid:
$M=4 \; ; \; \sigma_a \in \{0.05, 0.1, 0.2, 0.4\} \; ; \; \sigma_d = \frac{\sigma_a}{4}$
Then, we can average the Bayes factors obtained over the grid using, as a first approximation, equal weights:
$\mathrm{BF} = \sum_{m \, \in \, \text{grid}} \frac{1}{M} \, \mathrm{BF}(\sigma_a^{(m)}, \sigma_d^{(m)})$
In eQTL studies, the weights can be estimated from the data using a hierarchical model (see below), by pooling all genes together as in Veyrieras et al (PLoS Genetics, 2010).
• Implementation: the following R function is adapted from Servin & Stephens supplementary text 1.
```BF <- function(G=NULL, Y=NULL, sigma.a=NULL, sigma.d=NULL, get.log10=TRUE){
stopifnot(! is.null(G), ! is.null(Y), ! is.null(sigma.a), ! is.null(sigma.d))
subset <- complete.cases(Y) & complete.cases(G)
Y <- Y[subset]
G <- G[subset]
stopifnot(length(Y) == length(G))
N <- length(G)
X <- cbind(rep(1,N), G, G == 1)
inv.Sigma.B <- diag(c(0, 1/sigma.a^2, 1/sigma.d^2))
inv.Omega <- inv.Sigma.B + t(X) %*% X
inv.Omega0 <- N
tY.Y <- t(Y) %*% Y
log10.BF <- as.numeric(0.5 * log10(inv.Omega0) -
0.5 * log10(det(inv.Omega)) -
log10(sigma.a) - log10(sigma.d) -
(N/2) * (log10(tY.Y - t(Y) %*% X %*% solve(inv.Omega)
%*% t(X) %*% cbind(Y)) -
log10(tY.Y - N*mean(Y)^2)))
if(get.log10)
return(log10.BF)
else
return(10^log10.BF)
}
```
In the same vein as what is explained here, we can simulate data under different scenarios and check the BFs:
```N <- 300 # play with it
PVE <- 0.1 # play with it
grid <- c(0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2)
MAF <- 0.3
G <- rbinom(n=N, size=2, prob=MAF)
tau <- 1
a <- sqrt((2/5) * (PVE / (tau * MAF * (1-MAF) * (1-PVE))))
d <- a / 2
mu <- rnorm(n=1, mean=0, sd=10)
Y <- mu + a * G + d * (G == 1) + rnorm(n=N, mean=0, sd=tau)
for(m in 1:length(grid))
print(BF(G, Y, grid[m], grid[m]/4))
```
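To average the Bayes factors over the grid with equal weights, as in the formula above, one could then add (a sketch; remember that BF() returns log10 values by default):
```
log10.BFs <- sapply(grid, function(s.a) BF(G, Y, s.a, s.a/4))
log10(mean(10^log10.BFs))   # log10 of the equally-weighted average BF
```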
• Binary phenotype: using a similar notation, we model case-control studies with a logistic regression where the probability to be a case is $\mathsf{P}(y_i = 1) = p_i$.
There are many equivalent ways to write the likelihood, the usual one being:
$y_i | p_i \; \overset{i.i.d}{\sim} \; Bernoulli(p_i)$ with the log-odds (logit function) being $\mathrm{ln} \frac{p_i}{1 - p_i} = \mu + a \, g_i + d \, \mathbf{1}_{g_i=1}$
Let's use $X_i^T=[1 \; g_i \; \mathbf{1}_{g_i=1}]$ to denote the i-th row of the design matrix X. We can also keep the same definition as above for $B=[\mu \; a \; d]^T$. Thus we have:
$p_i = \frac{e^{X_i^TB}}{1 + e^{X_i^TB}}$
As the yi's can only take 0 and 1 as values, the likelihood can be written as:
$\mathcal{L}(B) = \mathsf{P}(Y | X, B) = \prod_{i=1}^N p_i^{y_i} (1-p_i)^{1-y_i}$
We still use the same prior as above for B (but there is no τ anymore), so that:
$B | \Sigma_B \sim \mathcal{N}_3(0, \Sigma_B)$
where ΣB is a 3 x 3 matrix with $[\sigma_\mu^2 \; \sigma_a^2 \; \sigma_d^2]$ on the diagonal and 0 elsewhere.
As above, the Bayes factor is used to compare the two models:
$\mathrm{BF} = \frac{\mathsf{P}(Y | X, M1)}{\mathsf{P}(Y | X, M0)} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a=0, d=0)} = \frac{\int \mathsf{P}(B) \mathsf{P}(Y | X, B) \mathrm{d}B}{\int \mathsf{P}(\mu) \mathsf{P}(Y | X, \mu) \mathrm{d}\mu}$
The interesting point here is that there is no way to analytically calculate these integrals (marginal likelihoods). Therefore, we will use Laplace's method to approximate them, as in Guan & Stephens (2008).
Starting with the numerator:
$\mathsf{P}(Y|X,M1) = \int \exp \left[ N \left( \frac{1}{N} \mathrm{ln} \, \mathsf{P}(B) + \frac{1}{N} \mathrm{ln} \, \mathsf{P}(Y | X, B) \right) \right] \mathsf{d}B$
$\mathsf{P}(Y|X,M1) = \int \exp \left\{ N \left[ \frac{1}{N} \left( \mathrm{ln} \left( (2 \pi)^{-\frac{3}{2}} \, \frac{1}{\sigma_\mu \sigma_a \sigma_d} \, \exp\left( -\frac{1}{2} (\frac{\mu^2}{\sigma_\mu^2} + \frac{a^2}{\sigma_a^2} + \frac{d^2}{\sigma_d^2}) \right) \right) \right) + \frac{1}{N} \left( \sum_{i=1}^N \left( y_i \, \mathrm{ln} (p_i) + (1-y_i) \, \mathrm{ln} (1-p_i) \right) \right) \right] \right\} \mathsf{d}B$
Let's use f to denote the function inside the exponential:
$\mathsf{P}(Y|X,M1) = \int \exp \left( N \; f(B) \right) \mathsf{d}B$
The function f is defined by:
$f: \mathbb{R}^3 \rightarrow \mathbb{R}$
$f(B) = \frac{1}{N} \left( -\frac{3}{2} \mathrm{ln}(2 \pi) - \frac{1}{2} \mathrm{ln}(|\Sigma_B|) - \frac{1}{2}(B^T \Sigma_B^{-1} B) \right) + \frac{1}{N} \sum_{i=1}^N \left( y_i \, X_i^T B - \mathrm{ln}(1 + e^{X_i^TB}) \right)$
This function will then be used to approximate the integral, like this:
$\mathsf{P}(Y|X,M1) \approx N^{-3/2} (2 \pi)^{3/2} |H(B^\star)|^{-1/2} e^{N f(B^\star)}$
where H is the Hessian of f (negative definite at the maximum, so $|H(B^\star)|$ should be read as the absolute value of its determinant) and $B^\star = [\mu^\star \; a^\star \; d^\star]^T$ is the point at which f is maximized.
We therefore need to find $B^\star$. As it maximizes f, we need to calculate the first derivatives of f. Let's do this the univariate way:
$\frac{\partial f}{\partial \beta} = - \frac{\beta}{N \, \sigma_\beta^2} + \frac{1}{N} \sum_{i=1}^N \left(\frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right) \frac{\partial p_i}{\partial \beta}$
where β is μ, a or d.
A simple form for the first derivatives of pi also exists when writing $p_i = e^{X_i^tB} (1 + e^{X_i^tB})^{-1}$:
$\frac{\partial p_i}{\partial \beta} = \left[ e^{X_i^tB} (1 + e^{X_i^tB})^{-1} + e^{X_i^tB} \left( -e^{X_i^tB} (1 + e^{X_i^tB})^{-2} \right) \right] \frac{\partial X_i^TB}{\partial \beta}$
$\frac{\partial p_i}{\partial \beta} = \left[ \frac{e^{X_i^tB} (1 + e^{X_i^tB}) - (e^{X_i^tB})^2}{(1 + e^{X_i^tB})^2} \right] \frac{\partial X_i^TB}{\partial \beta}$
$\frac{\partial p_i}{\partial \beta} = \left[ p_i (1 - p_i) \right] \frac{\partial X_i^TB}{\partial \beta}$
where $\frac{\partial X_i^TB}{\partial \beta}$ is equal to $1, \, g_i, \, \mathbf{1}_{g_i=1}$ when β corresponds respectively to $\mu, \, a, \, d$.
This simplifies the first derivatives of f into:
$\frac{\partial f}{\partial \beta} = - \frac{\beta}{N \, \sigma_\beta^2} + \frac{1}{N} \sum_{i=1}^N (y_i - p_i ) \frac{\partial X_i^TB}{\partial \beta}$
When setting $\frac{\partial f}{\partial \beta}(\beta^\star) = 0$, we observe that $\beta^\star$ is present not only alone but also inside the sum, in the pi's: indeed pi is a non-linear function of B. This means that an iterative procedure is required, typically Newton's method.
To use it, we need the second derivatives of f:
$\frac{\partial^2 f}{\partial \beta^2} = - \frac{1}{N \, \sigma_\beta^2} + \frac{1}{N} \sum_{i=1}^N \left[ -p_i(1-p_i)\left(\frac{\partial X_i^TB}{\partial \beta}\right)^2 + (y_i-p_i)\frac{\partial^2 X_i^TB}{\partial \beta^2} \right]$
The second derivatives of $X_i^TB$ are all equal to 0:
$\frac{\partial^2 f}{\partial \beta^2} = - \frac{1}{N \, \sigma_\beta^2} - \frac{1}{N} \sum_{i=1}^N p_i(1-p_i)\left(\frac{\partial X_i^TB}{\partial \beta}\right)^2$
Note that the second derivatives of f are strictly negative (and, more generally, the Hessian of f is negative definite). Therefore, f is globally concave, which means that it has a unique global maximum, at $B^\star$. As a consequence, we have the right to use Laplace's method to approximate the integral around its maximum.
finding the maximum: iterative procedure, update equations or a generic solver -> to do (a rough sketch is given below)
implementation: in R -> to do
finding the effect sizes and their standard errors: to do
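As a starting point for these to-do items, here is a rough, hypothetical sketch in R (not the authors' implementation). It finds $B^\star$ by Newton-Raphson using the gradient and Hessian derived above (up to the common $1/N$ factor, which changes neither the maximum nor the Newton updates), and returns the Laplace approximation of the log10 marginal likelihood. The effect-size priors, the convergence tolerance and the finite value standing in for $\sigma_\mu$ in the toy usage are arbitrary choices for illustration.

```
## Hypothetical sketch: MAP estimate by Newton-Raphson + Laplace approximation
## of log10 P(Y|X,M) for the logistic model above.
laplace.log10.marglik <- function(X, Y, sigmas){
  stopifnot(ncol(X) == length(sigmas))
  inv.Sigma.B <- diag(1 / sigmas^2, nrow=length(sigmas))
  B <- rep(0, ncol(X))
  for(iter in 1:100){
    p <- as.numeric(1 / (1 + exp(- X %*% B)))
    grad <- t(X) %*% (Y - p) - inv.Sigma.B %*% B   # gradient of log prior + log lik
    W <- diag(p * (1 - p), nrow=nrow(X))
    hess <- - t(X) %*% W %*% X - inv.Sigma.B       # Hessian (negative definite)
    delta <- solve(hess, grad)
    B <- as.numeric(B - delta)                     # Newton-Raphson update
    if(max(abs(delta)) < 1e-8) break
  }
  p <- as.numeric(1 / (1 + exp(- X %*% B)))
  W <- diag(p * (1 - p), nrow=nrow(X))
  hess <- - t(X) %*% W %*% X - inv.Sigma.B         # Hessian at (approximately) B*
  log.prior <- sum(dnorm(B, mean=0, sd=sigmas, log=TRUE))
  log.lik <- sum(Y * log(p) + (1 - Y) * log(1 - p))
  ## Laplace: log P(Y|X,M) ~ log prior(B*) + log lik(B*) + (d/2) log(2 pi) - 0.5 log |-H(B*)|
  log.ML <- log.prior + log.lik + 0.5 * ncol(X) * log(2*pi) -
    0.5 * as.numeric(determinant(- hess, logarithm=TRUE)$modulus)
  log.ML / log(10)
}

## Toy usage: simulate a binary phenotype; sigma_mu = 10 is an arbitrary finite
## stand-in for the flat prior on mu, and the effect sizes are made up.
N <- 300; MAF <- 0.3
G <- rbinom(n=N, size=2, prob=MAF)
X1 <- cbind(rep(1,N), G, G == 1)
p.true <- 1 / (1 + exp(-(-0.5 + 0.6 * G + 0.3 * (G == 1))))
Y <- rbinom(n=N, size=1, prob=p.true)
log10.BF <- laplace.log10.marglik(X1, Y, sigmas=c(10, 0.4, 0.1)) -
  laplace.log10.marglik(cbind(rep(1,N)), Y, sigmas=10)
log10.BF
```

The approximate $\log_{10} \mathrm{BF}$ is the difference between the two approximated log marginal likelihoods, and averaging over a $(\sigma_a, \sigma_d)$ grid can be done exactly as in the quantitative case above.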
• Link between Bayes factor and P-value: see Wakefield (2008)
to do
• Hierarchical model: pooling genes, learn weights for grid and genomic annotations, see Veyrieras et al (PLoS Genetics, 2010)
to do
• Multiple SNPs with LD: joint analysis of multiple SNPs, handle correlation between them, see Guan & Stephens (Annals of Applied Statistics, 2011) for MCMC, see Carbonetto & Stephens (Bayesian Analysis, 2012) for Variational Bayes
to do
• Confounding factors in phenotype: factor analysis, see Stegle et al (PLoS Computational Biology, 2010)
to do
• Genetic relatedness: linear mixed model, see Zhou & Stephens (Nature Genetics, 2012)
to do
• Discrete phenotype: count data as from RNA-seq, Poisson-like likelihood, see Sun (Biometrics, 2012)
to do
• Multiple phenotypes: matrix-variate distributions, tensors
to do
• Non-independent genes: enrichment in known pathways, learn "modules"
to do
• References:
• Servin & Stephens (PLoS Genetics, 2007)
• Guan & Stephens (PLoS Genetics, 2008)
• Stephens & Balding (Nature Reviews Genetics, 2009)
http://mathoverflow.net/questions/tagged/tensor
## Tagged Questions
0answers
44 views
### Null vector fields given Bondi metric
I'm trying to understand how to compute the null future-directed vector fields if I have a given (Bondi) metric $g=-e^{2\nu}du^{2}-2e^{\nu+\lambda}dudr+r^{2}d\Omega$ with \$d\Omeg …
0answers
61 views
### Geometric interpretation of tracing
Let $(M,g)$ be a Riemannian manifold. Is it true that for any symmetric 2-tensor $\alpha$ we have: $Trace_g(\alpha)=1/\omega_n\int_{S^{n-1}}\alpha(V,V)dvol(V)$ where $w_n$ is the v …
0answers
40 views
### covarient derivative of electromagnetic field tensor
I'm trying to prove the energy momentum tensor in curved spacetime for Electromagnetic field is Divergence-less directly(Without using general lie derivative method which can prove …
0answers
120 views
### How to find the tensor product of modules that we don’t know a basis for them?
Hi I know how to calculate some easy tensor products like $\mathbb{Z}/m\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/n\mathbb{Z}\cong_{\mathbb{Z}} \mathbb{Z}/(m,n)\mathbb{Z}$ or \$F[ …
1answer
159 views
### How many flavors should a notational system offer for rank-1 tensors?
The notation for tensors is like the plumbing in a very old Vermont farmhouse. It may once have been intentionally designed, but after that it just evolved. As an example, it seems …
11answers
3k views
### Why are matrices ubiquitous but hypermatrices rare?
I am puzzled by the amazing utility and therefore ubiquity of two-dimensional matrices in comparison to the relative paucity of multidimensional arrays of numbers, hypermatrices. O …
1answer
532 views
### Representation theory of (anti)self-dual tensors
I am using usual physics notations and I guess the physics motivations of this question are obvious. Let a basis of the $SO(n,m)$ Lie algebra be denoted by $S^{\mu \nu}$ and the …
3answers
2k views
### Geometrical meaning of the Ricci Tensor and its Symmetry
Let $M$ be a smooth, pseudo-Riemannian manifold with $\dim(M) \ge 2.$ Let $\nabla$ be any affine connection on $M$. No reason for it to be the Levi-Civita connection. All we assume …
0answers
210 views
### A property on the Green-St Venant strain tensor
Green-St Venant strain tensor is defined by $E(u)={1\over 2}[\nabla u+(\nabla u)^T+(\nabla u)^T\nabla u]$, where $\nabla u$ is the displacement gradient. Show that \$u\in H^1(\Om …
0answers
205 views
### Tensor products not left exact [closed]
Is there a simple example that shows that the functor $B\otimes_R(-)$ is not left exact, given a ring $R$ and a right $R$-module $B$?
2answers
830 views
### Who coined the name tensor and why?
Who coined the name "tensor" and why? What does the word "tensor" really mean, not the mathematical definition?
0answers
139 views
### Tensor analysis with alpha beta and i j coordinates. [closed]
The covariant differentiation or the levi civita connection represented by the Christoffel symbol, \Gamma_{\alpha,\beta}^{\gamma} \frac{dx^k}{dy^{\gamma}} =?= \Gamma_{i,j} \fra …
0answers
97 views
### Density of divergence in $L^2$ of vector bundles.
Let $V$ be a vector bundle over a smooth, complete Riemannian manifold $M$. In general, the manifold is not compact. Further, denote by $g$ the metric on $M$, the volume measure b …
2answers
414 views
### Tensor algebra question [closed]
1)Why embedding of ( not necessarily finite-dimensional) vector spaces $V\rightarrow W$ produces embedding of tensor algebras $T(V)\rightarrow T(W)$. I can prove it using Hamel ba …
0answers
220 views
### tensor/hypermatrix analogues of $GL(n,\mathbb{C})$?
Please excuse me if this question turns out to be incredibly silly for one reason or another. Are there tensor/hypermatrix analogues of $GL(n,\mathbb{C})$ that are interesting? Wh …
http://mathoverflow.net/questions/34241/laplacian-operator-and-relation-to-the-laplace-transform
## Laplacian operator and relation to the Laplace Transform
I'm trying to understand why the Laplacian operator is used in blob detection in image analysis. I must admit that in trying to figure out why the Laplacian is useful in this application, I've really confused myself with the different uses of the word 'Laplace.' For instance, Wikipedia has many articles on this, and the ones I'm having trouble unifying conceptually are the Laplace Transform and the Laplace Operator.
From co-workers and some reading on the internet, I have come to very shallowly think of my Laplacian convolutions on images as performing something similar to the second derivative, where the most quickly changing areas on the image are what become highlighted in the new, convolved image. From the page on the Laplace Operator this makes a lot of sense. This doesn't make sense to me from the page on the Laplace Transform. My question then, I think, is how are the Laplace Operator and the Laplace Transform related? If I can see, from the definition, that the Laplace Operator is basically doing the second derivative, I would think I should be able to see something similar from the Laplace Transform. But I don't. Am I mistaken in thinking that the Laplace Transform and the Laplace operator are the same thing? How are they related?
-
## 2 Answers
They are certainly not the same thing.
You might sometimes see them appear in the same context because transforms of Laplace-Fourier type are immensely useful for analyzing linear differential operators like the Laplacian. But the Fourier transform has better analytic properties, so that's the one you are more likely to see used.
Here's some intuition you might find helpful.
The discrete Laplacian computes the difference between a node's averaged neighbors and the node itself. It's often used in image processing and that gives an easy way to visualize it. The 1D case where the kernel is [1 -2 1] is especially simple:
In an area of constant color the Laplacian is zero. Indeed, even if you have linear variation it remains zero, e.g. in the neighborhood [1 2 3] the Laplacian's value at the center point is
$$1 \cdot 1 + (-2) \cdot 2 + 3 \cdot 1 = 0.$$
But quadratic and higher-order variation excites the Laplacian and results in non-zero values. Thus it's especially useful for detecting 'jumps'. That's why it's the weapon of choice in edge detection. It's often combined with a Gaussian to pre-filter out any small-scale features or noise that might cause spurious edges to be detected.
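(Added illustration, not part of the original answer: a tiny R check of this behaviour, applying the [1 -2 1] kernel to a signal that is flat, then linear, then flat again; the response is zero everywhere except at the two 'corners'.)

```
# Hypothetical illustration: the 1D discrete Laplacian [1 -2 1] ignores
# constant and linear trends and responds only to curvature ("corners").
x <- c(1, 1, 1, 2, 3, 4, 4, 4)                 # flat, linear ramp, flat
lap <- stats::filter(x, c(1, -2, 1), sides = 2)
as.numeric(lap)                                # NA 0 1 0 0 -1 0 NA
```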
I should mention that the Laplacian in two dimensions and higher is significantly richer than the one-dimensional case might suggest. For one, not all two-dimensional images with a uniformly zero Laplacian are linear. But qualitatively a lot of the same intuition holds true as to how the Laplacian reacts to variation.
-
so when you do a Laplacian convolution, you're not actually doing a Laplacian Transform of the image (similar to how you do a Fourier transform) but instead are convolving with the Laplacian operator? So convolution with the Laplacian operator is different than applying a Laplacian Transformation to the image? – Nick Aug 2 2010 at 12:06
Strictly speaking, the Laplacian is only a convolution operator in the discrete case. But yes, it is absolutely not the same thing as the Laplace transform (which is never called the Laplacian transform, by the way). – Per Vognsen Aug 2 2010 at 12:10
ah, thank you very much. – Nick Aug 2 2010 at 12:15
There is no relation on the basic level between the Laplace operator and Laplace transform. From the point of view of learning about them, put the coincidence of names out of your mind.
-
so the operator is not actually derived from the equation, and the thing they share in common is that they were discovered/used by Laplace? – Nick Aug 2 2010 at 12:03
What they certainly share in common is that both are named after Laplace. Whether he discovered them is a tougher question. – Michael Hardy Aug 2 2010 at 21:20
http://motls.blogspot.cz/2013/03/lhcb-7-sigma-and-9-sigma-anomalies-in.html?m=1
# The Reference Frame
Our stringy Universe from a conservative viewpoint
## Monday, March 04, 2013
### LHCb: $$7$$-$$\sigma$$ and $$9$$-$$\sigma$$ anomalies in CP-violation
First, let me start with Tommaso Dorigo. He reviewed a recent paper by Carena et al. that tried to find the best parameters of the MSSM Higgs sector that are compatible with the LHC experiments so far. Dorigo reprints this graph
for a particular "low $$M_H$$" scenario (subset of MSSM possibilities) where the green bands (especially the dark green band) are still allowed. He concludes by saying
All in all one gets the impression that the "window of opportunity" for the MSSM is closing down. But if you read the paper (written by MSSM enthusiasts) you might get a different idea!
What a dumb criticism of the paper!
The actual reason why you might get a different idea is that neither the authors of the Carena et al. paper nor you are imbeciles controlled by deluded impressions. The right interpretation is that aside from the binary uncertainty (the fact that the allowed regions survive at two places), the LHC is measuring the parameters $$\mu$$ in the low-$$M_H$$ scenario more accurately than before. The scenario doesn't have to be right but the data are encouraging for its future.
The ratio of the two Higgs vevs is about $$4\leq \tan\beta\leq 8$$ while $$\mu$$ either belongs to $$(1200\GeV,1600\GeV)$$ or $$(2500\GeV,3000\GeV)$$ or so. If Dorigo had at least some memory, he would know that just a year (or a year and a half) ago, the Higgs boson was almost exactly in the same situation. For example, in March 2012, the Tevatron confirmed that the Higgs could only exist in the 115-135 GeV interval. The LHC experiments were able to confine the "window of opportunity" to a similar interval several months earlier.
Were the windows of opportunity closing down? No. In March 2012, we actually already knew that the $$124-126\GeV$$ Higgs would be officially discovered within half a year and it turned out to be the case, indeed. It's not surprising that the wrong values are excluded before the right value is officially discovered. After all, we are used to excluding values at the $$2$$-$$\sigma$$ level but discovering things at the $$5$$-$$\sigma$$ level. These apparent "double standards" are followed for a good reason – because only extraordinary claims (about a new particle/effect) require extraordinary ($$5$$-$$\sigma$$) evidence; claims that no new effect with certain values of parameters exists isn't extraordinary so $$2$$-$$\sigma$$ evidence supporting this original "null hypothesis" is considered enough. So most typically, we first exclude almost all the wrong values of the parameters and then we discover the new thing and the right values of the parameters.
It was the case of the Higgs boson and only a foolish victim of circular reasoning could assume that it can't happen again with the supersymmetric particles. Dorigo established that the "windows are closing down" because he has assumed that the windows should close down from the beginning. But nothing like that is indicated by the actual data which show that the low $$M_H$$ scenario of the MSSM is alive and kicking and it has become highly predictive. In fact, the graph above shows how incorrect are the claims of the anti-supersymmetric crackpots who often say that all the parameters are being constantly sent somewhere to the infinity where the new effects are invisible. It ain't the case. The LHC experiments indicate that the right place is in the left middle or right middle of the graph.
But let me talk about someone who is less deranged than Tommaso Dorigo.
Adam Davis of the U.S. LHC blogs dehibernated himself and wrote about
A Puzzling Asymmetry.
In Fall 2012, the LHCb collaboration presented preliminary results at a CKM conference. They investigated three-body decays of B-mesons:
\[
\begin{aligned}
B^\pm &\to \pi^\pm \pi^+\pi^-\\
B^\pm &\to \pi^\pm K^+ K^-
\end{aligned}
\]
For each possible set of momenta of the three final decay products, they measured the CP-asymmetry (proportional to the difference between the decays of one type and the CP-transformed process). And all these decay events were captured by the Dalitz plot, a useful visualization technique invented by R.H. Dalitz 60 years ago.
When a particle decays to two final products, the energy and (the magnitude of) the momentum of each decay product (measured in the rest frame of the initial particle) is determined by the energy-momentum conservation. Imagine a Higgs boson going to two photons: each photon inherits one-half of the Higgs latent energy. However, for three-body decays (such as the three-meson decays of the B-mesons above), there is some freedom how the energy and momentum may be distributed.
Let me do the counting of parameters. For two decay products, there are $$2\times 4$$ components in the final energy-momentum vectors, $$2$$ on-shell conditions, and $$4$$ components of the energy-momentum conservation law. We get $$2\times 4-2-4=2$$ parameters left – something that only determines the point on the sphere (direction of one particle; the other one moves oppositely; the rotational symmetry around the axis of their motion is unbroken) which is irrelevant due to the rotational symmetry, anyway. Both $$2$$ parameters are sacrificed for the rotational symmetry and no interesting freedom is left.
For three decay products, this counting is $$3\times 4 - 3 - 4=5$$ but $$3$$ parameters out of the $$5$$ describe a general $$SO(3)$$ rotation of the three bodies so they don't change the situation qualitatively. Only $$5-3=2$$ parameters label inequivalent energy-momenta of the decay products. They may be written as $$m^2_{\pi^+ K^-}$$ and $$m^2_{K^-K^+}$$ in the case of the decay involving the kaons. And these two parameters are axes in the Dalitz plot. The color in the Dalitz plot may denote the relative contribution of decay events with some particular "geometry of the momenta and distribution of the energies" to the asymmetry or something of the sort.
The resulting pictures look like this:
The Dalitz plots tell us more than just "one overall number" quantifying the asymmetry. They inform us which final geometries of the energy-momentum vectors of the three decay products are dominant in the asymmetry.
I don't want to bore you with precise numbers but there are puzzling regions in which the asymmetry exceeds $$|A|\gt 0.6$$ and the statistical significance of these deviations is over $$7$$-$$\sigma$$ and $$9$$-$$\sigma$$, respectively! This is clearly way too large even after you punish these statistical significances by the look-elsewhere reduction.
The invariant masses of the pairs of mesons where this shocking thing occurs are either several $$\GeV$$s or around $$15\GeV$$. Bizarre. Next time when you hear that the LHCb excludes all new physics, don't forget that quite some breathtaking results have been swept under the rug if not fully censored when such a bold negative statement was made.
In the aggressive case, these asymmetries could be signs of some new – perhaps supersymmetric – particles. Perhaps the same particles that make up the dark matter and that could be announced this or next week. Perhaps sgluinos in some extended gauge sector. Adam Davis proposes a conservative competing theory involving "just a partial interference" of the quantum amplitudes for different processes. I am not sure I understand what it means even though I also believe that the deviation is pretty likely to be due to some incorrect calculation of the predictions, a forgotten subtlety.
It could be fun – and an example of a discovery that could force us to say, sometime in the future, that "we must have been blind and stupid if we were overlooking these complete and unquestionable $$7$$-$$\sigma$$ and $$9$$-$$\sigma$$ proofs".
#### 4 comments:
1. Dilaton
Oh Lumo,
you are very brave to still having the stomach to keep clicking and reading Tommaso Dorigo ;-)
I have definitively and completely stopped clicking him some time around Easter last year, after a very scornful and sarcastic post had appeared on his blog below a title involving the words "Easter lamb" and "SUSY", which clearly revealed Tommaso Dorigo's true colors and similarity (or indistinguishability) to the Trollking and attracted the same scum in the comments etc ...
So I am not in the slightest surprised by his sourballish attitudes concerning the results presented in the new SUSY paper. But every dimwit who is not an abominable sourball notes the similarity of the narrowing down of the allowed parameter space to what happened before the discovery of the Higgs.
It would be great fun, if these CP violation signals were due to some new cool physics. But I rather think too that it is more likely due to some more conservative issues ...
Maybe I should reconsider some stuff I have read, but I am not sure if I have encountered "sgluinos" before :-/ ... ? I mean the gluino is the superpartner of the gluon, but what exactly are sgluinos?
Maybe I am just confused because it is too late ...
2. anna v
"It was the case of the Higgs boson and only a foolish victim of circular
reasoning could assume that it can't happen again with the
supersymmetric particles".
In the past we were not in the habit of making exclusion diagrams at all. Had we made exclusion diagrams on the road to all the discoveries leading to the standard model we would have been also excluding huge tracts of phase space. Some people should read what they are going to put up in a blog before they do so.
3. Exactly, Anna, it's so unremarkable to exclude possible theories - even big chunks of theories - simply because a vast majority of random things with random precise values of parameters we may invent don't exist!
4. Right, Dilaton! I still don't understand where this sourballism is coming from. SUSY, and some other ideas, is a wonderful, beautiful, and viable skeleton so closely resembling - but with some extra modern flavor - so many other cool things in the history of physics that turned out to be right.
What leads the sourballs to "wish" that those things are shown wrong? Of course that things may be wrong before they're fully proved. But there's a very strong case here and if true, this is a huge revolution in physics.
http://mathhelpforum.com/calculus/64652-volume-integral.html
# Thread:
1. ## volume by integral
If we rotate the area between the curve $f(x)=2x^2-x^3$ and the line $y=0$ around the y-axis, how can we find the volume of the resulting solid by the cylindrical shell method?
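(Added sketch, not part of the original post.) The curve meets $y=0$ at $x=0$ and $x=2$ and is non-negative in between, so with cylindrical shells the volume is
$V = \int_0^2 2\pi x\,(2x^2-x^3)\,dx = 2\pi\left[\frac{x^4}{2}-\frac{x^5}{5}\right]_0^2 = 2\pi\left(8-\frac{32}{5}\right) = \frac{16\pi}{5}.$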
http://math.stackexchange.com/questions/2581/cutting-sticks-puzzle/4059
# Cutting sticks puzzle
This was asked on sci.math ages ago, and never got a satisfactory answer.
Given a number of sticks of integral length $\ge n$ whose lengths add to $n(n+1)/2$. Can these always be broken (by cuts) into sticks of lengths $1,2,3, \ldots ,n$?
You are not allowed to glue sticks back together. Assume you have an accurate measuring device.
More formally, is the following conjecture true? (Taken from iwriteiam link below).
Cutting Sticks Conjecture: For all natural numbers $n$, and any given sequence $a_1, .., a_k$ of natural numbers greater or equal $n$ of which the sum equals $n(n+1)/2$, there exists a partitioning $(P_1, .., P_k)$ of $\{1, .., n\}$ such that sum of the numbers in $P_i$ equals $a_i$, for all $1 \leq i \leq k$.
Some links which discuss this problem:
-
11
My interpretation: all sticks' length are ≥ n, and the number of stick is variable. For example, if n = 5, then the stick lengths may be {15}, {10,5}, {9,6}, {8,7} or {5,5,5}. Then we try to break these sticks into {1,2,3,4,5}. – KennyTM Aug 16 '10 at 13:30
1
@Kenny, Yes, that is correct. – deinst Aug 16 '10 at 14:16
2
– anonymous Aug 16 '10 at 15:16
2
You could check MathOverflow – Casebash Aug 16 '10 at 21:04
3
I wonder if there is an analytic way of approaching this, similar to the use of the circle method in Waring's problem? The number of ways of partitioning the set $\{1,\ldots,n\}$ to give lengths $a=(a_1,\ldots,a_k)$ can be written as $$\int_{[0,1]^k}e^{-2\pi ia\cdot x}\prod_{r=1}^n(e^{2\pi irx_1}+\cdots+e^{2\pi irx_k})\,dx_1\ldots dx_k.$$ Maybe it is possible to approximate this integral and show that it is nonzero for $a$ satisfying the required conditions? – George Lowther Sep 18 '11 at 13:53
## 5 Answers
This is not a solution, just something I found that might be relevant.
On the page linked to in the question, a reduction and various strategies are considered. I'll briefly reproduce the reduction, both because I think it's the most useful part of that page and perhaps not everyone will want to read that entire page, and also because I need it to say what I found.
Let a counterexample with minimal $n$ be given. If one of the sticks were of length $n$, we could use that stick as the target stick of length $n$ and cut the remaining sticks into lengths $1$ through $n-1$, since otherwise they would form a smaller counterexample. Likewise, if one of the sticks had length greater than $2n-2$, we could cut off a stick of length $n$ and the remaining sticks would all be of length $\ge n-1$, so again we could cut them into lengths $1$ through $n-1$ because otherwise they would form a smaller counterexample. Thus,
the lengths of the sticks in a counterexample with minimal $n$ must be $\gt n$ and $\lt 2n-1$.
Problem instances that satisfy these conditions for a potential minimal counterexample are called "hard" on that page; I suggest we adopt that terminology here.
The strategies discussed on that page include various ways of forming the target sticks in order of decreasing length. It was found that there are counterexamples both for the strategy of always cutting the next-longest target stick from the shortest possible remaining stick (counterexample $\langle11,12,16,16\rangle$) and for the strategy of always cutting the next-longest target stick from the longest remaining stick unless it already exists (counterexample $\langle10,10,12,13\rangle$), whereas if the stick to cut from was randomized, it was always possible to form the desired sticks up to $n=23$.
I've checked that all hard problem instances up to $n=30$ are solvable, and I found that they remain solvable independent of which stick we cut the target stick of length $n$ from. This is equivalent to saying that a problem instance for $n-1$ can always be solved if all stick lengths except one are $\gt n$ and $\lt 2n-1$ and one is $\lt n-1$, since all of these instances can result from cutting a stick of length $n$ from a hard problem instance for $n$.
I thought that this might be generalized to the solvability of an instance being entirely determined by whether the sticks of length $\le n$ can be cut to form distinct integers, but that's not the case, since it's possible to leave only a few holes below $n$ such that the few remaining sticks above $n$ can't fill them.
-
You are saying that -- empirically, at least -- sticks with total length $n(n+1)/2$ can always be cut into sticks of length $1,2,...,n$ as long as at most one of them is shorter than $n$. Is that right? Unfortunately this doesn't seem to lend itself to a proof by induction any more than the hypothesis in the OP... – TonyK Sep 18 '11 at 21:49
...It might help to investigate the following: Given $n$, precisely which subsets $S \subset\{1,2,...,n-1\}$ have the property that if the stick lengths consist of all members of $S$ together with any other lengths $\ge n$, then they can always be cut into sticks of length $1,2,...,n$? This might suggest the right inductive hypothesis. Can you tweak your program to crank these out? – TonyK Sep 18 '11 at 21:49
OK, I've done it myself -- see my separate answer. – TonyK Sep 19 '11 at 10:59
I have implemented the suggestion I made in a comment to joriki's answer. For $3 \le n \le 18$, I have generated a list of subsets $S \subset \{1,2,...,n-1\}$ with the property that if a set of sticks with total length $n(n+1)/2$ takes all the lengths in $S$, together with any other lengths ≥n, then the sticks can always be cut into sticks of length $1,2,...,n$. It is available at this link (it's about 900K).
I stared at it for a while, but nothing jumped out at me.
Edited to add: I have changed the program to output the sets in a more human-friendly order: part 1 (n = 1 to 17) and part 2 (n = 18).
-
I've uploaded your file to pastebin in two parts (due to 500kb cap): n=1 thru 17 and n=18. – anon Sep 19 '11 at 12:25
@anon: Thank you – TonyK Sep 19 '11 at 12:53
Great! I was actually going to do the same thing but hadn't gotten around to it. (The German version of "great minds think alike" is "two idiots, one thought", which is a bit easier to use without sounding conceited. :-) However, we might be able to use bigger gaps between the remaining target sticks and the remaining long sticks, since we can start by making the longest target sticks first, and then making each target stick will generate at most one shorter stick while increasing the gap between the remaining targets and the remaining long sticks by one. I'll have a go at that some time. – joriki Sep 24 '11 at 23:58
There are variations on the problem where the division is always possible and a proof using complete induction.
The first variation is: [..original problem ...] where at least one of the sticks is >=2n.
The second variation is: [..original problem ...] where it is allowed to glue two of the sticks together and break once at an arbitrary position.
Proof for the variations:
The case n=1 is trivial - one stick of length 1. Then we suppose the assumption holds for n and consider the case n+1. I.e. given a number of sticks of integral length >= n+1 whose lengths add to (n+1)(n+2)/2 - can these be divided into sticks of lengths 1,2,3,…,n+1?
Break off from one of the sticks a length n+1. In the first variation we use the stick with length >= 2n+2. This part will be the required stick of length n+1 in the solution. Because we broke off a length n+1, the remaining total length is (n+1)(n+2)/2 - (n+1) = n(n+1)/2. In the first variation the induction step is valid; in the second variation we may not yet use the induction step because the other part of the broken stick may now be smaller than n+1. In that case the second variation allows us to glue together this part with one of the other sticks. Now the induction step is valid and we break the rest of the collection of sticks into {1,..,n}, and have succeeded in the division into {1,..,n+1}.
-
Can you please clarify the induction? Are you assuming that all the sticks are of length >= n as the question stated? – Tomer Vromen Sep 5 '10 at 11:21
n−1 divides n(n+1)/2 if and only if n=2 or n=3. However, I do not think that the first case of your inductive step is valid. Suppose that there is a stick of length n+2. After breaking this stick into two sticks of lengths 1 and n+1, the assumption that all sticks have lengths ≥ n is no longer satisfied. – Tsuyoshi Ito Sep 5 '10 at 11:24
5
I'm not sure we are allowed to glue two sticks together. I can see that this is necessary for the inductive step, but I'd allow it only if we are going to break it again in the same spot. – Cristina Sep 6 '10 at 7:53
1
Yes, this is incorrect. You might glue it, but you have not ensured that the resulting split happens at the spot where you glued. – Aryabhata Sep 6 '10 at 7:58
1
Actually, doesn't the first variation still suffer from induction issues? The hypothesis there is 'the problem with all lengths >=n and one length >=2n' has a solution, but after the breaking step on the n+1 problem (using the stick of length 2n+2), we're no longer guaranteed that one stick has length >=2n, just that all the sticks have length >=n - but this is only the original problem, not the 'restricted' problem. – Steven Stadnicki Sep 16 '11 at 20:08
It is possible to brute force a list of possible solutions for every possible combination of sticks. Ignoring permutations... For n=1 there is a single solution. Ditto for n=2. For n=3 there are two solutions for one combination of sticks. For n=4 there are three solutions for two combinations of sticks. For n=5 I think there are ten solutions for four combinations of sticks. For n=6 I think there are 24 solutions. A completely baseless and uninformed search of the OEIS suggests this sequence might continue with 130 solutions for n=7 and 504 solutions for n=8. This trend suggests that there are increasingly more solutions for any such n, and a non-decreasing number of solutions for the "hardest" combination of sticks for a given n, and the pattern by which such solutions can be enumerated may suggest a proof of the original question.
-
I like the idea of solving using OEIS :-) You haven't explained what you are counting as a "solution" above. For example when n=3 there are two possible starting positions: (3,3) and (6). Depending on how you are counting solutions there is either 1 solution {1,2,3} (unordered), or maybe solutions should be counted by where each end stick came from - e.g. (3,3) has a solution where you break 2 off the first stick, and another solution where you break 1 off the second stick, etc. – tttppp Sep 27 '10 at 8:30
I am also confused by your suggestion that the number of solutions for the hardest combination is increasing. The link that Chandru1 posted contains a table of the number of solutions for the "hardest" combination. Note that for any even n there is a starting position which has only one solution (up to initial stick reordering and reordering of segments within each initial stick). This is the starting position where there are n/2 sticks of length n+1. – tttppp Sep 27 '10 at 8:36
I was considering unordered solutions for the first statement, but ordered solutions for the second. This was not intentional. – Sparr Sep 28 '10 at 3:06
This is not a solution, but a potential restatement of the problem.
Consider a single stick of length n(n+1)/2 that can be broken in any single place, leaving two pieces of length >=n. The resulting two sticks can be broken into 1...n easily, possibly in multiple ways. Can you show that for any position of that break that leaves a stick of length >=2n, there exists a subdivision of the stick such that the longer stick can be broken into any possible two pieces of length >=n? If so, I think that you may have the desired proof.
-
http://mathoverflow.net/questions/114605/solve-equation-with-matrix-variable
## Solve equation with matrix variable
I want to solve for a matrix $\Omega$ in the equation $\sum_k (\Omega + \Theta_k)^{-1} = Q$. The matrices $Q$ and $\Theta_k$, for $k=1,\ldots,K$, are known and positive definite. $\Omega$ also has to be positive definite. All matrices are large (a few thousand columns and rows). My questions are:
(1) Is there a closed-form solution? How do I simplify the sum of the inverse of two matrix sum?
(2) I'm OK to go for a numerical solution. But how do I define this problem? An optimization problem to minimize something like $f(\Omega) = ||\sum_k (\Omega + \Theta_k)^{-1} - Q||$? Do I need to minimize the frobenius norm, (just like minimizing the L-2 norm in a least square problem)? Considering the constraint that $\Omega$ is positive definite, can I solve it by semi-definite programming? How do I redefine the problem in a linear/semi-definite programming? I don't have much knowledge of linear programming. I would prefer a general gradient descent rather than LP. But I'm OK to use LP if I know how to do.
This problem comes from the estimation of inverse covariance matrix of multi-variate Gaussian distribution.
EDIT: Both $\Theta_k$ and $\Omega$ are sparse, if that helps.
-
What is known about $\Theta_{k}$? Are they, perhaps, of low rank? – Felix Goldberg Nov 27 at 0:03
There might not always be a positive definite solution; are you ok with negative solutions? – S. Sra Nov 27 at 1:00
1
How closed form would you want it to be? Even in the $1\times1$ case of real variables, this reduces to finding roots of polynomials, for which there is no closed form past degree 5. – Ralph Furmaniak Nov 27 at 1:43
Felix, $\Theta_k$ are not necessarily low rank, but they are sparse matrices. And I just edited the question for this information. – Wei Liu Nov 27 at 3:44
Hi Suvrit, by definition, $\Omega$ is the inverse covariance matrix of multivariate Gaussian. So I need it to be positive definite. – Wei Liu Nov 27 at 3:47
## 1 Answer
Here is a partial solution to the first question in the original post. Let's look at the equation \begin{equation}\tag{1} \sum\nolimits_{i=1}^m (X+ \Theta_i)^{-1} = Q. \end{equation}
Lemma (Existence). If all $\Theta_i$ are (strictly) positive definite, then (1) has a positive semidefinite solution only if $Q \preceq \sum_i \Theta_i^{-1}$.
Proof. Suppose $Q=\sum_i \Theta_i^{-1}$, then clearly $X=0$ is the solution. Since, $(X+\Theta_i)^{-1} \preceq \Theta_i^{-1}$ for any $X \succeq 0$, on summing up we see that $Q \preceq \sum_i \Theta_i^{-1}$ must hold. Moreover, in this case if there is a solution, then it must be strictly positive definite. A little extra argument shows that in this case, there must exist a unique positive definite solution.
This lemma shows that in case $Q$ does not satisfy the requirement, the original equation has no solution, and it might be preferable to minimize $\|\sum_i (X+\Theta_i)^{-1}-Q\|_F^2$ instead.
Lemma (Bounds). Any feasible solution to (1) must lie in the set $\Omega := [0, mQ^{-1}]$.
Proof. The lower bound $X \succeq 0$ is obvious. Following an argument similar to the previous lemma, we see that $Q=\sum_i (X+\Theta_i)^{-1} \preceq \sum_i X^{-1}$, which implies that $m X^{-1} \succeq Q$, or equivalently, $X \preceq m Q^{-1}$.
Idea Now that we have a compact set $\Omega$, we just need to setup a strictly contractive nonlinear map $G : \Omega \to \Omega$. I have not proved strict contraction of the map below, but numerically it seems to work. As one might suspect from the above lemmas, the rate of convergence depends on $\|Q-\sum_i \Theta_i^{-1}\|$, so that for small values of this quantity, the iteration converges more slowly.
Suppose, that $X \succ 0$. Denote by $S^{++}$ the set of $n\times n$ strictly positive definite matrices. Then, define the nonlinear map $\mathcal{G} : S^{++} \to S^{++}$ as \begin{equation*} \mathcal{G} = X \mapsto X^{1/2}\left(\sum\nolimits_{i=1}^m Q^{-1/2}(X+\Theta_i)^{-1}Q^{-1/2}\right)X^{1/2}. \end{equation*}
TODO If I get time, I might think about proving that the above map generates convergent solutions. Or one can come up with some other fixed point iteration.
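(Added illustration, not part of the original answer: a quick numerical sanity check of this fixed-point iteration in R on a small synthetic instance, with the matrix square root computed by eigendecomposition; the sizes, seed and iteration count are arbitrary.)

```
## Hypothetical R check of the fixed-point map G(X) on a small synthetic instance.
msqrt <- function(A){ e <- eigen(A, symmetric=TRUE)
  e$vectors %*% diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors) }
set.seed(1); n <- 4; m <- 3
Theta <- lapply(1:m, function(i){ M <- matrix(rnorm(n*n), n); crossprod(M) + diag(n) })
Xtrue <- { M <- matrix(rnorm(n*n), n); crossprod(M) + diag(n) }
Q <- Reduce(`+`, lapply(Theta, function(Th) solve(Xtrue + Th)))  # consistent right-hand side
Qmh <- solve(msqrt(Q))                                           # Q^{-1/2}
X <- diag(n)
for(it in 1:500){
  S <- Reduce(`+`, lapply(Theta, function(Th) Qmh %*% solve(X + Th) %*% Qmh))
  Xh <- msqrt(X)
  X <- Xh %*% S %*% Xh                                           # X <- G(X)
}
max(abs(X - Xtrue))   # small if the iteration converges to the constructed solution
```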
-
1
I think that the technique in doi:10.1088/0951-7715/21/4/011 and doi:10.1016/j.laa.2008.10.034 can be applied to this problem. Short summary: composing sums, inversions and similarities should yield a contraction in the Finsler logarithmic metric on positive definite matrices; therefore, your matrix equation can be rewritten as $X=f(X)$ with $f$ a Finsler contraction; bingo. – Federico Poloni Nov 27 at 9:08
I would have written a Thompson metric contraction, but the presence of the $X^{1/2}$ in my iteration prevents that. Perhaps the papers that you cite still have a way around that (Also, it seems to not be the case that my iteration actually maps $\Omega \to \Omega$, but that is not so problematic, as we can always increase $\Omega$ in size until this gets ensured). But nevertheless, some version of the above idea can be made to work without too much difficulty (from its appearance it might be a contraction in some other less structured metric than the Finslerian class) – S. Sra Nov 27 at 18:06
Thanks Suvrit. The answer is so helpful. I'm not sure the Q satisfy the condition you gave, but at least the fixed point iteration gives me a starting point. It looks there is a 'nonlinear semidefinite problem' but it is far less explored. I'll give your solution a try, with the sparsity property of my matrices in mind. – Wei Liu Dec 7 at 17:44
@Wei -- as I showed above, if the $Q$ does not satisfy the conditions I mentioned, then your nonlinear equation has no solution, and you should rather solve a minimization problem---perhaps a technique similar to what I wrote above applies even here! – S. Sra Dec 10 at 5:28
http://math.stackexchange.com/questions/236625/find-groups-g-h-and-a-surjective-homomorphism-alpha-g-to-h-such-that/236706
# Find groups $G$, $H$ and a surjective homomorphism $\alpha: G\to H$ such that $\alpha(Z(G)) \neq Z(H)$
Question:
Find groups $G$ and $H$ and a surjective homomorphism $\alpha: G \to H$ such that $\alpha(Z(G)) \neq Z(H)$
My answer:
Let $G$ and $H$ both be cyclic groups of order 4.
Define $\alpha: G \to H$ such that $g \in G \to I_h \in H$.
So now the center of $G$, $Z(G)$ is all of $G$ and it is getting mapped by $\alpha$ to $I_h$. $Z(H) =$ all of $H = \{I_h, a, a^2, a^3\}$ which is not equal to $\{I_h\}$.
Does that look correct?
* EDIT: Revised answer, does this look correct? *
Let $G$ be the dihedral group or order 8:
$\{I_G, a, a^2, a^3, x, ax, a^2x, a^3x \}$
Let $H$ by the cyclic group of order $8$.
$\{I_G, b, b^2, b^3, b^4, b^5, b^6, b^7 \}$
Define a surjective homomorphism $\alpha:G \to H$ as
$I_G \to I_H$
$a\to b$
$a^2\to b^2$
$a^3\to b^3$
$x\to b^4$
$ax\to b^5$
$a^2x\to b^6$
$a^3x\to b^7$
Now $Z(G) = \{I_G, a^2\}$ and $Z(H) = H$.
Therefore, $\alpha(Z(G)) = \{I_H, b^2\} \neq H = Z(H)$
Does that look correct?
-
That's a long way from being surjective. – Chris Eagle Nov 13 '12 at 19:35
1
Your map $\alpha$ is not surjective. Also, $I_h$ is not a sensible notation for the identity of $H$. You could use $I_H$, though the standard would be $e_H$ or $\text{id}_H$. Finally, as a hint, you should be able to prove that if $G$ and $H$ are abelian and $\alpha$ is surjective, then $\alpha(Z(G)) = Z(H)$ (immediate consequence of the defintions). So your counterexample will have to come from outside the realm of abelian groups. – Michael Joyce Nov 13 '12 at 19:35
I don't think your approach is right. Because the groups you took are both finite and abelian and so every surjective homomorphism is necessarily injective and then your desire result in the title would not be concluded. – Babak S. Nov 13 '12 at 19:37
1
@sonicboom There are no surjective homomorphisms between these two groups. If there were, such a map would also be injective (since the groups are the same order), and thus an isomorphism. But these groups are not isomorphic. – Brett Frankel Nov 13 '12 at 20:40
2
$\alpha$ is surjective but it is not a homomorphism. – Martino Nov 14 '12 at 9:02
## 5 Answers
There are plenty of examples:
• The determinant map $\text{GL}_n(\Bbb R)\rightarrow{\Bbb R}^\times$ when $n$ is even,
• The sign of the permutation map $S_n\rightarrow\{\pm 1\}$
• The orientation map $D_n\rightarrow\{\pm 1\}$ which to a plane isometry $\phi$ in the dihedral group $D_n$ assigns $1$ if and only if $\phi$ preserves orientation.
You should really work out these examples and convince yourself that they satisfy the condition requested.
-
No, your homomorphism is not surjective, and so your answer is not correct. In fact, every group of order 4 is abelian, so any surjective homomorphism between groups of order 4 will take the center (the whole group) to the center (the whole group).
-
As other people have already pointed out you're not going to get there with abelian groups. But hopefully you can get there with nonabelian groups. You might want to think about groups that don't have a lot of normal subgroups. In particular I would suggest looking at $S_5$, the symmetric group on $5$ elements.
-
Hint: The free group on at least $2$ letters has trivial center.
-
The easiest example is probably $\alpha: S_3\times C_2\rightarrow C_2$, where you kill the subgroup $A_3\times C_2$.
-
2
Why not just $S_3 \to C_2$? – Derek Holt Nov 13 '12 at 21:39
Sorry, I was thinking of the harder problem to show the center is not a fully invariant subgroup (to which this is the smallest counterexample I think), and is finished by letting $C_2=(1,2)\in S_3$. – Steve D Nov 13 '12 at 23:50
$S_n$ with $n \neq 2$ has no center so I don't see how this can be valid? – sonicboom Nov 15 '12 at 2:29
@sonicboom: DO you mean in my answer? Never did I use the group $S_n$ on its own. – Steve D Nov 15 '12 at 19:25
http://cs.stackexchange.com/questions/tagged/algorithm-design
# Tagged Questions
The algorithm-design tag has no wiki summary.
1answer
51 views
### How to repeat a mapreduce process? [closed]
How can I repeat the map and reduce process and feed the output of reduce into map function (in the next round)? My second question is how to define a global variable with is accessible among all map ...
1answer
63 views
### How partitioning in map-reduce work?
Assume a map-reduce program has $m$ mappers and $n$ reducers ($m > n$). The output of each mapper is partitioned according to the key value and all records having the same key value go into the ...
0answers
40 views
### Shortest path in unipathic graph [duplicate]
Possible Duplicate: Find shortest paths in a weighed unipathic graph A directed graph $G = (V,E)$ is unipathic if for any two vertices $u,v \in V$ there is at most one simple path from ...
4answers
778 views
### What is the novelty in MapReduce?
A few years ago, MapReduce was hailed as revolution of distributed programming. There have also been critics but by and large there was an enthusiastic hype. It even got patented! [1] The name is ...
1answer
85 views
### Can bottom-up architectures be effectively programmed in top-down paradigms?
The subsumption architecture, proposed by Rodney Brooks in 1986, is a "bottom-up" approach, in which robots are designed using simple hierarchical models. These models build upon and subsume the ...
2answers
444 views
### When can I use dynamic programming to reduce the time complexity of my recursive algorithm?
Dynamic programming can reduce the time needed to perform a recursive algorithm. I know that dynamic programming can help reduce the time complexity of algorithms. Are the general conditions such that ...
1answer
115 views
### Preprocess an array for counting an element in a slice (reduction to RMQ?)
Given an array $a_1,\ldots,a_n$ of natural numbers $\leq k$, where $k$ is a constant, I want to answer in $O(1)$ queries of the form: "how many times does $m$ appear in the array between indices $i$ ...
1answer
159 views
### Efficiently selecting the median and elements to its left and right
Suppose we have a set $S = \{ a_1,a_2,a_3,\ldots , a_N \}$ of $N$ coders. Each Coders has rating $R_i$ and the number of gold medals $E_i$, they had won so far. A Software Company wants to hire ...
http://pgraycode.wordpress.com/2009/09/03/looking-for-the-lost-harmony/?like=1&_wpnonce=07995c002f
# Code and Bugs
Coding things
### Looking for the lost harmony
#### by Mithrandir
In fact, this is just a first successful attempt at applying the Harmony Search to a toy problem.
I began by creating a library for random numbers, taking into account the things gathered from the Monte Carlo articles and other sources of inspiration. Though it’s not fully tested yet, I can say for sure that it is random enough for this article. And the interface gives no detail about the implementation, which is just a good thing. Here are the function headers:
``` /*! initializes the random number generator with a specific seed */
inline void init_seed(int seed);
/*! initializes the random number generator with a time specific seed */
inline void init();
/*! gets a random double value in [0, 1)*/
double random_double_unit();
/*! gets a random double value in [0, M)*/
double random_double(double M);
/*! gets a random int value in [0, M)*/
int random_int(double M);
/*! gets a random double in range [a, b)*/
double random_range_double(double a, double b);
/*! gets a random int in range [a, b)*/
int random_range_int(int a, int b);
```
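For completeness, here is one minimal way such an interface could sit on top of the standard rand(). This is only a sketch of a possible implementation (the real one is deliberately not shown in the post, and the inline qualifiers from the header are dropped here), but it is enough to make the later snippets self-contained:
```
#include <stdlib.h>
#include <time.h>

/* Sketch of possible implementations for the interface above. */
void init_seed(int seed) { srand((unsigned)seed); }
void init(void)          { srand((unsigned)time(NULL)); }

/* random double in [0, 1) */
double random_double_unit() { return rand() / (RAND_MAX + 1.0); }
/* random double in [0, M) */
double random_double(double M) { return M * random_double_unit(); }
/* random int in [0, M) */
int random_int(double M) { return (int)(M * random_double_unit()); }
/* random double in [a, b) */
double random_range_double(double a, double b) { return a + (b - a) * random_double_unit(); }
/* random int in [a, b) */
int random_range_int(int a, int b) { return a + (int)((b - a) * random_double_unit()); }
```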
Then I took the toy problem into consideration. Not wanting to use one of those hardcore examples (for debugging purposes), I've chosen the following: let $f:\{1,2,3,4,5\}^3 \to \mathbb{N}$ be the function $f(a, b, c) = |a-3| + |b-4| + |c-1|$. We are interested in its minimum value.
As usual, we have to define some parameters, datatypes and functions before delving into the code.
``` #include <stdio.h>
#include <stdlib.h> /* qsort, abs */
#include "random.h" /* the random number interface above (header name assumed) */
/* Harmony Search parameters */
#define K 10 /* harmony memory size */
#define VS 3 /* length of a solution vector */
#define PHMCR 0.9 /* harmony memory considering rate */
#define PPAR 0.1 /* pitch adjusting rate */
#define ITCount 10 /* number of improvisation steps */
#define DBGE 1 /* enable debug printing */
#define DBGCount 5 /* print the memory every DBGCount iterations */
typedef struct HVector{
int vector[VS];
int value;
} HVector;
HVector memory[K];
/* the objective function; its minimum, 0, is reached at (3, 4, 1) */
int value(int vector[])
{
return abs(vector[0]-3) + abs(vector[1] - 4) + abs(vector[2] - 1);
}
/* qsort comparator (not shown in the original post): ascending objective value */
int cmp(const void *a, const void *b)
{
return ((const HVector *)a)->value - ((const HVector *)b)->value;
}
```
Now, we can successfully write the rest of the code.
``` void initialize_memory()
{
int i, j;
for (i = 0; i < K; i++){//each vector
for (j = 0; j < VS; j++)
memory[i].vector[j] = random_range_int(1, 5);
memory[i].value = value(memory[i].vector);
}
qsort(memory, K, sizeof(memory[0]), cmp);
}
void print_memory()
{
int i, j;
printf("Memory:\n");
for (i = 0; i < K; i++){
printf("\t");
for (j = 0; j < VS; j++)
printf("%d ", memory[i].vector[j]);
printf(" [%d]\n", memory[i].value);
}
}
void evolve_once()
{
int newvector[VS];
int i, val, j, k;
float p;
for (i = 0; i < VS; i++){
p = random_double_unit();
if (p < PHMCR){
/* memory consideration: reuse component i of a randomly chosen stored harmony */
newvector[i] = memory[random_int(K)].vector[i];
p = random_double_unit();
if (p < PPAR){
/* pitch adjustment: nudge the chosen component up or down by one */
p = random_double_unit();
if (p < 0.5 && newvector[i] > 1)
newvector[i]--;
if (p > 0.5 && newvector[i] < 4)
newvector[i]++;
}
} else {
/* random selection: a fresh component (random_range_int(1, 5) yields 1..4, since the range is half-open) */
newvector[i] = random_range_int(1, 5);
}
}
val = value(newvector);
/* if the new harmony beats the worst one, insert it so that the memory stays sorted */
if (val < memory[K-1].value){
for (j = K-1; j >= 0; j--)
if (val > memory[j].value)
break;
for (k = K-2; k > j; k--){
for (i = 0; i < VS; i++)
memory[k+1].vector[i] = memory[k].vector[i];
memory[k+1].value = memory[k].value;
}
for (i = 0; i < VS; i++)
memory[j+1].vector[i] = newvector[i];
memory[j+1].value = val;
}
}
void evolve()
{
int i;
for (i = 0; i < ITCount; i++){
evolve_once();
if (DBGE && i && i % DBGCount == 0){
printf("Current status %d/%d\n", i/DBGCount, ITCount/DBGCount);
print_memory();
}
}
}
int main()
{
init();
initialize_memory();
print_memory();
printf("Starting evolution..\n");
evolve();
printf("Done\n");
print_memory();
return 0;
}
```
Here’s the result:
``` $ make test
time ./main
Memory:
3 4 2 [1]
2 4 1 [1]
2 3 1 [2]
3 3 4 [4]
1 2 1 [4]
3 2 4 [5]
1 2 2 [5]
3 1 4 [6]
1 2 3 [6]
1 2 4 [7]
Starting evolution..
Current status 1/2
Memory:
2 4 1 [1]
3 4 2 [1]
2 4 1 [1]
3 2 1 [2]
2 3 1 [2]
3 1 1 [3]
3 2 2 [3]
3 2 3 [4]
3 3 4 [4]
1 2 1 [4]
Done
Memory:
3 4 1 [0]
2 4 1 [1]
3 4 2 [1]
2 4 1 [1]
2 4 2 [2]
3 2 1 [2]
2 3 1 [2]
2 3 2 [3]
3 1 1 [3]
3 2 2 [3]
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+139minor)pagefaults 0swaps
```
I didn't implement a library for the Harmony Search because some parameters of the problem need to be interleaved with the procedures of the search. Of course, we could do that via function calls, but sometimes there would be too many of them. And I was feeling lazy right now.
Now, seeing that code snippets in WordPress are a headache, I will turn my attention to something new. Expect another article soon.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933923065662384, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/71415/list
|
# Manifolds and Polynomials
Given a compact smooth manifold $M \subset R^k$, there is a polynomial $f\in R[x_1,\ldots,x_n]$ such that the zero set of $f$ is diffeomorphic to $M$. Can the coefficients of $f$ be perturbed slightly to a polynomial $g \in Q[x_1,\ldots,x_n]$ such that the zero set of $g$ is diffeotopic to $M$? Are there conditions on the homology or homotopy of $M$ such that such a perturbation process is possible / not possible? What happens if $Q$ is replaced by an arbitrary number field $K$?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482645988464355, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/294973/is-the-quotient-of-the-injective-envelope-a-torsion
|
# Is the quotient of the injective envelope a torsion?
We know that every module $M$ is embedded in an injective module $D$. Is it true that the module $D/M$ is torsion?
-
## 1 Answer
No, let $A = \frac{k[x, y]}{(x^2, y^2)}$ where $k$ is a field of characteristic $2$. This is a Frobenius algebra, hence it is self-injective, which means that $A$ is injective as a module over itself. As a module, the socle of $A$ is the principal ideal generated by $xy \in A$. This is $1$-dimensional, so $A$ is indecomposable and hence the injective envelope of its socle.
The quotient, $\frac{k[x, y]}{(x, y)^2}$, of $A$ by its socle is not a torsion module.
Remember: For an element $m \in M$ in a module to be torsion we must have $rm = 0$ for some nonzero $r \in A$ which is, additionally, not a zero-divisor.
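In fact, in this example every non-unit of $A$ lies in the maximal ideal $(x, y)$ and is a zero-divisor, since it is killed by multiplication with $xy$ (for instance $x \cdot xy = x^2 y = 0$). So the non-zero-divisors of $A$ are exactly the units, and a unit never annihilates a nonzero element; hence no nonzero $A$-module is torsion, and in particular the quotient above is not.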
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230535626411438, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/78223-exponential-equations-print.html
|
Exponential Equations
• March 11th 2009, 03:50 PM
hotblonde
Exponential Equations
Can anyone please help me figure out how to solve the following problem? I believe you take the natural log of both sides, but I'm still left with 2 x's.
Solve the problem. Round answers to the nearest hundredth.
70.47x = 30.58x
• March 11th 2009, 03:59 PM
skeeter
Quote:
Originally Posted by hotblonde
Can anyone please help me figure out how to solve the following problem? I believe you take the natural log of both sides, but I'm still left with 2 x's.
Solve the problem. Round answers to the nearest hundredth.
70.47x = 30.58x
your equation is unclear ... do you mean $70.47^x = 30.58^x$ ???
if so, then x = 0 is the only solution.
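(Taking the natural log of both sides, as you suggested, makes this explicit: $x\ln 70.47 = x\ln 30.58$, so $x(\ln 70.47 - \ln 30.58) = 0$, and since $\ln 70.47 \neq \ln 30.58$ the only solution is $x = 0$.)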
• March 11th 2009, 04:19 PM
josh_amsterdam
Quote:
Originally Posted by skeeter
your equation is unclear ... do you mean $70.47^x = 30.58^x$ ???
if so, then x = 0 is the only solution.
Yeah, and otherwise the log has nothing to do with it. :p
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299221038818359, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4214035
|
Physics Forums
## Graphical to Mathematical representation of changing the order of some elements
I have a question that is a little hard to explain, since I don't know the name of this method, but I'll try my best; if anyone knows the name, please do tell me.
So let's say we have three numbers, 1 2 3 (in this order)
and we have a container for these numbers: C123
and we have some operations: O12, O13 and O23
each of these operations acts on those numbers, changing their positions.
For example O12 will change the position of the first and second elements.
So let's say: O12 . C123 will equal: C213
And if we want to find out what operations to use, when we have the original container and the target container, we can do it easily graphically.
For example:
Original: C123
Target: C231
This can be done graphically:
Each point where two lines intersect represents the operation between those two numbers. And the order is important, since these operations do not commute.
So that's the same as: O12 . O23 . C123 = C231
One last example:
The container doesn't need to hold all of the numbers of the three number-space
Original: C12
Target: C31
Or: O23 . O12 . C12 = C31
So graphically it's easy to find out the operations for any N number-space.
But how do we express that in a mathematical general expression?
I'm not exactly sure what you're asking. These are just basic permutations, so cycle notation should communicate everything that you need.
Thanks for your reply, I'm going to read about that. I'm trying to find out, mathematically, the permutations needed for any N-number group, knowing only the original and the final state. Ideally something of the format: O1i . O2j . C12 = Cij, but for an N-number group instead of just this small example, which might not even be correct.
Quote by arsenal_51 Thanks for your reply, i'm going to read about that. I'm trying to find out the permutations needed to do mathematically for any N number group, knowing only the original and the final state. Ideally something of the format: O1i . O2j . C12 = Cij But for a N number group instead of just a this small example that might not even be correct.
Your notation is somewhat unconventional, but I think you mean the following: Given an ordered set of the first N counting numbers, i.e. ##S = (s_1\,s_2\,s_3\,\ldots\,s_N)##, you want to find ##X## "swappings" of the form ##(f_1\,t_1),\ldots,(f_X\,t_X)## such that their product will take the ordered set ##(1\,2\,3\,\ldots\,N)## to ##S##, i.e.
$$(1\,2\,3\,\ldots\,N)\cdot(f_1\,t_1)\cdot\ldots \cdot (f_X\,t_X)=(s_1\,s_2\,s_3\,\ldots\,s_N).$$
This is possible, and you can construct the pairs ##(f_k\,t_k)## quite easily. I'll just hint by saying this much: choose ##(f_1\,t_1)## so that it swaps the elements ##1## and ##s_N##, and then let ##(f_2\,t_2)=(1\,N)##. This means that those two swapping put ##s_N## at position ##N##. In the next step you place ##s_{N-1}## into position ##N-1## etc. till you end up with the ordered set ##S## you wanted.
This way you may need ##X=2N-1## swappings, and it's actually possible to get from ##(1\,2\,3\,\ldots\,N)## to any ##(s_1\,s_2\,s_3\,\ldots\,s_N)## with only ##N-1## swappings, but not as easily as by my method (one element at a time goes into position ##1## and then to its proper place).
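As a concrete illustration, here is a small C sketch of the standard selection-sort-style construction that reaches any target arrangement with at most N-1 swaps (this is not necessarily the exact recipe above, and the target array is just an assumed example):
```
#include <stdio.h>

#define N 5

static void swap(int *a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

int main(void)
{
    int cur[N]    = {1, 2, 3, 4, 5};   /* start: the identity arrangement */
    int target[N] = {3, 5, 1, 2, 4};   /* assumed example target S */

    for (int pos = 0; pos < N; pos++) {
        if (cur[pos] == target[pos])
            continue;                   /* already in place */
        int j = pos + 1;
        while (cur[j] != target[pos])   /* the wanted element is further to the right */
            j++;
        printf("swap positions (%d %d)\n", pos + 1, j + 1);
        swap(cur, pos, j);              /* at most N-1 swaps in total */
    }
    return 0;
}
```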
Thank you, I will try out your method; it seems pretty clear. After I try that out, I would like to check the other, more efficient method you mentioned, where you only need N - 1 swapping operations. Do you know where I can read more about that method, or what it is called? Thanks again.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8848393559455872, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/magnetic-monopoles+electromagnetism
|
# Tagged Questions
1 answer · 85 views
### What is the action for an electromagnetic field if including magnetic charge
Recently, I have tried to write down an action for an electromagnetic field with magnetic charge and quantize it. But it is not as easy as it seems. Does anyone know anything or think of anything like ...
2 answers · 224 views
### What happens to the magnetic field in this case?
As far as I know, it's possible to create a radially polarised ring magnet, where one pole is on the inside, and the field lines cross the circumference at right angles. So imagine if I made one ...
2 answers · 288 views
### Basic question on magnetism regarding north and south pole
I am currently busy with some magnetism and quite shockingly (to me at least) I haven't yet read anything about the difference between the north pole and the south pole of a magnet. Before I started ...
2 answers · 98 views
### Can the poles of a magnet have varying intensity?
In re-reading Is it possible to separate the poles of a magnet? (amongst others) the question mentioned in the title here just occurred to me. It may not be possible, at our current levels of ...
2 answers · 302 views
### Can you put a magnetic ball into a hollow magnetic sphere?
if all magnets have to have two poles(one north one south), is it possible to construct a hollow sphere where the inside face of the sphere was one pole, and the outside face another pole? is it also ...
0 answers · 26 views
### Can we create a magnet with only one Pole? [duplicate]
Possible Duplicate: How to make a monopole magnet? No matter how many times you cut a magnet, we always end up with 2 poles. Is there any possibility of creating a monopole magnet?
0 answers · 35 views
### Does the universe appear to be a monopole to a ferromagnetic object within a solenoid?
Just what the title states, please. To a ferromagnetic object placed at the centre of a solenoid (E.g. car-starter), does it appear that the universe around it is a monopole? p.s. Preferably in ...
0 answers · 104 views
### Does a rotating magnetic monopole have electric and magnetic moment in classical view?
Would a rotating sphere of magnetic monopole charge have electric moment ? In a duality transformation E->B.c etc. how is the magnetic moment translated m = I.S -> ? Mel = d/dt(-Qmag/c).S ? A more ...
1 answer · 139 views
### Why are electric charges allowed to be so light but magnetic monopoles have to be so heavy?
My question is in two parts. What is the origin of the electric field from an electric charge, and why can the electron have such a small mass? While on the other hand, for a magnetic monopole to create a ...
1 answer · 186 views
### Why doesn't the magnetic monopole found in spin ice modify Maxwell's equations?
The magnetic monopole predicted by Dirac nearly a century ago was found in spin ice as a quasi-particle(2). My question is: why doesn't the magnetic monopole found in spin ice modify Maxwell's equations? (I ...
1 answer · 176 views
### Gravimagnetic monopole and General relativity
Review and historical background: Gravitomagnetism (GM) refers to a set of formal analogies between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein ...
1 answer · 218 views
### Dirac's quantization rule
I first recall the Dirac's quantization rule, derived under the hypothesis that there would exit somewhere a magnetic charge: $\frac{gq}{4\pi} = \frac{n\hbar}{2}$ with $n$ natural. I am wondering ...
2 answers · 434 views
### Does existence of magnetic monopole break covariant form of Maxwell’s equations for potentials?
Absence of magnetic charges is reflected in one of Maxwell's fundamental equations: $$\operatorname{div} \vec B = 0 \text{ (1).}$$ This equation allows us to introducte concept of vector potential: ...
2 answers · 165 views
### Orientation of Magnetic Dipoles
Does a magnetic dipole (in a permanent magnet) tend to align with the B-field or with the H-field? The current loop (Ampère) model of the magnetic dipole suggests the former, while the ...
1 answer · 193 views
### Effect of introducing magnetic charge on use of vector potential
It is well known that Maxwell equations can be made symmetric w.r.t. $E$ and $B$ by introducing non-zero magnetic charge density/flux. In this case we have $div B = \rho_m$, where $\rho_m$ is a ...
4 answers · 843 views
### What is the magnetic field inside hollow ball of magnets
Setup: we have a large number of thin magnets shaped such that we can place them side by side and eventually form a hollow ball. The ball we construct will have the north poles of all of the magnets ...
5 answers · 450 views
### How would I go about detecting monopoles?
A question needed for a "solid" sci-fi author: How to detect a strong magnetic monopole? (yes, I know no such thing is to be found on Earth). Think of basic construction details, principles of ...
5 answers · 1k views
### Why do physicists believe that there exist magnetic monopoles?
One thing I've heard stated many times is that "most" or "many" physicists believe that, despite the fact that they have not been observed, there are such things as magnetic monopoles. However, I've ...
8 answers · 2k views
### Is it possible to separate the poles of a magnet?
It might seem common sense that when we split a magnet we get 2 magnets with their own N-S poles. But somehow, I find it hard to accept this fact.(Which I now know is stated by Gauss's Law) I have ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279049634933472, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/179024/why-does-a-langle-m-1-m-2-m-3-rangle-lm-1-cap-lm-2-ne-lm-3-isn?answertab=votes
|
# Why isn't A=$\{\langle M_1,M_2,M_3 \rangle : L(M_1) \cap L(M_2) \ne L(M_3)\}$ in $RE$?
I'm trying to figure out what's wrong with the following Turing machine, which supposedly shows that the language
A=$\{\langle M_1,M_2,M_3 \rangle : L(M_1) \cap L(M_2) \ne L(M_3)\}$ is in $RE$.
I said that we can build a Turing machine that runs all inputs in lexicographic order, in parallel:
For an input $x$:
we run it on $M_1$ and $M_2$; if one of them rejects the input, we skip this input and don't use it.
If both of them accept:
we run $x$ on $M_3$; if it rejects, we return $true$; if it accepts, we skip this input and don't use it.
If we never halt (both $M_1$ and $M_2$ loop while checking $x$, or one of them accepts and the other is in an infinite loop, or $M_3$ loops while checking $x$), then the whole machine is in an infinite loop.
What is not correct? I accept if I reach an $x$ which satisfies the condition, or I am in an infinite loop.
-
## 1 Answer
Suppose $M_1$ and $M_2$ are two Turing machines that do not halt on anything. Suppose $M_3$ is a Turing machine that halts (and accepts) only on some $x$.
If you run your algorithm to test $\langle M_1, M_2, M_3 \rangle$, at some point it runs $M_1$ and $M_2$ on $x$. Here your algorithm would not halt, because $M_1$ and $M_2$ do not halt. Since $L(M_1) = L(M_2) = \emptyset$ and $L(M_3) = \{x\}$, you cannot skip this step, because $x$ is the only place where the languages differ.
The essential problem is the above, together with the last line where you said "I accept if I reached an x which satisfies the condition or I'm in infinite loop". How do you ever know that you are in an infinite loop? After running some Turing machine for 1000 steps, how do you know that it won't halt on the 1001st step?
However, the above only shows that your particular algorithm fails to prove that $A$ is RE. It does not show that $A$ is not RE.
To prove that $A$ is not RE, one possible method is to reduce a language known to be not RE to $A$. Let $K$ denote the halting problem, which is RE but not computable; hence its complement $\bar{K}$ is not RE. Now reduce $\bar{K}$ to $A$. (I leave the details to you.) If $A$ were RE, this reduction would prove that $\bar{K}$ is RE. Contradiction.
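One way to fill in the detail: given an instance $\langle M, x \rangle$, map it to $\langle M_1, M_2, M_3 \rangle$ where $M_1 = M_2$ are machines that accept every input, and $M_3$ is a machine that, on any input $y$, simulates $M$ on $x$ and accepts $y$ if that simulation halts. Then $L(M_1) \cap L(M_2) = \Sigma^*$, while $L(M_3) = \Sigma^*$ if $M$ halts on $x$ and $L(M_3) = \emptyset$ otherwise. So $\langle M_1, M_2, M_3 \rangle \in A$ exactly when $M$ does not halt on $x$, i.e. exactly when $\langle M, x \rangle \in \bar{K}$.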
-
I wanted to show that the language is in $RE$, but I figured out that it isn't, so I showed my algorithm and asked why it is not correct: I believed it was correct and that it showed the language is in RE, because I had exhibited an RE Turing machine; but now I understand my mistakes. Regarding the "How do you know that you are in an infinite loop": I don't need to know that; if it accepts suitable inputs and loops forever on unsuitable ones, that's fine for me. But I can see that it doesn't return true for your example, which it has to. Thanks a lot! – Joni Aug 5 '12 at 7:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9653810262680054, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/182101/appropriate-notation-equiv-versus/182120
|
# Appropriate Notation: $\equiv$ versus $:=$
With respect to assignments/definitions, when is it appropriate to use $\equiv$ as in
$$M \equiv \max\{b_1, b_2, \dots, b_n\}$$
which I encountered in my analysis textbook as opposed to the "colon equals" sign, where this example is taken from Terence Tao's blog :
$$S(x, \alpha):= \sum_{p\le x} e(\alpha p)$$
Is it user-background dependent, or are there certain circumstances in which one is more appropriate than the other?
-
4
The colon-equals should be used, if at all, only for definitions. I don’t use it; I think that it’s always pretty clear from context when an equals sign is a definition and when it’s a statement. (On the very rare occasions when I use a special symbol, I use $\triangleq$.) – Brian M. Scott Aug 13 '12 at 15:20
Thanks for weighing in Prof. @BrianM.Scott – JJR Aug 13 '12 at 15:25
As the book of Walter Rudin teaches, the best idea is always to write in the correct language what you are doing. Too many symbols mean a waste of time. Use words whenever possible. – Siminore Aug 13 '12 at 15:41
2
@BrianM.Scott: The problem is, sometimes you're not sure if the author is making a local definition, or just using a symbol you're not familiar with (or a previous local definition you have since forgotten) and affirming an equality, which can be quite confusing. I sometimes wish people used $:=$ more often. In any event, the symbol is also used for variable assignment in Pascal and pseudocode (and maybe others, I'm not much of a programmer). – tomasz Aug 13 '12 at 16:02
1
– JJR Aug 14 '12 at 12:33
## 6 Answers
An "equality by definition" is a directed mental operation, so it is nonsymmetric to begin with. It's only natural to express such an equality by a nonsymmetric symbol such as $:=\, .\$ Seeing a formula like $e=\lim_{n\to\infty}\left(1+{1\over n}\right)^n$ for the first time a naive reader would look for an $e$ on the foregoing pages in the hope that it would then become immediately clear why such a formula should be true.
On the other hand, symbols like $=$, $\equiv$, $\sim$ and the like stand for symmetric relations between predeclared mathematical objects or variables. The symbol $\equiv$ is used , e.g., in elementary number theory for a "weakened" equality (equality modulo some given $m$), and in analysis for a "universally valid" equality: An "identity" like $\cos^2 x+\sin^2 x\equiv1$ is not meant to define a solution set (like $x^2-5x+6=0$); instead, it is expressing the idea of "equal for all $x$ under discussion".
-
2
"enforced equality" - i.e., an identity as opposed to a mere equation. – J. M. Aug 13 '12 at 16:10
I guess the latter is what people have in mind when using $\equiv$ for definitions: The name is, by definition, equivalent to the expression in all and any circumstances. – celtschk Aug 13 '12 at 16:58
@ J.M. and @celtschk: My schoolboy's English left me alone: What I actually meant was "universal". See my edit. – Christian Blatter Aug 13 '12 at 17:14
Which doesn't affect the statement in my comment. – celtschk Aug 13 '12 at 17:19
Whether "universal" or "enforced", it still works out. I was just pointing out the more customary synonym. :) – J. M. Aug 14 '12 at 3:49
$x:=y$ means $x$ is defined to be $y$.
The notation $\equiv$ is also (sometimes) used to mean that, but it also has other uses, such as $4\equiv0$ (mod 2).
I encountered $:=$ a lot more than $\equiv$, and it is my personal favourite.
There is also the notation $\overset{\Delta}{=}$ to mean "equal by definition".
By the way, some people also use the notation $x=:y$ to mean that $y$ is defined to be $x$.
-
6
Yeah, $:=$ and $=:$ have the benefit of being clearly asymmetrical, so there's no room for confusion and you can use them in reverse when needed... – tomasz Aug 13 '12 at 15:59
1
@tomasz - I agree, that is one of the reasons this is my favourite. – Belgi Aug 13 '12 at 16:05
It's entirely up to the whim of the author. Other symbols that can mean the same thing are $\triangleq$ and $=_{def}$. I think that only a minority of authors use any special notation, however; the majority just use a regular equals sign.
-
6
Sometimes, I even see $\stackrel{\text{def}}{=}$... – J. M. Aug 13 '12 at 15:54
I prefer that notation, as the delta equals symbol seems to not be universal, and the colon equals symbol can get munged with assignment in Maple/Pascal, but this symbol is unambiguous. – Arkamis Aug 13 '12 at 16:03
1
But many languages use $=$ for assignment. Should I therefore avoid $=$ in equations because it could be mistaken for a C/Java/Perl/Python/... assignment? – celtschk Aug 13 '12 at 17:29
The notation $x:= y$ is preferred as $\equiv$ has another meaning in modular arithmetic (though it is almost always clear from context as to which is meant). However, there is one big advantage to using the $:=$. That is, it is not graphically symmetric and hence allows for strings such as $$y:= f(x) \leq g(x) =: L$$ where here we are defining both $y$ and $L$. This statement would be much more cumbersome using $\equiv$, and it would not make sense if one simply wrote $$y \equiv f(x) \leq g(x) \equiv L.$$
-
Upvoting the other answers and comments... and: conventions vary. The only way to know with reasonable confidence is from context. However, one can't know whether an author "believes in" setting context. The most important criterion may be whether or not one is tracking things well enough to reasonably infer which equalities are assignments, and which are assertions. If this seems to be an issue, likely one should back-track a bit, anyway.
In practice, assignment will be clear because the left-hand side is a single symbol (even if composed of several marks), and is appearing for the first time. The first-time-appearance criterion is obviously more easily applied if all first-time appearances are highlighted by a consistent convention (rather than being buried in-line, without emphasis or fanfare).
-
The $\equiv$ symbol has different standard meanings in different contexts:
• Congruence in number theory, and various generalizations;
• Geometric congruence;
• Equality for all values of the variables, as opposed to an equation in which one seeks the values that make the equation true;
• $x$ is defined to be $y$;
• probably a bunch of others.
But I suspect "$:=$" is not used for anything other than definitions.
So the latter at least avoids ambiguity. But if you're reading something written by someone who doesn't see it that way, you still want to understand what is being said, so you should be aware of usage conventions that you might reasonably consider less than optimal.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449231028556824, "perplexity_flag": "middle"}
|
http://jdueck.net/article/shoveling-less-with-math
|
The Local Yarn
Wednesday January 16, 2013
Shoveling Less With Math
Having been born and raised in Minnesota, I’ve had a lot of time to think about the most efficient way to shovel snow. Being as it’s now the dead of winter, I thought I’d formalize my findings on the subject so far. If you are also a long-time resident of the snowy climates, these findings might be painfully obvious to you; but if you’re one of those who find themselves suddenly and inexplicably transplanted to a snowy state, you might find it very useful.
In fig. 1 we see a common shoveling pattern. The shoveler begins in the center of the driveway on the end nearest the house (bottom of diagram) and starts shoveling outward, reaching the edge of the drive in three strokes (arrows in diagram). He or she then walks back to the middle of the driveway (dotted line) and repeats the procedure.
This is a pretty shabby way to shovel, because you end up walking over the whole driveway twice: you walk over each area once to shovel it, and once while retracing your steps to start the next row.
With a little math, we can see precisely why this method is not optimal, and compare it to other possible methods. Let’s call $L$ the length of the driveway, $W$ the width, and (for simplicity) use $s$ as our unit of measure, equal to the width of your shovel. So the total distance you would travel when shoveling using the above method is $2(LW)$, which on a driveway measuring $30s \times 6s$ means you will walk $360s$ — quite a lot of wasted distance when you consider the area of the driveway only contains $180s$ of distance to begin with.
A first attempt at an improvement might have you start in one of the corners, shovel in one direction to the edge of the driveway, then turn around and shovel in the other direction (fig. 2). This would be an optimal pattern for plowing a field, since the distance covered is exactly $LW$. In practice, however, this is often impractical for snow shoveling. In the first couple of strokes, you would have to throw the snow nearly the entire length of the row ($W$) in order to clear it from the driveway. This is wasted effort, since none of the snow is more than $\frac{1}{2}W$ from at least one edge of the drive. For this reason, this pattern is optimal only when there's so little snow that you can push your shovel from one edge of the driveway to the other and accumulate no more than a shovelful of snow.
Figure 3 shows a pattern that is optimal for all non-trivial amounts of snowfall. You begin by clearing a path $1s$ wide down the center of the driveway (A). You then turn around and begin working back towards the other end of the driveway, clearing the next $1s$ column with single shovel strokes (B). At the end of this second column, you again walk across to the opposite edge of the cleared area and clear the next column (C) and so on, moving in a square that widens out from the middle until the driveway is cleared (D).
Using this method, there is very little wasted travel, and you never have to throw snow further than $\frac{1}{2}W$. The distance traveled is roughly $L \times W$ — plus only a little extra to walk across the end of the driveway once each column is finished. We can discover the precise amount of this extra travel as follows: at the end of the first column you must walk $1s$ horizontally to start the next column, and after the second column you must walk $2s$, and so on up to $(W - 1)s$:
$$1s + 2s + \dotsb + (W-1)s$$
(Note that we stop at $W-1$ since at the end of the final column the shoveling is considered complete.) The formula to find the sum of all numbers between 1 and a given number $N$ is $(N+1)\frac{N}{2}$ (see the example in footnote 1). To find the sum of all numbers from $1$ to $W-1$ we use $(W\color{gray}{-1+1})\frac{W-1}{2}$, making the complete distance traveled equal to
$$LW + \left(W\frac{W - 1}{2}\right)s$$
In our example driveway where $L = 30s, W = 6s$, this works out to
$$(30s \times 6s) + \frac{6 \cdot 5}{2}s = \color{red}{195s}$$
which represents a vast improvement over the distance of $360s$ needed to finish the driveway using the original method.
1 E.g. for $N=4$ then $(4+1)\frac{4}{2} = 10 = 1+2+3+4$ ↩
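A quick C check of the two totals for the example driveway, with everything measured in shovel-widths $s$ (this little program is only an illustration added here, not part of the original derivation):
```
#include <stdio.h>

int main(void)
{
    double L = 30.0, W = 6.0;                   /* driveway size in shovel-widths s */

    double naive  = 2.0 * L * W;                /* fig. 1: walk over everything twice */
    double spiral = L * W + W * (W - 1) / 2.0;  /* fig. 3: LW + W(W-1)/2              */

    printf("naive method : %.1f s\n", naive);   /* prints 360.0 s */
    printf("spiral method: %.1f s\n", spiral);  /* prints 195.0 s */
    return 0;
}
```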
This page uses MathJax for math typesetting. Commenters who wish to include equations in their own notes are encouraged to make use of standard LaTeX math code: enclose equations inside single dollar signs for inline math, or inside double dollar signs to set the equations on their own centered paragraph.
Further Notes
Would it not even get closer to $180s$ if instead of crossing the driveway and switching sides with each traversal, one would stay on the same side going up and down until complete? Then, cross the driveway adding only 3s to complete the remainder. This would total $183s$.
Since this old-fashioned man still shovels and doesn’t care to own a snowblower (which stinks anyway and invariably goes slower than I can shovel by hand), I have actually done the above but never applied mathematics to it except in a hand-waving sense.
[THE EDITOR RESPONDS: In order to shovel so that your feet are always walking on cleared driveway instead of wading through the snow, you would need to switch between right-handed and left-handed shoveling at the end of each column. So your solution works well if one can shovel ambidextrously without great loss of efficiency. The formula for distance traveled then is $LW + \frac{1}{2}W$ On reflection, my approach here has been colored by erstwhile access to a snowblower; switching shovel-hands in this context is analogous to rotating the exhaust chute 180° when snowblowing, a time-consuming manoeuver on all but the most advanced units.]
— Ted Sands
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378553628921509, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/104576/a-linear-program-related-question
|
## A linear program related question
Dear all, recently I encountered the following problem. It is closely related to the order of growth of the UMD constants of all $n$-dimensional Banach lattices.
Let $\alpha^k = (\alpha_1^k, \alpha_2^k, \alpha_3^k, \alpha_4^k) \in [0,1]^4$ for $k = 1, 2, 3, 4$. For $x = (x_1, x_2, x_3, x_4) \in [0,1]^4$, define $$\| x\|_k = \sum_i \alpha_i^k x_i.$$
Then define $$\| x\| = \max(\| x \|_1, \| x \|_2, \|x \|_3, \| x \|_4).$$
Define $$\Large F(x,y, \alpha, \beta, \gamma, \delta) := \frac{\|(\frac{x_1+ y_1}{2}, \frac{x_2+y_2}{2}, x_3, x_4)\| +\|(\frac{x_1+ y_1}{2}, \frac{x_2+y_2}{2}, y_3, y_4)\| }{2 \max (\|(x_1, x_2, x_3, x_4)\|, \|(y_1, y_2, y_3, y_4)\|)}$$
I want to solve the following: find $(x_0, y_0, \alpha_0, \beta_0, \gamma_0, \delta_0)$, $$F(x_0, y_0,\alpha_0, \beta_0, \gamma_0, \delta_0) = \Large \max F(x,y, \alpha, \beta, \gamma, \delta)$$ $$\Large \textbf{s.t.} \alpha, \beta, \gamma, \delta\in [0,1]^4, \quad x, y \in [0, 1]^4, x \ne 0.$$
This is equivalent to $$\max\frac{\|(\frac{x_1+ y_1}{2}, \frac{x_2+y_2}{2}, x_3, x_4)\| +\|(\frac{x_1+ y_1}{2}, \frac{x_2+y_2}{2}, y_3, y_4)\| }{2}$$ $$\Large \textbf{s.t.} \alpha, \beta, \gamma, \delta\in [0,1]^4, \quad x, y \in [0, 1]^4, x \ne 0$$ $$\Large\|(x_1, x_2, x_3, x_4)\| \le 1$$ $$\Large\|(y_1, y_2, y_3, y_4)\| \le 1$$
Probably this problem is classical in convex programming, and maybe we even have software for computing this kind of problem. Is there anyone who can do this by computer (or maybe even by hand)?
$\textbf{Remark}:$ I have tried some examples by hand, and I know that $\max F(x,y) \ge \frac{3}{2}$. Theoretically, I also know that $\max F(x,y) \le 2$. The most interesting question is: do we have $$\max F(x,y) = 2 ?$$ If so, I can get an optimal estimate of the supremum of (say) the $\text{UMD}_2$ constant over all $n$-dimensional Banach lattices.
-
Is there some reason that this is convex? – Igor Rivin Aug 12 at 22:09
Firstly, I made a mistake in stating the problem. The reason this is convex is probably clear now. – Yanqi QIU Aug 13 at 6:35
Seems like a quadratic program in less than 30 variables. I would've suggested an SDP relaxation, but you might as well just use a quadratic program solver, of which there are many, check your matlab distribution. – Sasho Nikolov Aug 13 at 9:05
Thank you for the suggestion. The reason that I asked this question is that I have never learned matlab or anything about computer programming... – Yanqi QIU Aug 13 at 12:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9589887261390686, "perplexity_flag": "head"}
|
http://mathhelpforum.com/discrete-math/183154-solving-equation-parts-sum-sum.html
|
# Thread:
1. ## Solving equation with parts of a sum and a sum
Is it possible to solve any of these equations and thus find $\Delta x_{n}$ or $\sum_{0}^{n}\Delta x_{n}$?
Define
$\Delta x_{0}=V\frac{l}{v_{0}}$
and
$\Delta x_{1}=V\frac{l-\Delta x_{0}}{v_{0}+V}$
and
$\Delta x_{n+1}=V\frac{l-\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}$.
Since $V\ll v_{0}$ this can be simplified(?) to
$\Delta x_{n+1}-\Delta x_{n}=\frac{V \Delta x_{n}}{v_{0}+nV}$.
Maybe this equation is easier to solve. I cannot get any further.
2. ## Re: Solving equation with parts of a sum and a sum
Recurrence relations can sometimes be solved using a $Z$ transform.
3. ## Re: Solving equation with parts of a sum and a sum
Originally Posted by fysikbengt
Is it possible to solve any of these equations and thus find $\Delta x_{n}$ or $\sum_{0}^{n}\Delta x_{n}$?
Define
$\Delta x_{0}=V\frac{l}{v_{0}}$
and
$\Delta x_{1}=V\frac{l-\Delta x_{0}}{v_{0}+V}$
and
$\Delta x_{n+1}=V\frac{l-\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}$.
Since $V\ll v_{0}$ this can be simplified(?) to
$\Delta x_{n+1}-\Delta x_{n}=\frac{V \Delta x_{n}}{v_{0}+nV}$.
Maybe this equation is easier to solve. I cannot get any further.
Dear fysikbengt,
$\Delta x_{n+1}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)$
Therefore, $\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+(n-1)V}\right)$
$\Delta x_{n+1}-\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)-V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+(n-1)V}\right)$
Since $V\ll v_0$, we have $v_0+nV\approx{v_0+(n-1)V}$
$\Delta x_{n+1}-\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)-V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+nV}\right)=-\left(\frac{V \Delta x_{n}}{v_{0}+nV}\right)$
So I think you should have a minus sign in your last expression.
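With the sign corrected, this approximate recurrence can even be solved in closed form: it says $\Delta x_{n+1}=\Delta x_{n}\left(1-\frac{V}{v_{0}+nV}\right)=\Delta x_{n}\,\frac{v_{0}+(n-1)V}{v_{0}+nV}$, and the product telescopes, giving $\Delta x_{n}=\Delta x_{0}\,\frac{v_{0}-V}{v_{0}+(n-1)V}$ for $n\geq 1$.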
4. ## Re: Solving equation with parts of a sum and a sum
Originally Posted by Sudharaka
Dear fysikbengt,
$\Delta x_{n+1}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)$
Therefore, $\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+(n-1)V}\right)$
$\Delta x_{n+1}-\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)-V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+(n-1)V}\right)$
Since $V\ll v_0$, we have $v_0+nV\approx{v_0+(n-1)V}$
$\Delta x_{n+1}-\Delta x_{n}=V\left(\frac{l-\displaystyle\sum_{0}^{n}\Delta x_{n}}{v_{0}+nV}\right)-V\left(\frac{l-\displaystyle\sum_{0}^{n-1}\Delta x_{n}}{v_{0}+nV}\right)=-\left(\frac{V \Delta x_{n}}{v_{0}+nV}\right)$
So I think you should have a minus sign in your last expression.
Yes, of course the relation is convergent. The minus sign is there in my notes, it is a typo. Thanks for noticing.
5. ## Re: Solving equation with parts of a sum and a sum
Originally Posted by ojones
Recurrence relations can sometimes be solved using a $Z$ transform.
I tried to study $Z$ transforms and realised it is a full field, not easily mastered. I have even forgotten most of what I have read about Fourier transforms. So I might just as well give up. This is not about laziness, I hope.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9147036671638489, "perplexity_flag": "middle"}
|
http://www.exampleproblems.com/wiki/index.php/Cumulative_distribution_function
|
# Cumulative distribution function
In probability theory, the cumulative distribution function (abbreviated cdf) completely describes the probability distribution of a real-valued random variable, X. For every real number x, the cdf is given by
$F(x) = \operatorname{P}(X\leq x),$
where the right-hand side represents the probability that the variable X takes on a value less than or equal to x. The probability that X lies in the interval (a, b] is therefore F(b) − F(a) if a < b. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions.
Note that in the definition above, the "less or equal" sign, '≤' could be replaced with "strictly less" '<'. This would yield a different function, but either of the two functions can be readily derived from the other. The only thing to remember is to stick to either definition as mixing them will lead to incorrect results. In English-speaking countries the convention that uses the weak inequality (≤) rather than the strict inequality (<) is nearly always used.
The "point probability" that X is exactly b can be found as
$\operatorname{P}(X=b) = F(b) - \lim_{x \to b-} F(x)$
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (CCDF), defined as
$F_c(x) = \operatorname{P}(X > x) = 1 - F(x)$.
## Examples
As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the cdf is given by
F(x) = 0, if x < 0;
F(x) = x, if 0 ≤ x ≤ 1;
F(x) = 1, if x > 1.
For a different example, suppose X takes only the values 0 and 1, with equal probability. Then the cdf is given by
F(x) = 0, if x < 0;
F(x) = 1/2, if 0 ≤ x < 1;
F(x) = 1, if x ≥ 1.
## Properties
Every cumulative distribution function F is (not necessarily strictly) monotone increasing and continuous from the right (right-continuous). Furthermore, we have $\lim_{x\to -\infty}F(x)=0$ and $\lim_{x\to +\infty}F(x)=1$. Every function with these four properties is a cdf.
If X is a discrete random variable, then it attains values $x_1, x_2, \ldots$ with probability $p_i = p(x_i)$, and the cdf of X will be discontinuous at the points $x_i$ and constant in between:
$F(x) = \operatorname{P}(X\leq x) = \sum_{x_i \leq x} \operatorname{P}(X = x_i) = \sum_{x_i \leq x} p(x_i)$
If the cdf F of X is continuous, then X is a continuous random variable; if furthermore F is absolutely continuous, then there exists a Lebesgue-integrable function f(x) such that
$F(b)-F(a) = \operatorname{P}(a\leq X\leq b) = \int_a^b f(x)\,dx$
for all real numbers a and b. (The first of the two equalities displayed above would not be correct in general if we had not said that the distribution is continuous. Continuity of the distribution implies that P(X = a) = P(X = b) = 0, so the difference between "<" and "≤" ceases to be important in this context.) The function f is equal to the derivative of F almost everywhere, and it is called the probability density function of the distribution of X.
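For example, the exponential distribution with rate parameter $\lambda > 0$ has density $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$ (and $f(x) = 0$ for $x < 0$), so its cumulative distribution function is $F(x) = \int_0^x \lambda e^{-\lambda t}\,dt = 1 - e^{-\lambda x}$ for $x \ge 0$, and $F(x) = 0$ for $x < 0$.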
The Kolmogorov-Smirnov test is based on cumulative distribution functions and can be used to test to see whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test (pronounced /kœypəʁ/; a bit like "Cowper" might be pronounced in English) is useful if the domain of the distribution is cyclic as in day of the week. For instance we might use Kuiper's test to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8870718479156494, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/35180?sort=newest
|
Topology of function spaces?
Let $X,Y$ be finite-dimensional differentiable manifolds, and let's assume that they are connected. In fact, in applications I would like both $X$ and $Y$ to be riemannian manifolds.
Let $C^\infty(X,Y)$ denote the space of smooth maps $f: X \to Y$. I'm interested in, say, the connected components, fundamental group,... of this space, but I'm really not sure where to start looking. I realise that I first need to topologise this space. My only experience in this realm is an introductory point-set topology course (based on Munkres) I took as a graduate student. Munkres talks about the compact-open topology for the space $C^0(X,Y)$ of continuous maps between two topological spaces and shows, for instance, that if $X$ is locally compact Hausdorff then the evaluation map $X \times C^0(X,Y) \to Y$ is continuous. Later in the book he also applies this to give a slick proof of the existence of covering spaces with prescribed covering group.
Back to the differentiable category, ideally I'd like to be able to do calculus on $C^\infty(X,Y)$, hence I'd like to think of $C^\infty(X,Y)$ as an infinite-dimensional differentiable manifold and possibly even riemannian whenever so are $X$ and $Y$.
In case it helps to focus the question, let me say a few words of (physical, I fear) motivation.
When $X,Y$ are riemannian, $C^\infty(X,Y)$ plays the rôle of the configuration space for a physical model known as the nonlinear sigma model, whose action functional, assigning to $\sigma: X \to Y$, the value of the integral (either take $X$ to be compact or else restrict the possible functions further to assure convergence) $$S[\sigma] = \int_X |d\sigma|^2 \operatorname{dvol}_X,$$ where I'd like to think of $d\sigma$ as a one-form on $X$ with values in the pullback $\sigma^*TY$ by $\sigma$ of the tangent bundle to $Y$, and $|d\sigma|^2$ involves the metric on the bundle $T^*X \otimes \sigma^*TY$ induced from the riemannian metrics on $X$ and on $Y$. The extrema of $S$ are then the harmonic maps.
We are often interested in the quantum theory (un)defined formally by a path integral. A mathematically conservative point of view is that the path integral simply gives a recipe for the perturbative treatment of the quantum theory, where we fix an extremum $\sigma_0$ of $S$ and quantise the fluctuations around $\sigma_0$. By definition, fluctuations around $\sigma_0$ lie in the connected component of $\sigma_0$ and as a first approximation, the path integral becomes a sum over the connected components of the space of maps. Hence the interest in determining the connected components of $C^\infty(X,Y)$, which in this context are often called superselection sectors.
So in summary, a possible question would be this:
What can be said about the topology (e.g., homotopy type) of $C^\infty(X,Y)$ in terms of $X$ and $Y$?
I'm not asking for a tutorial, just for some orientation to the available literature.
Thanks in advance.
Added
In response to the helpful answers I have received already, I'd like to point out that my interest is really on the homotopy type of the space of maps. I only mentioned the analytic aspects in case that narrows down the topology one would put on the space. As pointed out in the answers, most reasonable topologies are equivalent, so this is a relief.
As for concrete examples, I am particularly interested in the case where $X$ is a compact Riemann surface and $Y$ a compact Lie group. I'm happy to put the constant curvature metric on $X$ and a bi-invariant metric on $Y$.
-
Alright: Here's a special case for smooth loop spaces that Andrew wrote up a few months ago. ncatlab.org/nlab/show/smooth+loop+space – Harry Gindi Aug 11 2010 at 1:03
Your last sentence is a little confusing -- why are you putting metrics on $X$ and $Y$? Smooth mapping spaces, without any other conditions on them, do not care about these metrics. – Ryan Budney Aug 11 2010 at 18:25
Ryan, the applications I have in mind require $X$ and $Y$ to be riemannian manifolds. Otherwise you could not write down the action functional for harmonic maps (and the various generalisations I am actually interested in). Since I was so clueless about the possible ways to topologise the space of maps, I was simply trying to give as much information as possible in case that helped rule some topologies out. That's all I meant. – José Figueroa-O'Farrill Aug 11 2010 at 21:57
3 Answers
Homotopy theory of function spaces is a healthy subfield of homotopy theory. See e.g. recent Oberwolfach report from a meeting on the subject.
If you share what $X,Y$ you are interested in, I may be able to say more, even though I am by no means an expert.
EDIT: In response to your edit, suppose $\Sigma$ is a compact Riemann surface, $G$ is a compact Lie group, and let's study the $k$th homotopy group of $C^0(\Sigma, G)$. Actually, most of the discussion below applies to general $X,Y$ that are, say, compact manifolds or even finite CW-complexes, but let's stick to the case of interest. There is a well-known homeomorphism $C^0(S^k,C^0(\Sigma, G))\cong C^0(S^k\times\Sigma, G)$ given by the adjoint, and since any homeomorphism preserves path-components, we can identify $\pi_k(C^0(\Sigma, G))$ with $[S^k\times \Sigma, G]$, the set of homotopy classes of maps from $S^k\times \Sigma$ to $G$. Smashing $S^k\vee\Sigma$ inside $S^k\times\Sigma$ to a point gives $S^k\wedge\Sigma$, which is the $k$-fold suspension $S^k\Sigma$ of $\Sigma$. The cofiber sequence of the quotient map $S^k\times\Sigma\to S^k\wedge\Sigma$ is an exact sequence, namely, $$[S^k\Sigma, G]\to [S^k\times\Sigma, G]\to [S^k\vee \Sigma, G]=\pi_k(G)+[\Sigma, G].$$ Let's suppose that $G$ is simply-connected, so that $[\Sigma, G]$ is a point (as any Lie group has trivial $\pi_2(G)$ and $\Sigma$ is $2$-dimensional). Then the above cofiber sequence is an exact sequence of groups (not just sets). In trying to compute $[S^k\Sigma, G]$ it helps to recall that $G$ is rationally homotopy equivalent to the product of odd-dimensional spheres, so rationally $[S^k\Sigma, G]$ is the product of $[S^k\Sigma, S^m]$'s where $m$ is odd, and of course $[S^k\Sigma, S^m]=[\Sigma, \Omega^kS^m]$. This should allow you to do some computations.
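As a sanity check in the very simplest case (my own sketch, by a different route than the cofiber sequence above): take $\Sigma = S^2$ and $G = SU(2) \simeq S^3$. Evaluation at a basepoint gives a fibration $$\Omega^2 S^3 = C^0_*(S^2,S^3) \to C^0(S^2,S^3) \xrightarrow{\mathrm{ev}} S^3,$$ and the relevant piece of its long exact sequence is $$\pi_2(S^3)=0 \to \pi_1(\Omega^2 S^3)=\pi_3(S^3)=\mathbb{Z} \to \pi_1\bigl(C^0(S^2,S^3)\bigr) \to \pi_1(S^3)=0,$$ so $\pi_1(C^0(S^2,SU(2)))\cong\mathbb{Z}$. This agrees with the adjunction description, since $[S^1\times S^2, S^3]\cong H^3(S^1\times S^2;\mathbb{Z})\cong\mathbb{Z}$ by Hopf's degree theorem.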
Finally, I wish to comment that the inclusion $C^\infty(X,Y)\to C^0(X,Y)$ is a weak homotopy equivalence i.e. it induces an isomorphism of homotopy groups as any map of a sphere/disk is homotopic to a nearby smooth map. A result of Milnor says that $C^0(X,Y)$ is homotopy equivalent to a CW-complex, provided $X$ is a finite CW-complex (if memory serves me). I am not sure why the same is true for $C^\infty(X,Y)$ so at the moment I do not know if the above inclusion is a homotopy equivalence.
-
Thanks, Igor. This Oberwolfach report is really interesting! I have added to the question some more information on the $X$ and $Y$ of most interest to me: $X$ a Riemann surface and $G$ a Lie group. – José Figueroa-O'Farrill Aug 11 2010 at 13:18
Thanks again, Igor. I wish I could double-upvote! – José Figueroa-O'Farrill Aug 11 2010 at 15:25
One small correction: the identification of $\pi_k(C^0(\Sigma, G))$ and $[S^k\times\Sigma, G]$ is only valid when $C^0(\Sigma, G)$ is path-connected, which is the case when $G$ is simply-connected compact Lie group, as any map from a surface to $G$ is null-homotopic. In general, a choice of basepoint $*$ in $C^0(\Sigma, G)$ picks up a subset in $[S^k\times\Sigma, G]$ corresponding to $\pi_k(C^0(\Sigma, G), *)$. – Igor Belegradek Aug 11 2010 at 18:16
Regarding your last sentence, a reference is Henderson and West "Triangulated infinite-dimensional manifolds" Bull AMS 76 (1970) 655--660. i.e. these mapping spaces have the homotopy-type of CW-complexes. – Ryan Budney Aug 11 2010 at 18:28
First off, you want your source space, $X$, to be compact (technically, sequentially compact will do). If you don't have that, $C(X,Y)$ need not be locally contractible so no hope of a manifold structure, infinite dimensional or otherwise. If $X$ is compact, all the arguments for loop spaces follow through, the only thing is that there's a bit more variety in model spaces - the models are spaces of sections of vector bundles over $X$ which, for $X = S^1$, is quite simple but for general $X$ is more variable. However, all these spaces behave like $C^\infty(X,\mathbb{R}^n)$ for all intents and purposes.
The topology on $C^\infty(X,Y)$ is referred to as the "compact-open" topology, but as I've imposed the restriction that $X$ be compact you could think of it as being a uniform-type of convergence. Basically, open sets are formed by insisting that all derivatives up to a certain finite order are bounded by a certain bound - the Riemannian structure helps with defining this (though it isn't necessary). However, I prefer to think of the smooth structure in terms of the exponential map which says that the smooth structure on $C^\infty(X,Y)$ is precisely that which makes a map $Z \to C^\infty(X,Y)$ smooth if and only if $Z \times X \to Y$ is smooth.
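To make "all derivatives up to a certain finite order are bounded by a certain bound" concrete, here is one standard way to write a metric inducing this topology; this is my own formulation, assuming $X$ compact and an embedding $Y \subset \mathbb{R}^N$ (or a fixed finite atlas) so that $C^k$-norms of maps make sense: $$d(f,g) = \sum_{k\ge 0} 2^{-k}\,\min\bigl(1,\ \|f-g\|_{C^k}\bigr), \qquad \|h\|_{C^k} = \max_{j\le k}\ \sup_{x\in X} \|D^j h(x)\|.$$ Two maps are close precisely when they and finitely many of their derivatives are uniformly close, which is the kind of convergence described above.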
As a Riemannian manifold, $C^\infty(X,Y)$ can certainly be given a Riemannian structure. However, it's a weak structure not a strong one. I'm not sure about general $X$, but for $X = S^1$ then it's the best that it can be whilst still being weak.
Regarding homotopy types, the inclusion $C^\infty(X,Y) \to C^0(X,Y)$ is a homotopy equivalence. Ryan's answer contains the idea on how to do that.
However, you're more interested in references. Ryan's already mentioned the canonical reference, here's a few others:
• nLab
• Harry's mentioned the page on loop spaces
• http://ncatlab.org/nlab/show/differential+topology+of+mapping+spaces
• my papers (and other stuff)
• The differential topology of loop spaces
• How to Construct a Dirac Operator in Infinite Dimensions (contains details on types of Riemannian structure in infinite dimensions)
• The Co-Riemannian Structure of Smooth Loop Spaces
• Constructing Smooth Manifolds of Loop Spaces
-
Andrew: Many thanks for the references. You seem to have done lots of work in this area, so I may be coming back to you for more information! – José Figueroa-O'Farrill Aug 11 2010 at 13:19
Please do! That's why I work on this stuff - to be of use to others. – Andrew Stacey Aug 11 2010 at 14:37
A standard reference for this is Hirsch's Differential Topology textbook. If $X$ is compact, nearly all the topologies you'd like to consider are essentially the same. Sometimes they're called the Whitney topologies, or the $C^\infty$-topology; there are weak and strong variants that are only relevant when $X$ is non-compact.
These spaces have the homotopy-type of the space of continuous maps. The basic idea is to consider $Y$ as a submanifold of some Euclidean space, any continuous map $f : X \to Y$ you can apply a smoothing operator to, then project via the tubular neighbourhood theorem to get a smooth map $X \to Y$ approximating $f$ (in the $C^k$ sense for any $k$ as large as makes sense for your given map $f$), similarly you can apply this construction to families of functions.
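In symbols, the smoothing step can be sketched as follows (my notation, not Hirsch's, and suppressing the chart-by-chart details of the convolution): embed $Y \subset \mathbb{R}^N$, let $U \supset Y$ be a tubular neighbourhood with nearest-point projection $\pi : U \to Y$, and let $\rho_\epsilon$ be a mollifier. For $\epsilon$ small enough, $$f_\epsilon := \pi \circ (f * \rho_\epsilon) : X \to Y$$ is smooth and $C^0$-close to $f$, and the straight-line homotopy $t \mapsto \pi\bigl((1-t)f + t f_\epsilon\bigr)$ stays inside $U$, so $f \simeq f_\epsilon$. Doing the same construction with parameters gives the weak homotopy equivalence with $C^0(X,Y)$.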
Hirsch doesn't bother to get into the details of the Fréchet manifold structure on these mapping spaces but it's available. Kriegl and Michor's "Convenient setting for global analysis" is a fairly comprehensive (if daunting) reference for this. But there are other references out there that provide a modest amount of detail.
http://www.ams.org/publications/online-books/surv53-index
We have one of the founders of this subject online -- Richard Palais. Perhaps he will have some comments eventually.
edit: Back to your question. Spaces of continuous maps $C^0(X,Y)$ are a rather traditional thing to study in algebraic topology. It really depends on what kinds of questions you have about these spaces. For example, if $X$ and $Y$ are Eilenberg-MacLane spaces, $C^0(X,Y)$ has much to do with plain old cohomology. If your spaces $X$ and $Y$ have nice cell decompositions, you can frequently get at aspects of the homotopy-type of $C^0(X,Y)$ via obstruction theory. But in general these spaces are pretty complicated.
-
Thanks -- this is very useful. Often just knowing which words to look for is the key to progress! – José Figueroa-O'Farrill Aug 11 2010 at 1:18
No problem. I like to think about various subspaces of these spaces -- embedding, immersion and diffeomorphism spaces. So these are things that are on my mind much of the time although I don't often do much that requires subtle analysis on these spaces. – Ryan Budney Aug 11 2010 at 1:27
Apart from the Michor and Kriegel Book, the book "Manifolds of differentiable mappings" by Michor covers many of the same topics and is quite extensive on the various topologies on function spaces. One can get a scanned version at: mat.univie.ac.at/~michor/… – Michael Greinecker Aug 11 2010 at 9:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 122, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380685687065125, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/85751/proving-that-the-graph-is-symmetric-about-the-origin
|
# Proving That The Graph Is Symmetric About The Origin
I know that the graph of an equation in x and y is symmetric with respect to the origin if replacing x by -x and y by -y yields the same equation.
How can one prove that if the graph is symmetric with respect to its x and y axes, then it is also symmetric with respect to the origin?
How can I generalise this?
-
If you replace $x$ only with $-x$, then... If you replace $y$ only with $-y$, then... so if you replace both at the same time, then... – J. M. Nov 26 '11 at 10:15
Ha ha ha... I know what you're saying. Can this be proved using some algebra, is what I'm asking? – alok Nov 26 '11 at 10:34
...you have some graph $g(x,y)=0$ with the properties $g(-x,y)=g(x,y)$ and $g(x,-y)=g(x,y)$. If one negates both variables, then what? – J. M. Nov 26 '11 at 10:39
then it would be -g(x,y) ?? – alok Nov 26 '11 at 10:43
The solution set of $g(x,y)=0$ is symmetric with respect to $O$ if $g(x,y)=0$ implies $g(-x,-y)=0$. The identity $g(-x,-y)\equiv g(x,y)$ is a sufficient condition for this symmetry, but it is not necessary. Consider, e.g., the function $g(x,y):=x y(e^x+e^y)$. – Christian Blatter Nov 26 '11 at 12:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9092885851860046, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/97432?sort=newest
|
## When is a three-manifold deck transformation group solvable?
Suppose that $\pi:Y \to Y'$ is a regular covering of closed, connected, orientable three-manifolds and let $G$ be the deck transformation group. Furthermore, suppose that $Y$ is a rational homology sphere (I don't know how much this condition matters). It's not always the case that $G$ is solvable, since one can take $Y$ to be $S^3$ and $Y'$ to be the Poincare homology sphere. Are there other examples where $G$ is not solvable? Are such examples classified? If $Y$ has elliptic geometry, this is the only example (I think), but I have no clue in general. This seems like it could be related to residual finiteness of three-manifold groups/RFRS/LERF/other four-letter acronyms I don't understand.
-
## 3 Answers
Cooper and Long showed you can realize any finite group acting on a rational homology sphere.
There's no general sort of classification that I know of. Maybe what you're asking for is, given $Y'$ a rational homology sphere, and a homomorphism $\varphi: \pi_1(Y')\to K$, where $K$ is a finite group, when is the cover $Y\to Y'$ corresponding to $ker(\varphi)$ again a rational homology sphere? One perspective is that this amounts to computing $H_1(Y; \mathbb{Z}[K])$, homology with twisted coefficients. Dunfield and Thurston took advantage of this to compute covers of manifolds in the Snappea census with positive first betti number, but such computations can get rapidly complicated, so I don't know of any general pattern when $K$ is non-solvable.
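For what it's worth, experiments in the spirit of the Dunfield–Thurston computations mentioned above can be run from SnapPy's Python interface. The sketch below is my own, not theirs; the census manifold chosen and the exact options accepted by `covers` (e.g. a keyword restricting to regular covers) are assumptions to be checked against your SnapPy version.

```python
# Sketch: enumerate low-degree covers of a closed census manifold with finite H_1
# and flag the covers whose first Betti number is positive (i.e. the covers that
# fail to be rational homology spheres).
import snappy

M = snappy.Manifold('m003(-3,1)')   # a closed census example with finite first homology
print('base homology:', M.homology())

for deg in (2, 3, 4, 5):
    for N in M.covers(deg):         # all degree-`deg` covers; filter for regular ones as needed
        H = N.homology()
        # betti_number() is assumed; inspecting H.elementary_divisors() for zeros also works
        if H.betti_number() > 0:
            print(deg, N, H)
```

Even at these small degrees the number of covers grows quickly, which is consistent with the remark above that no general pattern is known for non-solvable $K$.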
-
Not exactly an answer to the question, but a classification of solvable groups which are fundamental groups of compact 3-manifolds is given in the following celebrated paper:
Thomas, C. B. On 3-manifolds with finite solvable fundamental group. Invent. Math. 52 (1979), no. 2, 187–197.
It is very easy to check which of the manifolds produced are rational homology spheres...
-
You are asking for free actions of finite, nonsolvable groups on $Y$. If you truly don't care that $Y$ is a rational homology sphere, for any finite group $G$ there exists a closed, connected, orientable 2-manifold $X$ and a free action of $G$ on $X$, so $G$ also acts freely on $Y = X \times S^1$.
-
Yes, that is a good observation. I really want to restrict to rational homology spheres then. – Tye Lidman May 19 2012 at 22:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429816603660583, "perplexity_flag": "head"}
|
http://physics.aps.org/articles/v2/18
|
# Viewpoint: Peaceful coexistence of nuclear shapes
Paul Mantica, National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824, USA
Published March 2, 2009 | Physics 2, 18 (2009) | DOI: 10.1103/Physics.2.18
Experiments show that spherical and nonspherical states of a light nucleus near neutron number 28 coexist at the same energy, challenging the usefulness of the notion of stable and persistent “magic numbers.”
#### Shell Erosion and Shape Coexistence in $^{43}_{16}$S$_{27}$
L. Gaudefroy, J. M. Daugas, M. Hass, S. Grévy, Ch. Stodel, J. C. Thomas, L. Perrot, M. Girod, B. Rossé, J. C. Angélique, D. L. Balabanski, E. Fiori, C. Force, G. Georgiev, D. Kameda, V. Kumar, R. L. Lozeva, I. Matea, V. Méot, P. Morel, B. S. Nara Singh, F. Nowacki, and G. Simpson
Published March 2, 2009 | PDF (free)
Some of the most basic tenets of nuclear science have fallen victim to new experimental findings in nuclei with extreme neutron-to-proton ratios, also referred to as rare isotopes. One of these tenets, the existence and persistence of the so-called “magic numbers,” has been put to the test by recent measurements of light nuclei with a neutron number in the vicinity of 28, where a spherical shape is expected. Gaudefroy and collaborators [1] have unambiguously determined the spin and magnetic moment of an isomeric state in $^{43}$S, which has neutron number 27. Their results firmly establish the coexistence in $^{43}$S of spherical and deformed shapes already known in heavier nuclei and suggest a collapse of neutron number 28 as a magic number for neutron-rich nuclei. The disappearance of the neutron number 28 as a magic number is attributed to a reduction in the energy spacing between adjacent quantum mechanical states of the nucleus, and reiterates the need to broadly examine the robustness of the magic numbers across the nuclear chart. Such an ambitious and important activity will require investments in the next generation of rare isotope beam production facilities.
One of the outstanding questions in nuclear science is the evolution of shell structure from what is known in the region of the stable isotopes. The nuclear shell model was developed based on empirical data within a limited region of the nuclear chart nearest the stable isotopes, where nuclei with certain proton and neutron numbers exhibited properties consistent with extra nuclear binding energy. This added stability at proton and neutron numbers 2, 8, 20, 28, 50, and 82, as well as neutron number 126, was attributed to large energy gaps between successive shell model orbits.
The notion that these magic numbers extend across the entire nuclear chart has been seriously challenged in recent years. As an example, the shell gap at neutron number 20 is well established in stable $^{40}$Ca, but is eroded in $^{32}$Mg. The $^{32}$Mg nucleus is eight protons removed from $^{40}$Ca and exhibits a low-energy quantum structure, better attributed to a deformed shape defined by the collective motion of many nucleons. Indeed, the nucleus $^{34}$Si, which has just two protons more than $^{32}$Mg, has a low-energy structure more consistent with $^{40}$Ca and a quasispherical shape.
Transitions from spherical to deformed nuclei occur across the nuclear landscape. These transitions are often sudden, as in the case described above for the neutron-rich nuclei having 20 neutrons. However, an intermediate condition of shape coexistence is also possible. Shape coexistence is denoted by a near degeneracy of different shapes. The nucleus is an important and unique laboratory to examine the quantum underpinnings of shape coexistence, since the nucleus can exhibit structures associated with both single-nucleon and collective motion. One sign of shapes coexisting at low energy in atomic nuclei is an increased energy level density, which happens simply from the need to accommodate distinct structures at nearly the same nuclear excitation energy.
Another observable is the presence of nuclear levels with unique parity. The major shells in the energy spectrum derived from the shell model have a clustering of orbits with like parity. The reduction of the shell energy gaps and onset of deformation can bring orbits with opposite parity into close proximity. Nuclear shape coexistence can also lead to nuclear isomerism, where photon emission from a given state is hindered due to decay energy or incompatibility with potential final states. The systematic variation of level structures as a function of the neutron-to-proton ratio can reveal shape coexistence. However, detailed measurements of the properties of the low-energy nuclear states are needed to disentangle and distinguish the coexisting shapes.
The magnetic dipole moment is an important nuclear property that can be used to identify and characterize shape coexisting structures. The magnetic moment is sensitive to the orbital and spin contributions to the nuclear state. Therefore, a measurement of the magnetic moment indicates the shell model character of the state. The sign of the magnetic moment on its own provides critical insight into the composition of a nuclear state, because of the sign difference between the free proton ($+2.79$ nuclear magnetons) and free neutron ($-1.91$ nuclear magnetons) magnetic moments. Deviations in magnetic moment values from shell model expectations usually indicate the mixing of many shell model orbits, and the onset of deformation. The magnetic moment is an excellent probe of both collective and single-particle nuclear motion. In practice, magnetic moments are deduced from a direct measurement of the nuclear $g$ factor, which is the ratio of the magnetic moment and the nuclear spin. The measurement of $g$ factors in a nucleus that exhibits shape coexistence is one method to isolate competing shapes.
Erosion of the shell gap at neutron number 20 in neutron-rich $^{32}$Mg has been well documented, as discussed above. The robustness of the shell gap at neutron number 28 in neutron-rich nuclei, however, is under debate. The low-energy structure of $^{48}$Ca exhibits features of a shell closure at neutron number 28 and spherical shape for this neutron-rich, stable nuclide. Disparate views abound on the nature of the neutron number 28 shell closure at $^{42}$Si, which is six protons removed from $^{48}$Ca. The beta-decay half-life [2] and low-energy-level structure of $^{42}$Si [3] suggest that this nuclide has a well-deformed ground state, and therefore that the shell gap at neutron number 28 is dissolved. However, a substantial energy gap between shell model orbits at atomic number 14, which would strongly hinder development of collective features along neutron number 28, has been proposed for $^{42}$Si [4]. Examination of nuclei in the transition region between $^{48}$Ca and $^{42}$Si may offer a means to resolve the outstanding questions regarding the evolution of the neutron number 28 shell gap. For example, an isomeric state was identified at energy $\sim 320$ keV in $^{43}$S [5], and cited as possible evidence for the coexistence of spherical and deformed structures near neutron number 28.
Gaudefroy et al. [1] have deduced both the sign and magnitude of the $g$ factor for the known isomeric state in $^{43}$S to be $-0.317 \pm 0.004$. The measurement was completed using the Time Dependent Perturbed Angular Distribution (TDPAD) method, where the spin precession of a nuclear state in a static magnetic field is monitored via photon emission. The emitted photon has a fixed angular momentum and orientation, leading to a distinct emission pattern. In a TDPAD measurement, the photon intensity oscillates with time, similar to the observation of the beacon of a lighthouse at a remote, fixed position. The oscillation frequency and phase of the photon intensity are related to the magnitude and sign of the $g$ factor, respectively. Agreement between shell model theory and experiment for the $g$ factor suggests that the isomeric state has quasispherical shape.
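A quick back-of-the-envelope step (my own arithmetic, using the $7/2^-$ spin assignment of the isomer discussed in the next paragraph): since the $g$ factor is the magnetic moment divided by the spin, the measured value corresponds to $$\mu = g\,I\,\mu_N = -0.317 \times \tfrac{7}{2}\,\mu_N \approx -1.11\,\mu_N,$$ a negative moment, as expected for a state whose structure is dominated by neutrons (compare the sign of the free-neutron moment quoted above).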
An additional outcome of the work by Gaudefroy et al. is a more precise measurement of the half-life of $415 \pm 5$ ns for the isomeric state in $^{43}$S. The high precision of the half-life defines the characteristics of the hindered photon transition between the isomer and ground states, restricting the spin and parity of the ground state to $3/2^-$. Neutron excitations across the neutron number 28 energy gap explain the structure of the $^{43}$S ground state, and are only plausible if this shell gap is reduced and nuclear deformation is present (Fig. 1). The second $7/2^-$ level at $\sim 940$ keV already known in $^{43}$S depopulates rapidly to the deformed $3/2^-$ ground state [6], in contrast to the slow transition rate associated with the decay of the spherical $7/2^-$ isomeric state. The new results for the $g$ factor and half-life of the isomeric state at $\sim 320$ keV in $^{43}$S provide direct evidence on the presence of coexisting and distinct spherical and deformed structures in $^{43}$S and the collapse of the shell gap at neutron number 28 for neutron-rich nuclei.
The presence of both a spherical isomeric state and a deformed band structure at low energy in $^{43}$S indicates that the neutron number 28 shell gap is significantly reduced in neutron-rich nuclei. Erosion of the shell gaps at both neutron numbers 20 and 28 in neutron-rich nuclei does not support the naive expectation that the proton and neutron magic numbers established in stable nuclei extend to the limits of the nuclear chart. The evolution of the shell gaps far from the stable isotopes is critical to the development of nuclear structure theories, such as the shell model, that rely on empirical data to account for shortcomings in the nuclear Hamiltonian via effective interactions. The disappearance of shell gaps also impacts modeling efforts in nuclear astrophysics. The properties of neutron-rich nuclei are critical input parameters to simulate mass processing along the astrophysical rapid neutron capture pathway, yet are governed by the location of shell gaps at neutron numbers 50 and 126.
Studying the evolution of the shell gaps above neutron number 28, however, will be extremely challenging. First signs for the collapse of the shell gaps at neutron numbers 20 and 28 were seen in nuclei with neutron-to-proton numbers exceeding 1.6:1. Access to nuclei with such extreme ratios was only possible with advances in isotope production and selectivity, along with the development of highly sensitive experimental detection systems. Strategic investments in advanced rare isotope production facilities, like the proposed Facility for Rare Isotope Beams in the United States, will provide the necessary tools to explore shell evolution at the extremes of the nuclear chart and develop a comprehensive and predictive theory of nuclear structure.
### References
1. L. Gaudefroy et al., Phys. Rev. Lett. 102, 092501 (2009).
2. S. Grévy et al., Phys. Lett. B 594, 252 (2004).
3. B. Bastin et al., Phys. Rev. Lett. 99, 022503 (2007).
4. J. Fridmann et al., Nature 435, 922 (2005).
5. F. Sarazin et al., Phys. Rev. Lett. 84, 5062 (2000).
6. R. W. Ibbotson et al., Phys. Rev. C 59, 642 (1999).
### About the Author: Paul Mantica
Paul Mantica is a Professor at Michigan State University with appointments in the Department of Chemistry and National Superconducting Cyclotron Laboratory (NSCL). He received his Ph.D. in Nuclear Chemistry in 1990 from the University of Maryland. Mantica has been studying the ground state properties of rare isotopes for more than 20 years, and his present research activities involving measuring beta-decay properties and nuclear moments of short-lived radionuclides. He is presently Department Head of the Experimental Nuclear Science Group at NSCL, and serves as the National Director of the Nuclear and Radiochemistry Summer Schools, which are sponsored by the American Chemical Society and the U.S. Department of Energy.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 48, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.91947340965271, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/tagged/absolute-galois-group
|
## Tagged Questions
3answers
908 views
### Subgroups of GL(2,q)
I have been wondering about something for a while now, and the simplest incarnation of it is the following question: Find a finite group that is not a subgroup of any $GL_2(q)$ …
2answers
475 views
### Is it known if the absolute Galois group is “divisible”?
The definitions of a divisible group that I have seen all seem to assume abelian is an a priori property of the group. My question is as to whether or not it is known that--given a …
3answers
378 views
### Absolute Galois group of the field of Puiseux series over $\overline{\mathbb{F}}_p$
Let $K$ be the field of Puiseux series with coefficients in $\overline{\mathbb{F}}_p$ (the algebraic closure of the field with $p$ elements). What is the absolute Galois group of \$ …
0answers
167 views
### Extending systems of l-adic representations to other l
I'm asking this not because I have an idea how one might approach it, but because it seems natural and inherently interesting. Let $K$ be a number field, $G_K$ its absolute Galois …
0answers
215 views
### Haar measure on Galois groups
Galois groups are nice compact Hausdorff groups, and therefore possess a bounded Haar measure, unique if we insist that the total volume be $1$. What is the Haar measure on the abs …
2answers
204 views
### Place stabilizers for the absolute Galois Group
Fix an algebraic closure, $\overline{\mathbb{Q}}$ for the rationals and consider the set, $B_p$, of all places of $\overline{\mathbb{Q}}$ over a fixed (possibly infinite) prime, \$p …
9answers
4k views
### “Understanding” Gal(\bar Q/Q)
I have heard people say that a major goal of number theory is to understand the absolute Galois group of the rational numbers G = Gal(Q bar/Q). What do people mean when they say th …
9answers
3k views
### What are dessins d’enfants?
There was an observation that any algebraic curve over Q can be rationally mapped to P^1 without three points and this led Grothendieck to define a special class of these mappings, …
2answers
609 views
### non-continuous inverse Galois problem
Let $G=Gal(\bar{\mathbf{Q}}/\mathbf{Q})$ be the absolute Galois group over $\mathbf{Q}$. Q1: Is it possible to find a (necessarily non-closed) normal subgroup $K\leq G$ such that …
2answers
236 views
### Is the absolute Galois group of the field of Laurent series in positive characteristic finitely generated?
If $K$ is an algebraically closed field of characteristic $p>0$, then $K((t))$, the field of Laurent series with coefficients in $K$, has infinitely many Galois extensions of degre …
3answers
1k views
### On what kind of objects do the Galois groups act?
I am neither number theorist nor algebraic geometer. I am wondering whether Galois groups of number fields (say the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$) a …
1answer
717 views
### What does Gal(Q_p/Q) mean? [closed]
What does $\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q})$ mean? ($p$ is a prime number.) If it is defined as $\mathrm{Aut}(\overline{\mathbb{Q}}_{p}/\mathbb{Q})$, then does …
1answer
596 views
### Is the etale fundamental group of Spec(Z)\{p_1,…,p_n} finitely presented?
(of course not, it's usually uncountable; I really mean is it the profinite completion of a finitely presented group). By definition, \$\pi_1^{\operatorname{et}}(\operatorname{Spec …
1answer
1k views
### Are class numbers encoded in the absolute Galois group of ${\mathbb Q}$?
The absolute Galois group $G_{\mathbb Q}=\text{Gal}(\bar{\mathbb Q}/\mathbb Q)$, as a profinite group, encodes a lot of things: the whole lattice of number fields (closed subgroups …
5answers
2k views
### Element in the absolute Galois group of the rationals
Usually when people talk on the absolute Galois group Gℚ of ℚ they have in mind two elements they can describe explicitly, namely the identity and complex conjugation ( …
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8918816447257996, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/119744/completion-time-of-a-process-on-a-tree/119790
|
## Completion time of a process on a tree
Given is a constant-degree rooted tree of depth $D$. It is also known that the total number of nodes in the tree is at most $D^2$. There is a probabilistic process with discrete time steps on the nodes that works as follows:
1. A node becomes eligible to participate in the process when all of its children have succeeded.
2. Once eligible, the node succeeds in each time step with i.i.d. probability $\frac12$. Once successful, it is "done", i.e. stays succeeded.
I would like a bound on how many steps it requires for all nodes to succeed with probability $1 - \delta$ where $\delta$ is potentially much smaller than $\frac1{D^2}$.
I can see a bound of $O((D + \log{\frac{1}{\delta}})\log D)$ as outlined below. I was wondering if the $\log D$ is necessary?
Outline of argument for $O((D + \log{\frac{1}{\delta}})\log D)$: Consider the number of steps required for the depth of the tree of unsuccessful nodes to reduce by 1. Since there are at most $D^2$ leaves at any given time, the expected time for this reduction is $O(\log D)$ steps. The situations for different depths are essentially independent (this can be written as a martingale), and then a concentration inequality would give us the required bound.
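Since the question has no code attached, here is a small Monte Carlo sketch of the process, entirely my own. The "binary broom" tree below is a hypothetical instance chosen only so that the depth is $D$, the degree is constant, and the node count stays below $D^2$ for moderate $D$; eligibility is decided at the start of each step, which is how I read the process.

```python
import math
import random

def build_broom(D):
    """A convenient constant-degree instance: a complete binary tree of depth
    b ~ log2(D) whose leaves are each extended by a path, giving total depth D
    and (for moderate D) at most D**2 nodes."""
    b = max(1, int(math.log2(D)))
    children = {0: []}            # node 0 is the root
    level = [0]
    for _ in range(b):            # complete binary tree of depth b
        nxt = []
        for v in level:
            for _ in range(2):
                u = len(children)
                children[u] = []
                children[v].append(u)
                nxt.append(u)
        level = nxt
    for v in level:               # extend every leaf by a path of length D - b
        tail = v
        for _ in range(D - b):
            u = len(children)
            children[u] = []
            children[tail].append(u)
            tail = u
    return [children[i] for i in range(len(children))]

def completion_time(children, p=0.5):
    """Steps until all nodes have succeeded; eligibility is fixed at the start of each step."""
    done = [False] * len(children)
    steps = 0
    while not all(done):
        steps += 1
        eligible = [v for v in range(len(children))
                    if not done[v] and all(done[c] for c in children[v])]
        for v in eligible:
            if random.random() < p:
                done[v] = True
    return steps

if __name__ == "__main__":
    random.seed(0)
    D = 12
    tree = build_broom(D)
    samples = sorted(completion_time(tree) for _ in range(500))
    print("median:", samples[len(samples) // 2],
          "95th percentile:", samples[int(0.95 * len(samples))])
```

Plotting the empirical tail against $D$ is a cheap way to guess whether the $\log D$ factor is really there before trying to prove anything.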
-
## 1 Answer
It seems the $\log D$ is unnecessary. We can model the process as follows:
Let $v_1, \dots, v_j$ be the initial leaves of the tree, and let $p_1, \dots, p_j$ be the paths emanating from the leaves to the root vertex. At each step we perform the following:
-Delete the initial vertex of each $p_i$ independently with probability $1/2$.
-If any $p_j$ is a proper subset of some other $p_k$, delete it. If there are two or more identical paths, delete all but one.
Note that by construction after each step the leading vertex of $p_j$ is eligible (if it were not eligible, $p_j$ would be a proper subset of the path containing its eligible child) and the leading vertices are distinct. Furthermore, every unsuccessful vertex is in at least one path until it becomes successful. So the tree becoming successful is equivalent to all the paths either being deleted or becoming empty.
Now consider an alternative process where we do not perform the second step at all, always just reducing each list by $1$ with probability $1/2$ at each step (now a vertex may be deleted from one list but stay on other lists). This alternative process always takes at least as long to finish as the original process. But now it's easy to analyze. Each path has length at most $D$ initially, so the probability it lasts beyond $2D+c\sqrt{D}$ steps decays exponentially in $c^2$. There's at most $D^2$ paths initially, so if we take $c$ large enough to make this probability smaller than $\frac{\delta}{D^2}$, we'll have the desired probability.
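To spell out the tail estimate being used here (my own hedged sketch via Hoeffding's inequality): a single path of length at most $D$ is still nonempty after $t$ steps only if fewer than $D$ of its first $t$ coin flips were deletions, so $$\Pr[\text{path survives } t \text{ steps}] \le \Pr\bigl[\mathrm{Bin}(t,\tfrac12) < D\bigr] \le \exp\Bigl(-\frac{2(t/2-D)^2}{t}\Bigr) \qquad (t > 2D),$$ and for $t = 2D + c\sqrt{D}$ with $c \le \sqrt{D}$ the exponent is at most $-c^2/6$. A union bound over the at most $D^2$ initial paths then gives the claimed dependence on $\delta$.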
I believe this should get you something like $2D+O(D^{1/2} \log(D/\delta))$, which is pretty much the best you can hope for (consider a tree consisting of $D$ disjoint paths of length $D$ from the root).
-
sounds perfect! let me just ruminate on this for a bit more, and I'll accept the answer. Thanks! – Pradipta Jan 26 at 12:55
Actually the tree cannot consist of D disjoint paths of length D from the root since it is a constant-degree tree (assuming D is large). I wonder if that improves the running time further. – Pradipta Jan 28 at 16:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650354981422424, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/183924/undergraduate-approach-to-a-problem-about-convex-functions/183934
|
# Undergraduate approach to a problem about convex functions
I am preparing some sheets of exercises that I'll assign to my undergraduate students in biology (sophomore class, or first academic year in Italian universities). This is the problem:
Exercise. Let $f \colon [a,b] \to \mathbb{R}$ be a continuous and convex function. If $f(a)f(b)<0$, prove that $f$ has exactly one zero.
The solution is essentially clear from the graph of $f$, but I wish they could supply a more rigorous proof. According to your experience, is this problem too hard for this kind of student? Should I be satisfied with a "graphical" answer? Apart from the geometric and analytic definition of convexity, what properties of convex functions should they know, to solve this problem rigorously?
-
I think they should be able to translate the picture into a statement. In particular, it is almost trivial to show that if $f$ has two zeros, then $f$ cannot be convex (under the conditions stated). The picture makes this obvious both graphically and analytically (in my opinion). They need only know the usual definition of a convex function (ie, $f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y)$). – copper.hat Aug 18 '12 at 8:58
Well. From recent experience with non-math majors I can predict that many students will have problems to see that they have to show that if there are two zeroes, then the function is not convex. This kind of thing seems trivial to people experienced in proving things but apparently it is not. I look at the problem and think "it's clear that there is at least one zero. Why can't there be more than one". These little steps of clarifying the situation seem to be very difficult if you are not used to doing this. – Stefan Geschke Aug 18 '12 at 9:15
It is very hard to accept one particular answer, since they reflect a personal viewpoint. I strongly appreciate your contributions, though they tend to underrate the level of students. My course is more than basic calculus, since I teach theorems and proofs. My students learn the theorem about zeroes of continuous functions, but I am sketchy on the theory of convex functions. – Siminore Aug 18 '12 at 11:07
Perhaps give them a hint as in what happens if you take $f(a)<0, f(b) >0$ and two points in between with $f(x) = f(y) = 0$? Have them draw a picture. – copper.hat Aug 18 '12 at 17:46
## 2 Answers
Yes, this problem is almost certainly too hard. In my experience you can be happy if the students get any graphical intuition at all (for problems where functions are involved that are not given by an explicit formula). You won't get a rigorous proof. When I taught calculus (mostly engineering students) the only proofs that some of the students were ready to give were proofs that followed exactly the same pattern as other proofs done in class before. I would think that already decoding $f(a)f(b)<0$ as "one is $>0$, the other is $<0$" is a challenge. This is usually different with math majors, but students with other majors often have difficulties with these things, in my experience.
What would be needed to give a rigorous proof? Certainly the intermediate value theorem (to get at least one $0$). Everything else could be done using just the definition of convexity.
-
The only property of convex functions needed is that the graph of the function cannot lie above the chord between any two points on the graph, which is basically the definition of a convex function. Nevertheless, there are many potential stumbling blocks here, each of which might be expected to take its toll among the students.
The condition $f(a)f(b)<0$ has to be interpreted as saying that the function values at the two endpoints have different signs, and the problem has to be divided into two cases. To show that the function cannot have more than one zero in the interval $[a,b]$, one has to assume the existence of two zeros, put names on them, such as $x_1$ and $x_2$, and use these points in the argument. My experience is that students are often insecure about such matters. "Are we allowed to do that?" is a typical question. This is particularly true of students who do not major in mathematics, and for whom mathematics is often just a toolbox of techniques to solve certain kinds of problems that one has to deal with.
Another potential problem is that for many students, convexity means no more or less than $f''(x)\geq 0$ for all $x$. What to do when there isn't any expression to differentiate?
If you are afraid that your problem will be too difficult for your students, it is possible to make it easier by explicitly saying that $f(a)<0$ and $f(b)>0$, and/or giving a hint, such as "For showing that the function cannot have more than one zero, use the definition of a convex function (on three suitably chosen points)".
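For completeness, here is one way the suggested three-point argument can be written out; this is my sketch, not part of the answer above. Existence of a zero is just the intermediate value theorem. For uniqueness, suppose $f(a) < 0 < f(b)$ (the other case is symmetric) and that $f(x_1) = f(x_2) = 0$ with $x_1 < x_2$ in $[a,b]$. Since $f(a) < 0 \neq f(x_1)$ we have $a < x_1$, so $x_1 = \lambda a + (1-\lambda)x_2$ for some $\lambda \in (0,1)$, and convexity gives $$0 = f(x_1) \le \lambda f(a) + (1-\lambda) f(x_2) = \lambda f(a) < 0,$$ a contradiction.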
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9706181883811951, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/12283/if-you-place-a-spring-on-a-neodymium-hard-drive-magnet-it-appears-to-vibrate-in
|
# If you place a spring on a neodymium hard-drive magnet, it appears to vibrate in slow-motion. Why is that so?
By chance (playing around really) I saw that a spring (mainly from a pen) placed on a neodymium hard-disk magnet (and then flicked by your finger at the top) makes a nice effect (see YouTube video). It appears to oscillate in slow motion (it looks like a tornado).
Of course, "slow-motion" is purposely simplistic and unscientific - I am very far from a physicist.
I was too impatient in the video though, I should have zoomed in on the spring and waited. Sorry about that..
Here's a page about the magnets used: http://www.reuk.co.uk/Hard-Disk-Drive-Magnets-For-Wind-Turbines.htm
Here are the polarities, plus a horizontal profile below:
More details: You really want to use a retractable pen spring, the thin kind. And Hard-drive magnets are key - I think it doesn't work with others. I think it's partly because of the 4-poles of a neodymium magnet. i.e, it's actually two-magnets-in-one. Cigarette lighters also have a long delicate magnet, which is good but too tipsy.
LBNL, supposedly you can stack these magnets, but they seem impossible to separate from the backing-piece. I appreciate any tips or advice.
-
If you could work in a link to a video, that would make a fantastic addition to this question. – David Zaslavsky♦ Jul 14 '11 at 22:45
i took a few of my neodymium magnets and an assortment of springs and tried to duplicate this with no success, so i'm really interested in seeing a video. – Jay Kominek Jul 15 '11 at 4:40
@Jay I assume that Adel used a spring with some windings on the ends touching each other. These electrically short-circuited windings could produce that effect, a kind of Waltenhof pendulum. – Georg Jul 15 '11 at 9:37
The video posted doesn't really show that the spring has sustained vibrations. Any spring would vibrate if one end is fixed, the other free and it is flicked at the free end. – pongapundit Jul 15 '11 at 17:33
pongapundit is right, I don't see the spring doing anything unusual that wouldn't happen if, say, you glued one end to the table. Except that the spring's equilibrium configuration is curved, rather than straight up, although that would be a matter for a different question. – David Zaslavsky♦ Jul 15 '11 at 17:59
## 3 Answers
First of all, I am not an expert on magnetism, so this is more of an additional question than an answer (I cannot add pictures to comments, so that's why it's here).
• In the case of ferrous materials, they generate a magnetic field inside the material (ok?).
• Opposite signs attract each other (right?).
• The position of the spring happens to be a local minimum of potential energy by a symmetry argument (or you can actually calculate this).
• All the other phenomena are just corrections to the above (?).
If all of the above are summed together, the spring is just oscillating around a local potential-energy minimum because of the magnetic field, not because of the spring properties. This is also why the coin oscillates the same way.
Anyway, could you comment on this? I would like to know where I went wrong (if anywhere).
-
This means that if you put the spring anywhere other than in the middle of the magnet, it won't work. Also, tilting the spring tilts the magnetism in the iron, which costs energy. Reorientation of magnetism in iron isn't cheap energy-wise. Right? – Juha Mar 31 '12 at 9:38
Thank You Very Much Juha! I will study this as I still need to learn more physics! – Adel Apr 1 '12 at 21:33
I'm thinking (assuming the effect is real) that it might be that magnetic effects have lowered the effective spring constant. The total energy in the magnetic field will be different with a long spring than with a short one, because the magnetic permeability of the spring metal is much higher than that of air/vacuum. So if one calculates the total system potential energy, magnetic plus internal energy of the spring metal, then one could calculate the mode frequencies (assuming the mode periods are long compared to the time needed to establish an equilibrium field after a change in spring geometry). The only problem I'm having is I think a longer spring probably means more magnetic field energy, i.e. maybe we would expect the frequency to increase, not decrease.
-
The comments section has already pointed out that eddy currents induced in the spring and acted upon by the magnetic field give rise to a Waltenhof pendulum type of effect. There is also a more dominant and basic effect, which is that the base of the spring isn't fixed and so effectively you have two springs in series. For a spring with spring constant $k$ oscillating longitudinally with a mass $m$ on one end, $$\omega_0 = \sqrt{\frac k m}.$$ For two springs connected in series, $$\omega_0 = \sqrt{\frac{k_1k_2}{(k_1+k_2)m}}.$$ Vibrating the spring transversely means the expressions for the frequency of oscillation are far more complicated, but the qualitative explanation you're looking for is the same - there is an effective spring connected in series.
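As a quick numerical illustration (my own numbers, not the answerer's): if the magnetic anchoring of the lower turns acts like a second spring with the same stiffness as the spring itself, $k_1 = k_2 = k$, the series formula gives $$\omega_0 = \sqrt{\frac{k \cdot k}{(k+k)m}} = \sqrt{\frac{k}{2m}},$$ i.e. the frequency drops by a factor of $\sqrt{2} \approx 1.4$ relative to a rigidly clamped spring, and a softer effective anchoring lowers it further - at least qualitatively the "slow-motion" impression.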
-
Where would the eddy currents circulate? The coil is open circuit, not a closed loop. – endolith Nov 18 '11 at 5:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439516663551331, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/volume+work
|
# Tagged Questions
2answers
43 views
### How is it possible to equate the internal energy at constant volume with the internal energy of an adiabatic process?
I hope my question makes sense. My problem is that I have read in numerous textbooks that $n C_V\,dT = -P\,dV$ (where $C_V$ is the heat capacity at constant volume) when deriving the relationship between $T$ and $V$ for an adiabatic process, ...
1answer
838 views
### Thermodynamics - Sign convention
I use the sign convention: Heat absorbed by the system = $q+$ (positive) Heat evolved by the system = $q-$ (negative) Work done on the system = $w +$ (positive) Work done by the system = $w -$ ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305510520935059, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/27367?sort=newest
|
Good reference for homology of $K(\mathbb{Z}, 2n)$?
The homology algebra $H_*( K(\mathbb{Z},2n); \mathbb{Z})$ contains a divided polynomial algebra on a generator $x$ of dimension $2n$.
I suppose I could read through the Cartan seminar for a proof, but I'm hoping someone knows of a nice simple argument for this fact.
-
Jeff -- me too. Otherwise, it is year 7, exposé 11. The presentation there is very careful, but it's a bit difficult to follow unless one starts at the beginning of the year. – algori Jun 7 2010 at 17:12
@algori: Do you have a more precise reference? – Thomas Kragh Jun 8 2010 at 11:14
@Thomas, well, "Cartan seminar, year 7, exposé 11" is quite precise! – Mariano Suárez-Alvarez Jun 8 2010 at 12:33
5 Answers
We need to do three things: (1) show that the homology contains a polynomial algebra, (2) show that powers of the generator are sufficiently divisible, and (3) show that torsion doesn't interfere.
Let me repeat Hatcher's path through (1), since we need the particular generator to show that it is divisible. According to Dold-Thom (Hatcher 4.K; cf Dold-Kan¹), a model for $K(Z,m)$ is the free commutative monoid on pointed² $S^m$. According to James (Hatcher 4.J), the homology of the free associative monoid $JS^m$ on $S^m$ is a polynomial algebra on one generator in degree $m$. Thus there is a map of monoids $JS^m\to K(Z,m)$ so its map on homology is compatible with the ring structure. It is easy to see (eg, by the Serre spectral sequence) that the rational homology of $K(Z,m)$ is the signed-commutative algebra free on a generator in degree $m$, so this map is a rational isomorphism if $m=2n$.
Now for step (2), where we divide the James class by $k!$. If we restrict to words of length $k$ in these monoids, the effect of imposing commutativity is to quotient by the $k$th symmetric group. This piece of the James construction is a pseudo-manifold representing the class in degree $nk$ which is the $k$th power of the generator. The action of the symmetric group is generically free, so this cycle has multiplicity $k!$ when it lands in $K(Z,2n)$. Thus it is divisible. In other words, choose a fundamental domain for the action of the symmetric group on the $k$th filtration of the James construction. This yields a $kn$-chain which when symmetrized is the whole of the $k$th filtration, and thus a cycle. Projecting it to the quotient is an alternative way of symmetrizing, so it is a cycle for $K(Z,2n)$.
I will leave step (3) as an exercise. What this shows so far is that the integral homology hits the divided power algebra in the rational homology, but the canonical choice of the divided element should make it plausible that these divisions are the right choices, ie, $x^{(k)}x^{(l)}=\binom {k+l}k x^{(k+l)}$.
¹ If one accepts the fragment of Dold-Kan of the agreement of the two things one can do to a simplicial abelian group (1) homology of (either) associated chain complex and (2) the homotopy groups of the geometric realization; then a version of Dold-Thom follows trivially. Indeed, the very definition of simplicial homology is the free abelian group (ie, simplicial group) on the simplicial set.
2 by "the free (commutative) monoid on a pointed set," I mean that we impose on the free object the relation that the basepoint is the identity for the monoid.
-
I think this sounds like the kind of answer I was hoping for. Thanks! – Jeff Strom Jun 8 2010 at 22:38
For a reference, you might see this paper of Birgit Richter. A rough outline follows:
Since $X = K(Z,n)$ can be made a commutative topological monoid per Ben Wieland's answer, its singular chain complex $C_*(X)$ is made into a commutative and associative differential graded algebra via the commutative associative Eilenberg-Zilber shuffle product $C_*(X) \otimes C_*(X) \to C_*(X \times X)$ composed with the multiplication on $X$.
The formula for the shuffle product is slightly involved when you iterate it, but essentially: to multiply $\alpha_1 \cdots \alpha_n$ with $\alpha_i$ of degree $k_i$, you sum, over all ways to divide a set of size $\sum k_i$ into subsets of sizes $k_1, \cdots, k_n$, a product of certain degeneracy operators, depending on the subdivision, applied to the chains $\alpha_i$. (With signs.)
If all the $\alpha_i$ are equal and in positive even degree $k$, then the signs don't interfere and we are summing over all ways to divide $nk$ into $n$ equal pieces of size $k$. However, because the chains are now equal we get the same term if we simply permute the pieces of size $k$, and so each term appears in the sum at least $n!$ times. (Note that $k > 0$ is necessary here in order for these permutations to actually give different subdivisions.) As a result, the $n$-fold product chain is divisible by $n!$.
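A concrete count in the smallest even-degree case (my own illustration, not from the paper): take two copies of a chain $\alpha$ of degree $2$, so $n = 2$ and $k = 2$. There are $\binom{4}{2} = 6$ ways to split $\{1,2,3,4\}$ into two blocks of size $2$, all entering with sign $+1$ because the degree is even, and swapping the two blocks pairs up the subdivisions without changing the resulting term since the two chains are equal. Hence $\alpha \cdot \alpha$ is a sum of $3$ distinct terms, each occurring $2! = 2$ times, and is therefore divisible by $2!$.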
-
This is basically Thomas' answer except on the chain level, where you don't have to worry about interference from torsion. – Tyler Lawson Jun 8 2010 at 20:52
I really meant mine on the chain level - I write that $\alpha$ is a simplex generating (meant to say a simplex representing a generator). – Thomas Kragh Jun 9 2010 at 6:29
I am not sure if this qualifies as a simple argument, but to me it is very nice. I did not get the reference in the comments above, so sorry if this is close to that. I don't have an exact reference since this is pieced together from several places.
Let $A$ be any abelian group. Let $X=X(A)$ be the simplicial set defined by:
$X_p = \{$colorings of the $n$-faces of the standard $p$-simplex by elements of $A$ such that, when restricted to any $(n+1)$-face, the alternating sum of the colorings of its faces is $0\}$
The face and degeneracy maps are defined by restriction and pull-back (the latter introduces $0$'s when an $n$-face is degenerate).
With this, $X_p$ is a single point when $p < n$ (the single empty coloring), $X_{n}=A$, $X_{n+1}$ encodes the "relations", and for $p>n+1$ an element of $X_p$ is determined by its restrictions to the $(n+1)$-faces. I.e. it is $(n+1)$- (or $(n+2)$-?) coskeletal. This means that its geometric realization $\lvert X \rvert$ is a $K(A,n)$.
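For instance, when $n=1$ a $p$-simplex is a coloring $a_{ij}\in A$ of the edges of $\Delta^p$ with $a_{ik}=a_{ij}+a_{jk}$ on every $2$-face (that is what the alternating-sum condition says), so it is determined by the colors $a_{01},a_{12},\dots,a_{(p-1)p}$ of the consecutive edges; this recovers the nerve (bar construction) of $A$, whose realization is the familiar $K(A,1)$.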
In fact this is a simplicial group by adding the colorings, and $X(A)\times X(A)$ is isomorphic as simplicial groups to $X(A\oplus A)$ by definition.
To simplify, we now use $A=\mathbb{Z}$. To understand this product on the $\mathbb{Z}$-span of the simplices, I will assume some familiarity with the Eilenberg-Zilber operator, which appears naturally in products of simplicial sets. Using this we describe the power map
(1) $H_n(X(A),A)^{\otimes k} \to H_{kn}(X(A)\times\cdots\times X(A),A)=H_{kn}(X(A^{\oplus k}),A) \to H_{kn}(X(A),A)$
$H_n(X(A),A)$ is a single copy of $A$, generated by the $n$-simplex $\alpha$ colored by $1$. The tensor product $\alpha^{\otimes k}$, when mapped to the middle term of (1), can be written as the sum
$$\sum_{\sigma \in S(k,n)} \mathrm{sgn}(\sigma)\,\beta_{\sigma}$$
where $S(k,n)$ is the set of permutations of $\{1,\dots,kn\}$ which preserve the order of each of the $k$ sequences $\mathrm{SEQ}_i=\{in+1,\dots,(i+1)n\}$, $i=0,\dots,k-1$ (generalized shuffles), and $\beta_{\sigma}$ is the associated product of degeneracies defined by
(2) $\beta_{\sigma}=(\sigma_1^* \alpha)\times \cdots \times (\sigma_k^* \alpha)$,
where $\sigma_i$ is the order-preserving surjective map from $\{0,\dots,kn\}$ to $\{0,\dots,n\}$ which increases by one precisely at the numbers in the image of $\sigma$ restricted to $\mathrm{SEQ}_i$.
In the case of $n$ even, the permutations $\sigma$ which simply permute the sequences $\mathrm{SEQ}_i$ (without intertwining them) have sign $1$, so we may act on the $\sigma$'s in the sum by these without changing the signs. This action permutes the factors in (2), but when mapped to the last term in (1) the resulting chains are the same, so the image of the sum in (1) is divisible by $k!$.
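To see this in the smallest case (my own toy example), take $k=n=2$. There are $\binom{4}{2}=6$ shuffles in $S(2,2)$, one for each choice of the image of $\mathrm{SEQ}_0=\{1,2\}$ inside $\{1,2,3,4\}$. The block permutation swapping $\mathrm{SEQ}_0$ and $\mathrm{SEQ}_1$ is $(1\,3)(2\,4)$, which is even, and composing with it pairs the six shuffles into three orbits of size two; the two shuffles in each orbit give $\beta_\sigma$'s that differ only by exchanging the two factors in (2), hence have the same image in the last term of (1), so that image is divisible by $2!$.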
For injectivity of the product, one can check that the power map is injective on rational homology by using a Hopf-algebra argument as in Section 3.C of Hatcher's book.
-
As you say, this is nice; but it is not as simple as I was hoping for. – Jeff Strom Jun 8 2010 at 14:15
Do you want to know the integral homology?
If you are happy with homology with coefficients in $\mathbb{F}_p$, the best way to compute (and describe) the homology of Eilenberg-MacLane spaces is the technique developed in a paper by Ravenel and Wilson (MathSciNet). See also Wilson's "sampler" (MathSciNet). They used the structure called a Hopf ring (a collection of Hopf algebras equipped with an extra operation) to describe $\{H_*(K(\mathbb{Z},n);\mathbb{F}_p)\}_{n\ge 0}$ as a whole.
The point is they worked in homology instead of cohomology so that we can use $$K(\mathbb{Z},n)\times K(\mathbb{Z},m) \longrightarrow K(\mathbb{Z},m+n)$$ to "generate" $H_*(K(\mathbb{Z},n);\mathbb{F}_p)$ from $H_*(K(\mathbb{Z},m);\mathbb{F}_p)$ for $m<n$.
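For orientation, this pairing is the one representing the cup product: under the identification $[X,K(\mathbb{Z},n)]\cong H^n(X;\mathbb{Z})$ it corresponds to
$$H^n(X;\mathbb{Z})\otimes H^m(X;\mathbb{Z})\xrightarrow{\ \smile\ } H^{n+m}(X;\mathbb{Z}),$$
and on homology it induces the second ("circle") product of the Hopf ring structure on $\{H_*(K(\mathbb{Z},n);\mathbb{F}_p)\}_{n\ge 0}$.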
It turns out the Hopf ring structure is compatible with the Eilenberg-Moore spectral sequence and we obtain the Hopf algebra structure on the homology of Eilenberg-MacLane spaces easily. It is also important that their technique works for generalized homology theories. In fact, their motivation was to compute the Morava $K$-theory of Eilenberg-MacLane spaces.
-
I really do want integral homology. – Jeff Strom Jun 8 2010 at 14:15
Nice question. Unfortunately the answer I posted an hour ago is wrong because I switched the structures of homology and cohomology for the James reduced product. It is the cohomology that is a divided polynomial ring while the homology is a polynomial ring.
-
Yes, that has tantalized me for some time. – Jeff Strom Jun 8 2010 at 16:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 99, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9376636147499084, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/72913/centerlessness-of-reduced-free-group
|
Centerlessness of reduced free group
Let $F_n$ be the free group on $n$ generators, where $n$ is an integer greater than 1. Let $RF_n$ be the reduced free group, defined to be the quotient of $F_n$ obtained by adding the relations that each generator of $F_n$ commutes with all of its conjugates.
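In other words (an explicit presentation, just restating the definition):
$$RF_n=\big\langle\, a_1,\dots,a_n \;\big|\; [a_i,\,w\,a_i\,w^{-1}]=1 \ \text{for all } i \text{ and all } w\in F_n \,\big\rangle.$$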
Can anyone help to prove or disprove that $RF_n$ is centerless, that is, that the center of $RF_n$ is trivial?
Your help will be much appreciated.
-
1 Answer
The group has a nontrivial center. For example, if $F_2=\langle a,b\rangle$, then $[a,b]$ is in the center of the factor-group. Indeed, $[a,b]=a\cdot (a^{-1})^b$, so it is a product of conjugates of $a$ and commutes with $a$ in the factor-group. On the other hand, $[a,b]=b^ab^{-1}$ is a product of conjugates of $b$, so it commutes with $b$ in the factor-group. Thus $[a,b]$ commutes with $a$ and $b$, and hence is in the center of the factor-group.

This example can be generalized to any rank $n$. If $F_n=\langle a_1,...,a_n\rangle$ and we impose the relations that $a_i$ commutes with all its conjugates, then in the factor-group $a_i$ commutes with all products of its conjugates and their inverses, hence $a_i$ commutes with every element of the normal subgroup $N_i=a_i^{F_n}$ generated by $a_i$. Now take any element $w$ in the intersection of all $N_i$ (that intersection is obviously non-trivial). Any such $w$ belongs to the center of the factor-group, and one can easily find such a $w$ which is not $1$ in the factor.
-
Thank you Mark for your answer!! Then can we conclude that the center of RF_n is the intersection of all N_i, as is defined in your answer? – Zuriel Aug 15 2011 at 9:21
@Zuriel: I am not sure that the center is the intersection of all $N_i$ (for all $n$). It looks so, but I do not know a proof at the moment. For $n=2$ it is true because the factor-group is just free nilpotent of rank 2 and class 2. – Mark Sapir Aug 15 2011 at 9:40
@Mark Sapir: as you mentioned in your previous answer, [a, b] is in the center of RF_2 and thus the group has a center. But do we also need to prove [a, b] is not equal to the identity element of the group? I know that it should be true; but how to prove it? Thank you again for your answer. – Zuriel Aug 15 2011 at 10:15
If $[a,b]=1$ in the factor-group, then the factor-group is commutative. But the Heisenberg group $\langle a,b \mid [[a,b],a]=[[a,b],b]=1\rangle$ satisfies your relations and is not commutative. – Mark Sapir Aug 15 2011 at 11:30
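A short computational check of that last comment; the $3\times 3$ matrices below are my own choice of generators for the integer Heisenberg group (realized as upper unitriangular matrices), and sympy is used only for exact matrix arithmetic.

```python
# Sanity check (my own, with my own choice of matrices): realize the integer
# Heisenberg group as 3x3 upper unitriangular matrices and verify that it
# satisfies the RF_2 relations and that [a,b] is central and nontrivial.
from sympy import Matrix, eye

a = Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
b = Matrix([[1, 0, 0], [0, 1, 1], [0, 0, 1]])

def comm(x, y):
    """Group commutator [x, y] = x*y*x^(-1)*y^(-1)."""
    return x * y * x.inv() * y.inv()

c = comm(a, b)
assert c != eye(3)            # [a, b] is not the identity ...
assert comm(c, a) == eye(3)   # ... but it commutes with a
assert comm(c, b) == eye(3)   # ... and with b, so it is central here

# Each generator commutes with all of its conjugates: a conjugate g*a*g^(-1)
# differs from a by a central element.  Spot-check a few conjugating elements:
for g in (b, a * b, b * a * b):
    assert comm(g * a * g.inv(), a) == eye(3)
    assert comm(g * b * g.inv(), b) == eye(3)

print("[a,b] is nontrivial and central; the RF_2 relations hold for these generators.")
```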
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349908232688904, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/98615?sort=oldest
|
## Affine “real algebraic geometry” of hyperbolic space?
Real algebraic geometry, at least to start with, traditionally studies the zero-sets of real polynomials in a given set of variables. But treating, say, the Euclidean plane as an uncoordinatized metric space, one may still consider the ring of functions generated by all functions $D_p(\cdot)=d(\cdot,p)^2$, and also constant functions. Then one has really lost nothing, because (bring back coordinates $x,y$ just for the moment), $x=(D_{(0,0)}-D_{(1,0)} +1)/2$ and $y=(D_{(0,0)}-D_{(0,1)} +1)/2$.
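To spell out the check for $x$: $D_{(0,0)}=x^2+y^2$ and $D_{(1,0)}=(x-1)^2+y^2=x^2+y^2-2x+1$, so
$$\frac{D_{(0,0)}-D_{(1,0)}+1}{2}=\frac{(2x-1)+1}{2}=x,$$
and the formula for $y$ is verified the same way.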
Since the ring definition here makes sense over any metric space, the possibility of a generalized real algebraic geometry arises.
Questions: Does this line of thought occur in the literature? Independently of the geometric optic, does this sort of ring arise anywhere in the literature (other than in the motivating example)? Do complexification and projectivization have well-behaved analogues in this context?
I'm particularly interested in the properties of "real algebraic curves" in the hyperbolic plane. I have narrower questions, but I'll save them until I see response to this.
-
It seems that for "most" metric spaces (and hyperbolic space is one of the "most") distance functions (or their squares) will span an infinite-dimensional real vector space (most likely, of uncountable dimension). Thus, doing algebraic geometry in this setting will be somewhat difficult. Note that the map $i: x\mapsto d_x$, where $d_x(y)=d(x,y)$, determines an isometric embedding of any metric space $X$ to $L_\infty(X)$, so you should not expect image of $i$ to be contained in a finite-dimensional affine subspace. On the other hand, hyperbolic plane is a 1-sheeted hyperboloid in $R^3$... – Misha Jun 2 at 5:03
... so you have a natural supply of linear functions. For instance, if $x, y$ belong to the upper sheet of the hyperboloid, then their Lorentzian inner product $x\cdot y$ equals $-\cosh d(x,y)$, where $d$ is the hyperbolic distance. Thus, provided you replace your square with $-\cosh$, you get linear functions $-\cosh d(x,\cdot)$ on ${\mathbb H}^2$. But then you just get "boring" traditional algebraic geometry. – Misha Jun 2 at 5:08
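A one-line instance of that formula (with the Lorentzian form normalized as $x\cdot y=x_1y_1+x_2y_2-x_3y_3$): the points $x=(\sinh t,\,0,\,\cosh t)$ and $y=(0,0,1)$ on the upper sheet are at hyperbolic distance $t$, and indeed $x\cdot y=-\cosh t=-\cosh d(x,y)$.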
@Misha: Your comments are really an answer. I think that you should rewrite them as an answer. – André Henriques Jun 17 at 19:54
I suppose that I don't understand "Thus, doing algebraic geometry in this setting will be somewhat difficult." Why not "different" instead of "difficult"? (Of course I always find mathematics difficult.) Relations make algebraic objects smaller, but not necessarily more approachable. – David Feldman Jun 17 at 22:54
## 1 Answer
Upon Andre's request, I am rewriting my comments as an answer, even though I am on a somewhat shaky ground since my experience with infinite-dimensional algebraic geometry is very limited.
It seems that for "most" metric spaces (and hyperbolic space is one of the "most") distance functions (or their squares) will span an infinite-dimensional real vector space. I checked this in the case of distance functions on the hyperbolic space (of dimension $\ge 2$, of course) and the same should hold for the distances squared. The point is that for every metric space $X$, the map $i=i_X: x\mapsto d_x$, where $d_x: X\to {\mathbb R}$ is the function $d_x(y)=d(x,y)$, determines an isometric embedding from $X$ to a linear subspace $V$ in $C(X)$ (space of continuous functions on $X$) with the sup-norm on $V$, where $V$ is the linear span of the image of $i$. On the other hand, hyperbolic space (of dimension $\ge 2$) does not embed isometrically in any finite-dimensional Banach space. Thus, doing algebraic geometry on the hyperbolic space becomes (in my mind) a somewhat daunting task since you have to consider the ring of polynomial functions ${\mathbb R}[V]$ on the infinite-dimensional vector space $V$. In particular, you loose the Noetherian property which makes life difficult. The only context where I have seen infinite-dimensional algebraic geometry is the ind-schemes. Ind-schemes appear naturally when one works with, say, affine Grassmannians and which I had to do exactly once in my life (I mean, thinking of affine buildings in algebro-geometric terms). As far as I can tell, dealing with the ind-scheme based on the ${\mathbb R}[V]$ for the hyperbolic space, would amount to considering geometry of finite configurations of points in ${\mathbb H}^n$ (and, occasionally, lines). I could be mistaken, but for the union $Y$ of two distinct geodesic segments in the hyperbolic space, linear span of the image of $i_Y$ (where we use the restriction of the distance function from ${\mathbb H}^n$) will be infinite-dimensional, so you are not allowed to use more than one geodesic. While geometry of finite subsets of hyperbolic spaces has some uses (see the discussion of the sets $K_m$ below), it strikes me (I am a hyperbolic geometer) as mostly boring. (Maybe logicians can add something interesting here since considering finite subsets in ${\mathbb H}^n$ we are dealing with the elementary theory of the hyperbolic space.)
Gromov (see e.g. his book "Metric Structures for Riemannian and Non-Riemannian Spaces") introduces, for every metric space $X$, the collection of subsets $K_m(X)\subset {\mathbb R}^N$, $N=\frac{m(m-1)}{2}$. The set $K_m(X)$ consists of $N$-tuples of pairwise distances between various $m$-tuples of points in $X$. Then $K_3(X)$ (under some mild assumptions on $X$, e.g., $X$ is an unbounded geodesic metric space) is defined by triangle inequalities and nothing else. The set $K_4(X)$ is quite interesting, since all the "curvature" conditions on metric spaces are defined via quadruples of points. However, Gromov could not come up with any interesting uses for $K_m(X), m\ge 5$, and I do not know what to make of these sets either. Thus, the ind-scheme approach to the algebraic geometry of the hyperbolic space might yield something geometrically interesting for quadruples of points, beyond which algebraic geometry is likely to get disconnected from (geo)metric geometry.
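To make the sets $K_4$ slightly more tangible, here is a small sampling sketch in Python (entirely my own illustration; the parametrization, sample size, and cut-off radius are arbitrary choices, and the sampling is not uniform for hyperbolic area): it draws quadruples of points in the hyperboloid model of ${\mathbb H}^2$ and records the six pairwise distances, i.e. sample points of $K_4({\mathbb H}^2)\subset {\mathbb R}^6$.

```python
# A rough sampler (my own illustration) for K_4 of the hyperbolic plane:
# draw quadruples of points in the hyperboloid model and record the six
# pairwise distances, giving points of K_4(H^2) inside R^6.
import math
import random

def random_point(max_r=3.0):
    """A point on the upper sheet {x1^2 + x2^2 - x3^2 = -1, x3 > 0}."""
    r = random.uniform(0.0, max_r)
    t = random.uniform(0.0, 2.0 * math.pi)
    return (math.sinh(r) * math.cos(t), math.sinh(r) * math.sin(t), math.cosh(r))

def dist(x, y):
    """Hyperbolic distance d(x,y) = arccosh(-<x,y>) for the Lorentzian form."""
    q = x[0] * y[0] + x[1] * y[1] - x[2] * y[2]
    return math.acosh(max(1.0, -q))  # clamp guards against round-off below 1

samples = []
for _ in range(1000):
    pts = [random_point() for _ in range(4)]
    samples.append(tuple(dist(pts[i], pts[j])
                         for i in range(4) for j in range(i + 1, 4)))

# Sanity test: the triangle inequalities (the K_3 conditions) hold on the
# triples inside each sampled 6-tuple.
for d12, d13, d14, d23, d24, d34 in samples:
    assert d23 <= d12 + d13 + 1e-9
    assert d24 <= d12 + d14 + 1e-9
    assert d34 <= d13 + d14 + 1e-9
```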
Lastly, MO discussion at http://mathoverflow.net/questions/92755/is-there-an-algebraic-approach-to-metric-spaces/92758#92758 is related to David's question.
-
interesting, this works just as well for spheres, and even $S^1$. – Will Sawin Jun 23 at 4:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191558361053467, "perplexity_flag": "head"}
|