http://math.stackexchange.com/questions/150246/is-there-a-linear-function-that-is-not-continuous-between-two-normed-vector-sp
# Is there a linear function that is *not* continuous between two normed vector spaces?
The textbook says that this function has to be continuous at least at the origin for it to be continuous everywhere. But how is it possible that a function is already linear but somehow not continuous?
For example, $E$ and $F$ are two normed vector spaces. $f:E\rightarrow F$ is a linear function. Obviously we know that $f(0) = 0$. Now, for a non-zero vector $a$ in $E$, $f(a)$ is defined; say $b=f(a)$ for some $b\in F$. Then, however small $\epsilon$ is, as long as $\lVert x\rVert<\lVert a\rVert\frac{\epsilon}{\lVert b\rVert}$, we have $\lVert f(x)\rVert<\epsilon$. So it seems that this function is continuous at the origin without stipulating it.
I don't understand your argument. You make no use of $b$. – Qiaochu Yuan May 27 '12 at 4:23
Sorry, typos! I published it before I proofread it. Now I'm doing it. – Voldemort May 27 '12 at 4:25
Is it worth pointing out that for any linear $f: E \to F$, and any $v$ in $E$, the limit $\lim_{t \to 0} f(tv)$ will exist and be $0$? This is certainly true. A second and far more nontrivial true statement is that if $E$ is finite dimensional then any linear map from $E$ to any normed space $F$ will be continuous. Probably, some blend of these two facts is where your intuition is coming from. But this intuition does not (and cannot) lead to a proof of a general statement, as examples like Qiaochu's show. – leslie townes May 27 '12 at 4:51
## 1 Answer
It's a standard lemma that a linear operator $f : B \to C$ between two normed spaces is continuous if and only if it is bounded in the sense that the image of the unit ball in $B$ is bounded. It is easy to write down unbounded linear operators. For example, let $B = C$ be the subspace of compactly supported sequences in $\ell^1(\mathbb{Z})$ with basis $e_i, i \in \mathbb{Z}$ and consider the linear operator defined by $T(e_i) = i e_i$.
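To spell out why this $T$ is unbounded: each basis vector has unit norm, but its image does not stay bounded, $$\lVert e_i \rVert_{\ell^1} = 1 \quad \text{while} \quad \lVert T(e_i) \rVert_{\ell^1} = \lVert i\,e_i \rVert_{\ell^1} = |i| \to \infty,$$ so the image of the unit ball under $T$ is unbounded, and by the lemma $T$ is not continuous.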
It is simply false that $||x|| < ||a|| \frac{\epsilon}{||b||}$ implies $||f(x)|| < \epsilon$. (Take $f = T, a = e_1, x = \frac{\epsilon}{2} e_3$.)
@Voldemort: $\ell^1(\mathbb{Z})$ is the space of all functions $f_n : \mathbb{Z} \to \mathbb{C}$ such that $\sum |f_n|$ converges equipped with the norm $\sum |f_n|$. The compactly supported sequences in $\ell^1(\mathbb{Z})$ are the sequences with only finitely many nonzero terms; this is spanned as a vector space by the sequences $(e_i)_n = \delta_{in}$ (which are equal to $1$ if $i = n$ and equal to $0$ otherwise). – Qiaochu Yuan May 27 '12 at 4:57
http://math.stackexchange.com/questions/267563/closed-set-in-ell1
# Closed set in $\ell^1$
Show that the set $$B = \left\lbrace(x_n) \in \ell^1 : \sum_{n\geq 1} n|x_n|\leq 1\right\rbrace$$ is compact in $\ell^1$. Hint: You can use without proof the diagonalization process to conclude that every bounded sequence $(x_n)\in \ell^\infty$ has a subsequence $(x_{n_k})$ that converges in each component, that is $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all $i$. Moreover, sequences in $\ell^1$ are obviously bounded in the $\ell^1$-norm.
My try: Every bounded sequence $(x_n) \in \ell^\infty$ has a subsequence $(x_{n_k})$ that converges in each component, that is, $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all $i$. And all sequences in $\ell^1$ are bounded in the $\ell^1$-norm. I want to show that every sequence $(x_n) \in B$ has a Cauchy subsequence. Choose $N$ and $M$ such that for $l,k > M$ we have $|x_{n_k}^{(i)} - x_{n_l}^{(i)}| < \frac{1}{N^2}$. Then $$\sum_{i=1}^N |x_{n_k}^{(i)} - x_{n_l}^{(i)}| + \sum_{i = N+1} ^\infty |x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{1}{N} + \frac{1}{N+1} \sum_{i = N+1} ^\infty i|x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{3}{N+1}$$ It feels wrong to combine $M,N$ like this, is it? What can I do instead?
I don't understand the first sentence: you mean that a bounded sequence of real numbers has a convergent subsequence, right? But it will help if you can show that, when you work with sequences whose terms are in $\ell^1$, you can extract a subsequence that works for all coordinates. – Davide Giraudo Dec 30 '12 at 11:38
Thanks, I corrected it now, can you please see if it's wrong now? – Johan Dec 30 '12 at 12:09
## 2 Answers
We can use and show the following:
Let $K\subset \ell^1$. This set has a compact closure for the $\ell^1$ norm if and only if the following conditions are satisfied:
• $\sup_{x\in K}\lVert x\rVert_{\ell^1}$ is finite, and
• for all $\varepsilon>0$, we can find $N$ such that for all $x\in K$, $\sum_{k\geqslant N}|x_k|<\varepsilon$.
These conditions are equivalent to precompactness, that is, that for all $r>0$, we can find finitely many elements $x^1,\dots,x^N$ such that the balls centered at $x^j$ of radius $r$ cover $K$.
Thanks! I have not seen this theorem; can't you prove it with the hint? What is wrong with my try? – Johan Dec 30 '12 at 12:19
To prove compactness of $B$ you can use the following theorem:
A set $A \subseteq \ell_1$ is compact if and only if
1. $A$ is closed
2. $A$ is bounded
3. $\sup_{x \in A} \sum_{j \geq n} |x_j| \to 0$ as $n \to \infty$
How to prove these properties of $B$?
1. Let $X:= \{x \in \ell_1; \sum_j j \cdot |x_j|<\infty\}$. Define $$\|x\|_X := \sum_j j \cdot |x_j| \qquad (x \in X)$$ Then $(X,\|\cdot\|_X)$ is a normed space. Define $$T:X \to \ell_1, x=(x_j)_j \mapsto (j \cdot x_j)_j$$ Then $T$ is linear, surjective and isometric (in particular continuous) and therefore we conclude that $B=T^{-1}(B[0,1])$ is closed as a pre-image of a closed subset.
2. Let $x=(x_n)_n \in B$, then $$\|x\|_1 = \sum_{n \geq 1} |x_n| \leq \sum_{n \geq 1} n \cdot |x_n| \leq 1$$ where we used the definition of $B$ in the last inequality. Hence $$\sup_{x \in B} \|x\|_1 \leq 1$$ which means that $B$ is bounded.
3. Use the following estimate: For all $x=(x_n)_n \in B$ we have $$\sum_{j \geq n} |x_j| = \frac{1}{n} \sum_{j \geq n} n \cdot |x_j| \leq \frac{1}{n} \cdot \sum_{j \geq n} j \cdot |x_j| \leq \frac{1}{n}$$
I think you are mixing things up. You wrote

> Every convergent sequence $(x_n) \in \ell^\infty$ has a convergent subsequence $(x^{n_k})$. That converges component wise.

It doesn't make sense at all: If you consider a convergent sequence $(x_n)_{n} \in \ell_\infty$, then it's trivial that there exists a convergent subsequence (since the whole sequence is convergent). Probably you wanted to consider a sequence of sequences, i.e. $(x^n)_n \subseteq \ell_\infty$ (i.e. $x^n \in \ell_\infty$). Similarly for the next sentence:

> And all sequences in $\ell^1$ are bounded in $\ell^1$-norm.
If you take one (!) sequence $x:=(x_n)_{n} \in \ell^1$, then (by definition) $\|x\|_1 < \infty$. But: If you consider a sequence of sequences, i.e. $(x^n)_n \subseteq \ell_1$, then it's not trivial that the sequence (of sequences) is bounded, i.e.
$$\sup_n \|x^n\|_1 < \infty$$
You have to differentiate between
1. a sequence $x:=(x_n)_n$ which is an element of $\ell_1$, i.e. $\|x\|_1<\infty$
2. a sequence in $\ell_1$, i.e. $(x^n)_{n \in \mathbb{N}} \subseteq \ell_1$ where $x^n \in \ell_1$ for all $n \in \mathbb{N}$.
Great theorem, but I have not seen it before... Is it hard to prove? I rewrote the text, there were some typos, sorry – Johan Dec 30 '12 at 12:16
@Johan It's not that difficult to prove. When I attended functional analysis, it was an exercise left to the students. Probably you'll find this theorem (and a proof) in most books about functional analysis. (And I recognized that you rewrote the text, but I'm still of the same opinion.) – saz Dec 30 '12 at 12:38
I found one, but I cannot manage to prove 1. Can you give me a hint? – Johan Dec 30 '12 at 13:46
@Johan I added one. – saz Dec 30 '12 at 14:15
@Johan Sorry, there was a typo in the definition of $T$. – saz Dec 30 '12 at 15:15
http://mathhelpforum.com/pre-calculus/173623-finding-coordinates.html
1. ## Finding coordinates
Let F = (3,2). There is a point P on the y-axis for which the distance from P to the X-axis equals the distance PF. Find the coordinates of P.
Hints pls.
2. Originally Posted by Veronica1999
Let F = (3,2). There is a point P on the y-axis for which the distance from P to the X-axis equals the distance PF. Find the coordinates of P.
Let $P = (0,b)$; then $3^2+(b-2)^2=b^2$.
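For the record, the hint's equation solves directly: $3^2+(b-2)^2=b^2$ gives $9 - 4b + 4 = 0$, so $b = \frac{13}{4}$ and $P = \left(0, \frac{13}{4}\right)$. Check: the distance from $P$ to the $x$-axis is $\frac{13}{4}$, and $PF = \sqrt{9 + (5/4)^2} = \frac{13}{4}$.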
http://www.abstractmath.org/Word%20Press/?tag=basic
# Gyre&Gimble: posts about math, language and other things that may appear in the wabe
## Metaphors in computing science 2
2012/07/07 — SixWingedSeraph
In Metaphors in Computer Science 1, I discussed some metaphors used when thinking about various aspects of computing. This is a continuation of that post.
### Metaphor: A program is a list of instructions.
• I discussed this metaphor in detail in the earlier post.
• Note particularly that the instructions can be in a natural or a programming language. (Is that a zeugma?) Many writers would call instructions in a natural language an algorithm.
• I will continue to use “program” in the broader sense.
### Metaphor: A programming language is a language.
• This metaphor is a specific conceptual blend that associates the strings of symbols that constitute a program in a computer language with text in a natural language.
• The metaphor is based on some similarities between expressions in a programming language and expressions in a natural language.
• In both, the expressions have a meaning.
• Both natural and programming languages have specific rules for constructing well-formed expressions.
• This way of thinking ignores many deep differences between programming languages and natural languages. In particular, they don’t talk about the same things!
• The metaphor has been powerful in suggesting ways of thinking about computer programs, for example semantics (below) and ambiguity.
### Metaphor: A computer program is a list of statements
• A consequence of this metaphor is that a computer program is a list of symbols that can be stored in a computer’s memory.
• This metaphor comes with the assumption that if the program is written in accordance with the language’s rules, a computer can execute the program and perhaps produce an output.
• This is the profound discovery, probably by Alan Turing, that made the computer revolution possible. (You don’t have to have different physical machines to do different things.)
• You may want me to say more in the heading above: “A computer program is a list of statements in a programming language that satisfies the well-formedness requirements of the language.” But the point of the metaphor is only that a program is a list of statements. The metaphor is not intended to define the concept of “program”.
### Metaphor: A program in a computer language has meanings.
A program is intended to mean something to a human reader.
• Some languages are designed to be easily read by a human reader: Cobol, Basic, SQL.
• Their instructions look like English.
• The algorithm can nevertheless be difficult to understand.
• Some languages are written in a dense symbolic style.
• In many cases the style is an extension of the style of algebraic formulas: C, Fortran.
• Other languages are written in a notation not based on algebra: Lisp, APL, Forth.
• The boundary between “easily read” and “dense symbolic” is a matter of opinion!
A program is intended to be executed by a computer.
• The execution always involves translation into intermediate languages.
• Most often the execution requires repeated translation into a succession of intermediate languages.
• Each translation requires the preservation of the intended meaning of the program.
• The preservation of intended meaning is what is usually called the semantics of a programming language.
• In fact, the meaning of the program to a person could be called semantics, too.
• And the human semantics had better correspond in “meaning” to the machine semantics!
• The actual execution of the program requires successive changes in the state of the computer.
• By “state” I mean a list of the electrical charges of each unit of memory in the computer.
• Or you can restrict it to the relevant units of memory, but spelling that out is horrifying to contemplate.
• The resulting state of the machine after the program is run is required to preserve the intended meaning as well as all the intermediate translations.
• Notice that the actual execution is a series of physical events. You can describe the execution in English or in some notation, but that notation is not the actual execution.
#### References
Conceptual blend (Wikipedia)
Conceptual metaphors (Wikipedia)
Images and Metaphors (article in abstractmath)
Semantics in computer science (Wikipedia)
## Metaphors in computing science I
2012/05/15 — SixWingedSeraph
Michael Barr recently told me of a transcription of a talk by Edsger Dijkstra dissing the use of metaphors in teaching programming and advocating that every program be written together with a proof that it works. This led me to think about the metaphors used in computing science, and that is what this post is about. It is not a direct answer to what Dijkstra said.
We understand almost anything by using metaphors. This is a broader sense of metaphor than that thing in English class where you had to say "my love is a red red rose" instead of "my love is like a red red rose". Here I am talking about conceptual metaphors (see references at the end of the post).
### Metaphor: A program is a set of instructions
You can think of a program as a list of instructions that you can read and, if it is not very complicated, understand how to carry them out. This metaphor comes from your experience with directions on how to do something (like directions from Google Maps or for assembling a toy). In the case of a program, you can visualize doing what the program says to do and coming out with the expected output. This is one of the fundamental metaphors for programs.
Such a program may be informal text or it may be written in a computer language.
#### Example
A description of how to calculate $n!$ in English could be: "Multiply the integers $1$ through $n$". In Mathematica, you could define the factorial function this way:
fac[n_] := Apply[Times, Table[i, {i, 1, n}]]
This more or less directly copies the English definition, which could have been reworded as "Apply the Times function to the integers from $1$ to $n$ inclusive." Mathematica programmers customarily use the abbreviation "@@" for Apply because it is more convenient:
fac[n_] := Times @@ Table[i, {i, 1, n}]
As far as I know, C does not have list operations built in. This simple program gives you the factorial function evaluated at $n$:
j=1; for (i=2; i<=n; i++) j=j*i; return j;
This does the calculation in a different way: it goes through the numbers $1, 2,\ldots,n$ and multiplies the result-so-far by the new number. If you are old enough to remember Pascal or Basic, you will see that there you could use a DO loop to accomplish the same thing.
#### What this metaphor makes you think of
Every metaphor suggests both correct and incorrect ideas about the concept.
• If you think of a list of instructions, you typically think that you should carry out the instructions in order. (If they are Ikea instructions, your experience may have taught you that you must carry out the instructions in order.)
• In fact, you don't have to "multiply the numbers from $1$ to $n$" in order at all: You could break the list of numbers into several lists and give each one to a different person to do, and they would give their answers to you and you would multiply them together (see the sketch after this list).
• The instructions for calculating the factorial can be translated directly into Mathematica instructions, which does not specify an order. When $n$ is large enough, Mathematica would in fact do something like the process of giving it to several different people (well, processors) to speed things up.
• I had hoped that Wolfram alpha would answer "720" if I wrote "multiply the numbers from $1$ to $6$" in its box, but it didn't work. If it had worked, the instruction in English would not be translated at all. (Note added 7 July 2012: Wolfram has repaired this.)
• The example program for C that I gave above explicitly multiplies the numbers together in order from little to big. That is the way it is usually taught in class. In fact, you could program a package for lists using pointers (a process taught in class!) and then use your package to write a C program that looks like the "multiply the numbers from $1$ to $n$" approach. I don't know much about C; a reader could probably tell me other better ways to do it.
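Here is a minimal Python sketch of that order-independence (the name `fac_chunked` and the chunking scheme are my own illustration; real parallelism would hand the chunks to separate processors):

```python
from functools import reduce
from operator import mul

def fac_chunked(n, workers=3):
    # Split 1..n into chunks, multiply each chunk independently
    # (in any order), then combine the partial products.
    chunks = [range(1, n + 1)[i::workers] for i in range(workers)]
    partials = [reduce(mul, chunk, 1) for chunk in chunks]  # each "person"
    return reduce(mul, partials, 1)                         # combine answers

print(fac_chunked(6))  # 720, the same as multiplying 1..6 in order
```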
So notice what happened:
• You can translate the "multiply the numbers from $1$ to $n$" directly into Mathematica.
• For C, you have to write a program that implements multiplying the numbers from $1$ to $n$. Implementation in this sense doesn't seem to come up when we think about instruction sets for putting furniture together. It is sort of like: Build a robot to insert & tighten all the screws.
Thus the concept of program in computing science comes with the idea of translating the program instruction set into another instruction set.
• The translation provided above for Mathematica resembles translating the instruction set into another language.
• The two translations I suggested for C (the program and the definition of a list package to be used in the translation) are not like translating from English to another language. They involve a conceptual reconstruction of the set of instructions.
Similarly, a compiler translates a program in a computer language into machine code, which involves automated conceptual reconstruction on a vast scale.
#### Other metaphors
• C or Mathematica as like a natural language in some ways
• Compiling (or interpreting) as translation
Computing science has used other VIM's (Very Important Metaphors) that I need to write about later:
• Semantics (metaphor: meaning)
• Program as text – this allows you to treat the program as a mathematical object
• Program as machine, with states and actions like automata and Turing machines.
• Specification of a program. You can regard "the product of the numbers from $1$ to $n$" as a specification. Notice that saying "the product" instead of "multiply" changes the metaphor from "instruction" to "specification".
#### References
Conceptual metaphors (Wikipedia)
Images and Metaphors (article in abstractmath)
Images and Metaphors for Sets (article in abstractmath)
Images and Metaphors for Functions (incomplete article in abstractmath)
http://mathoverflow.net/questions/20664?sort=oldest
Why is complex projective space triangulable?
In an exercise in his algebraic topology book, Munkres asserts that $\mathbf{C}P^n$ is triangulable (i.e., there is a simplicial complex $K$ and a homeomorphism $|K| \rightarrow \mathbf{C}P^n$). Can anyone provide a reference or a proof?
All smooth manifolds are triangulable. This is due to Whitehead. There's a nice write-up in Whitney's "Geometric Integration Theory". – Ryan Budney Apr 7 2010 at 22:00
For a more direct proof, one might try using the fact that CP^n is homeomorphic to the n-fold symmetric product of S^2. Symmetric products don't take simplicial complexes to simplicial complexes, but the quotient of a subdivision of the n-fold product is itself a simplicial complex. – Tyler Lawson Apr 7 2010 at 22:09
Deane, what sort of induction are you imagining? Given a simplicial structure on `$CP^{n-1}$`, one might try to show that there's some triangulation of the next cell such that after attaching this cell we still have a simplicial complex. The attaching map is the quotient map `$S^{2n-1} \to CP^{n-1}$`, whose fibers are copies of `$S^1$`. The inverse image of a point under a simplicial map is always discrete, so this attaching map is definitely not a simplicial map, no matter what simplicial structures you use. So I think John's question is not so trivial. – Dan Ramras Apr 7 2010 at 23:18
Oh, triangulating CP^n isn't an exercise in Munkres; rather, one of his exercises says something like, "Assume that CP^n can be triangulated (it can be). Then use the Lefschetz fixed point theorem to ..." -- the statement of the Lefschetz fixed point theorem requires that the space be triangulable. I was looking for justification for his parenthetical remark. – John Palmieri Apr 8 2010 at 21:14
I'm still a bit surprised that there isn't a way to triangulate $CP^n$ more easily than an arbitrary smooth manifold. Using google, I found the following short paper by Cairns on triangulations of smooth manifolds: projecteuclid.org/… – Deane Yang Apr 19 2010 at 2:23
3 Answers
I will present a triangulation of $\mathbb{CP}^{n-1}$. More specifically, I will give an explicit regular CW structure on $\mathbb{CP}^{n-1}$. As spinorbundle says, the first barycentric subdivision of a regular CW complex is a simplicial complex homeomorphic to the original CW complex.
Recall that to put a regular CW complex on a space $X$ means to decompose $X$ into disjoint pieces $Y_i$ such that:
(1) The closure of each $Y_i$ is a union of $Y$'s.
(2) For each $i$, the pair $(\overline{Y_i}, Y_i)$ is homeomorphic to $(\mbox{closed}\ d-\mbox{ball}, \mbox{interior of that}\ d-\mbox{ball})$ for some $d$.
The barycentric subdivision of $X$ corresponding to this regular CW complex is the simplicial complex which has a vertex for each $Y_i$ and has a simplex $(i_0, i_1, \ldots, i_r)$ if and only if $\overline{Y_{i_0}} \subset \overline{Y_{i_1}} \subset \cdots \subset \overline{Y_{i_r}}$.
Write $(t_1: t_2: \ldots: t_n)$ for the homogeneous coordinates on $\mathbb{CP}^{n-1}$. For $I$ a nonempty subset of `$\{ 1,2, \ldots, n \}$`, let $Z_I$ be the subset of $\mathbb{CP}^{n-1}$ where $|t_i|=|t_{i'}|$ for $i$ and $i' \in I$ and $|t_i| > |t_j|$ for $i \in I$ and $j \not \in I$. Note that $Z_I \cong (S^1)^{|I|-1} \times D^{2(n-|I|)}$, where $D^k$ is the open $k$-disc. Also, $\overline{Z_I} = \bigcup_{J \supseteq I} Z_J \cong (S^1)^{|I|-1} \times \overline{D}^{2(n-|I|)}$ where $\overline{D}^k$ is the closed $k$-disc.
We now cut those tori into discs. For $i$ and $i'$ in $I$, cut $Z_I$ along $t_i=t_{i'}$ and $t_i = - t_{i'}$. So the combinatorial data indexing a face of this subdivision is a cyclic arrangement of the symbols $i$ and $-i$, for $i \in I$, with $i$ and $-i$ antipodal to each other. For example, let `$I=\{ 1,2,3,4,5 \}$` and write $t_k=e^{i \theta_k}$ for $k \in I$. Then one of our faces corresponds to the situation that, cyclically, `$$\theta_1 < \theta_2 = \theta_4 + \pi < \theta_3 = \theta_5 < \theta_1+ \pi < \theta_2 + \pi = \theta_4 < \theta_3 + \pi = \theta_5 + \pi < \theta_1.$$` This cell is clearly homeomorphic to `$\{ (\alpha, \beta) : 0 < \alpha < \beta < \pi \}$`. Similarly, each of these cells is an open ball, and each of their closures is a closed ball. We have put a CW structure on the torus.
Cross this subdivision of the torus with the open disc $D^{2(n-|I|)}$. The result, if I am not confused, is a regular $CW$ decomposition of $\mathbb{CP}^{n-1}$.
After thinking about it for a while, this looks good to me. I have to think about it some more before I'll be completely convinced, but so far it looks very nice. – John Palmieri Aug 16 2010 at 21:05
According to the authors of uk.arxiv.org/abs/1012.3235 "no explicit triangulation of $CP^3$ was known so far". – Robin Chapman Dec 16 2010 at 14:10
I think the comments answer the question, but to give you a reference:
Milnor, Stasheff: Characteristic Classes, Chapter 6
They prove that every Grassmann manifold $G_n(\mathbb{R}^m)$ is a CW-Complex. (The cells are constructed with Schubert symbols.) The complex case works in the same fashion.
As a result you get that $\mathbb{CP}^n$ consists of $n+1$ cells: for every $0 \leq k \leq n$ you get one $2k$-cell. The $2k$-skeleton is a $\mathbb{CP}^k$.
EDIT: Sorry for the sloppiness!
Not every CW-Complex is triangulable, but the following holds:
Every regular CW-Complex (and $\mathbb{CP}^n$ is a regular complex $\oplus$) $X$ is triangulable.
This is true, since the barycentric subdivision is a simplicial complex that is homeomorphic to $X$. For a full proof, see for example Cellular structures in topology (p.130) by Fritsch and Piccinini.
Edit 2: $\oplus$: Perhaps the next sloppiness: The CW-structure of $\mathbb{CP}^n$ obtained by Schubert cells isn't regular (the characteristic map is 2-to-1) but I think there exists a regular CW-structure. But this might be harder to prove than I thought?!
Sorry, I guess I don't know much about triangulations. Why does a CW-complex structure guarantee a simplicial complex structure? – John Palmieri Apr 8 2010 at 21:17
The characteristic map isn't 2-to-1, it collapses an entire dimension! That is to say, the big cell in $\mathbb{CP}^n$ is $2n$ dimensional, so its boundary should be $S^{2n-1}$, but it is glued to $\mathbb{CP}^{n-1}$, which has dimension $2n-2$. (You might be thinking of $\mathbb{RP}^n$.) – David Speyer Aug 11 2010 at 16:08
You're right, I was thinking of RP^n. Thanks for the correction – Spinorbundle Aug 11 2010 at 16:30
An online search yielded a reference to Francis Sergeraert's paper, Triangulations of complex projective spaces, available at http://www-fourier.ujf-grenoble.fr/~sergerar/Papers/ . But, to quote the author: "The Kenzo program is used to automatically produce triangulations of the complex projective spaces $P^nC$ as simplicial sets, more precisely of spaces having the right homotopy type. The homeomorphism question between the obtained objects and the projective spaces is open."
I've browsed through some of Sergeraert's work before, but I hadn't seen this paper. Unfortunately, it seems to deal with simplicial sets, not simplicial complexes, and it's not clear how to get from a simplicial set structure to a simplicial complex structure. Is it? (A preprint by Lutz (arxiv.org/abs/math/0506372) says that explicit triangulations, as simplicial complexes, of CP^n are not known for n>2.) – John Palmieri Apr 13 2010 at 21:43
Thanks, John. As Sergeraert says, he hasn't proved his triangulations actually are homeomorphic to $CP^n$! It's still a surprise that explicit triangulations are apparently not known, but on first sight, Tyler's idea looks sound to me. It shows the problem is harder than it looks. – Robin Chapman Apr 14 2010 at 5:55
If anyone is still interested, a paper on triangulations of $CP^2$ by Bagchi and Datta appeared on the ArXiV today: uk.arxiv.org/abs/1004.3157 . – Robin Chapman Apr 20 2010 at 12:48
http://mathoverflow.net/questions/115063/solved-cubic-thue-equation
## Solved cubic Thue equation
Hi everybody. I need to know if the cubic Thue equation $x^3 + x^2y + 3xy^2 - y^3 = \pm 1$ is completely solved. I know that there are effective algorithms to solve any cubic Thue equation and that some of them are implemented in computer programs. However, I think that since the coefficients of that equation are small, it may have already been discussed in the literature. Thank you.
What context gave rise to this equation for you? The equation can be rewritten as $|(x-y)^3 + 4x^2y| = 1$, and then changing variables by $u=x-y$ and $v = y$, it becomes $|u^3 + 4u^2v + 8uv^2 + 4v^3| = 1$. The left side in integers $u$ and $v$ (necessarily not both 0) is the index $[{\mathbf Z}[\alpha]:{\mathbf Z}[u\alpha+v\alpha^2]]$, where $\alpha$ is a root of $t^3 - 2t^2 + 4t - 4$. Therefore the question asks for all possible ring generators of ${\mathbf Z}[\alpha]$ up to addition by an integer, e.g., $(x,y) = (1,0)$ corresponds to $\alpha$ and $(x,y)=(1,-1)$ corr. to $\alpha-\alpha^2$. – KConrad Dec 22 at 0:08
## 2 Answers
For a cheaper solution, use Magma. For a free solution, use pari/gp:
(17:47) gp > thue(thueinit(x^3+x^2+3*x-1,1),1)
%2 = [[1, 0], [0, -1]]
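As an independent sanity check (a finite search, so of course not a proof), a brute-force scan in Python over a small box finds the same two solutions:

```python
# Search |x|, |y| <= 100 for integer solutions of x^3 + x^2*y + 3*x*y^2 - y^3 = 1.
sols = [(x, y)
        for x in range(-100, 101)
        for y in range(-100, 101)
        if x**3 + x**2 * y + 3 * x * y**2 - y**3 == 1]
print(sols)  # [(0, -1), (1, 0)]
```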
Thanks, I didn't know about this function! I just checked that thue(thueinit(x^3+x^2+3*x-1,1),4) works too, in a few milliseconds even using the "certify unconditionally" flag (as opposed to assuming GRH). – Noam D. Elkies Dec 21 at 19:24
...and come to think of it, this means gp also solves the second part of problem B-1 on the 1982 Math Olympiad: thue(thueinit(x^3-3*x+1,1),2891) returns []. (Does this gp routine thue know to look for shortcuts such as the intended obstruction mod 9, or the possibly unintended obstruction mod 7 which is what I used?) – Noam D. Elkies Dec 21 at 22:07
Dec. 1, 2012: your equation has only two integer solutions:
$x=0,y=-1$ and $x=1,y=0$ if the right-hand-side equals $+1$,
or $x=-1,y=0$ and $x=0,y=1$ if the right-hand-side equals $-1$.
If you have access to `Mathematica`, finding these solutions is a simple one-liner:
````Reduce[x^3 + x^2*y + 3*x*y^2 - y^3 == 1, {x, y}, Integers]
````
Dec. 22, 2012: the question has evolved a bit into the direction of the (monetary -- not computational) cost of the software used to solve the Thue equation; in that connection it might be of some interest to note that the algorithm implemented by `Mathematica` can actually be called free of charge through the Wolfram Alpha interface:
````solve for integer x,y: x^3+x^2*y+3*x*y^2−y^3=1
````
$(47,159)$ is close, though... (Giving $-4$.) – Noam D. Elkies Dec 1 at 16:39
en.wikipedia.org/wiki/Thue_equation Thue's equation has only a finite number of integer solutions, and there exists a bound on the solutions, so by simply trying all integers within the bound you are guaranteed to find all solutions. The calculation can be lengthy, I enlisted the help of Mathematica. – Carlo Beenakker Dec 1 at 17:11
@Richard: You should convince people who read your article that solving that equation is a routine matter - given the bounds established by Baker, say. Indeed, all that Mathematica uses is the existence of these bounds on $x,y$, which are part of the (by now) standard theory. The fact that you learned about this only recently doesn't force you to belabor the point; on the contrary, if you spend too many words on this your readers might get the false impression that there's something unusual going on here (which there ain't). – René Pannekoek Dec 2 at 3:51
more precisely, the algorithm implemented by Mathematica is described in Journal of Symbolic Computation, 38 (2004) 1145 ---finanz.math.tu-graz.ac.at/~ziegler/Papers/… – Carlo Beenakker Dec 2 at 13:34
@John Cremona: Richard Pinch once disproved the GRH using Mathematica (in the 1990s). In the manual, Mathematica claimed "our primality test is the following [blah] and this test is proved to be correct assuming GRH". Pinch then found a non-prime number that Mathematica's primality test asserted was prime, and got in touch with them. He managed to persuade them to let him see part of the source code, and told me that the code in the actual program bore no resemblance to the claims made in the manual! Hopefully this story answers your question, at least in a weak sense. – wccanard Dec 22 at 17:01
http://math.stackexchange.com/questions/50492/true-false-or-meaningless/50501
# True, false, or meaningless?
Are the following two assertions always true, always false or meaningless?
$\exists i \in \emptyset$
$\forall i \in \emptyset$
I ask because it seems that one encounters expressions fairly similar to these in mathematics, especially when dealing with degenerate cases of definitions. Let me give an example (in graph theory) of such a case. There, one can formalize the idea that two vertices are connected in the following way: Let $G=(V,E)$ be a graph and let $v,w\in V$. We define $v$ and $w$ to be "connected" if: $\exists n \in \mathbb{N}, \ \exists \alpha: \left\{ 1,\ldots,n \right\} \rightarrow V, \ \alpha_0=v \ \& \ \alpha_n=w \ \ \forall i\in \mathbb{N}, 0\leqslant i \ \& \ i<n:\ \left\{ \alpha_i, \alpha_{i+1} \right\} \in E$
Now, if we ask whether every node is connected to itself (a fact which intuitively we would want to be true), we would have to exhibit an $n$ and a sequence $\alpha$, such that [bla bla...]. The obvious choice for $n$ is 0. But for this choice of $n$, no matter which sequence $\alpha$ we would consider, the set of the $i$'s would be empty, because the set of the $i$'s is actually $\left\{ i\in \mathbb{N} \mid 0\leqslant i \ \& \ i<n \right\}$. But for $n=0$ this set is the empty set. Thus, because we quantify $\forall i \in \left\{ i\in \mathbb{N} \mid 0\leqslant i \ \& \ i<0 \right\}=\emptyset$, should the statement be true/false by definition?
EDIT: Sorry for my unclear formulation, and apologies to everybody who, through my fault, interpreted it wrongly. The way Carl Mummert or Listing interpreted it was what I meant.
In my opinion, in the present form of the question - with $\exists i\in I$ and $\forall i\in I$ - they are not even propositions, so nothing can be said about the truth values. If you would be asking about $(\exists i\in I)i=i$ and $(\forall i\in I)i=i$, then the first one is false and the second one is true. (You can replace $i=i$ by any proposition $P(i)$.) – Martin Sleziak Jul 9 '11 at 11:19
But maybe I have misunderstood you and you're asking about $(\exists i)i\in\emptyset$ and $(\forall i)i\in\emptyset$. They are both false. But I don't think this is what you had in mind. – Martin Sleziak Jul 9 '11 at 11:22
## 2 Answers
When we translate mathematical statements into formal logic, the "bounded quantifiers" $(\exists x \in I) P(x)$ and $(\forall x \in I) P(x)$ are usually viewed as abbreviations, as follows:
• $(\exists x \in I) P(x)$ is an abbreviation of $(\exists x)(x \in I \land P(x))$.
• $(\forall x \in I) P(x)$ is an abbreviation of $(\forall x)(x \in I \to P(x))$.
With these conventions, the bounded quantifiers continue to make sense even when $I$ is empty. In that case, $(\exists x \in \emptyset) P(x)$ will always be false, and $(\forall x \in \emptyset) P(x)$ will always be true, regardless of the formula $P(x)$. So, for example, $(\exists x \in \emptyset)(x = x)$ is false and $(\forall x \in \emptyset)(x \not = x)$ is true.
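As an aside, the same convention is built into programming languages; for instance, Python's quantifier built-ins behave exactly this way on an empty collection:

```python
# "forall x in {}: P(x)" is vacuously true; "exists x in {}: P(x)" is false,
# no matter what the predicate is.
empty = []
print(all(x != x for x in empty))  # True  -- vacuous universal
print(any(x == x for x in empty))  # False -- empty existential
```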
A nice property of this definition of the bounded quantifiers is that it makes them dual in the sense that for any set $I$ (possibly empty) and any formula $P(x)$, we have
• $(\exists x \in I) P(x)$ if and only if $\lnot (\forall x \in I)\lnot P(x)$
• $(\forall x \in I)P(x)$ if and only if $\lnot (\exists x \in I)\lnot P(x)$
These can be verified by direct calculation: $$\begin{split} (\exists x \in I) P(x) & \Leftrightarrow (\exists x) (x \in I \land P(x)) \\ & \Leftrightarrow \lnot (\forall x) \lnot (x \in I \land P(x)) \\ & \Leftrightarrow \lnot (\forall x)(x \not \in I \lor \lnot P(x)) \\ & \Leftrightarrow \lnot (\forall x)(x \in I \to \lnot P(x))\\ & \Leftrightarrow \lnot (\forall x \in I)\lnot P(x) \end{split}$$ and $$\begin{split} (\forall x \in I) P(x) & \Leftrightarrow (\forall x) (x \in I \to P(x)) \\ & \Leftrightarrow \lnot (\exists x) \lnot (x \not \in I \lor P(x))\\ & \Leftrightarrow \lnot (\exists x) (x \in I \land \lnot P(x))\\ & \Leftrightarrow \lnot (\exists x \in I) \lnot P(x) \end{split}$$
So my graph theory example is true, because of all the sets over which I quantified, one is the empty set and thus the assertion is true? (Side question: I seem to have difficulties translating mathematical statements into formal logic and then drawing the right conclusions about them using just the rules of formal logic. Could you maybe recommend a book that explains/trains this?) – temo Jul 9 '11 at 13:15
I think there is some typo in your formula regarding whether the indexes start at 0 or 1. But once that is fixed the formula will indeed be true when there is just one node. Re formal logic, using just those rules is very time-consuming, so even in logic we don't typically write proofs just from the rules. But knowing what the rules are does help illustrate how to do some reasoning. One book I like is A mathematical introduction to logic by Enderton. – Carl Mummert Jul 9 '11 at 13:27
In this form, these are just beginnings of propositions. But you may have meant: $$\exists i: i\in \{\}$$ $$\forall i: i\in \{\}$$ The second one says that every $i$ is an element of the empty set. This proposition is false. It's enough to find one counterexample. Barack Obama is not an element of the empty set. So it's false because the condition ($i$ is an element of the empty set) doesn't hold for every $i$.
The first one is weaker but it is still false. To prove that it is true, we would have to find an element $i$ that is an element of the empty set. But because the empty set has no elements, you can't find any. :-) The situation when the condition (in this case, $i$ is an element of the empty set) has exactly zero solutions is the way, and the only way, in which the existential quantifier may fail.
Your comments about the graphs are far more complicated than the two simple propositions above but it is true that one must safely understand the logic of the two propositions above to be sure that he can evaluate more complex existential and universal propositions about the graphs (and everything else in maths), too.
This is not what he wants. I think he asks for the degenerate case where an expression has the form: $\forall i \in \emptyset: T(i)$ where $T$ is a logical operator that depends on $i$. – Listing Jul 9 '11 at 12:13
I see, so $\forall i\in \{\}: T(i)$ holds for any $T(i)$ because there is no counterexample in an empty set for which $T(i)$ would fail - it holds for everyone (all zero of them). On the other hand, $\exists i\in\{\}: T(i)$ is always untrue because there doesn't exist any $i$ in an empty set that has a property - whatever property - because there's nothing in an empty set even without adjectives. – Luboš Motl Jul 9 '11 at 14:14
http://mathoverflow.net/questions/41068/are-packing-homogeneous-spaces-homogeneous/41079
## Are packing-homogeneous spaces homogeneous?
Given a metric space (M,d) define the packing function P(x,R,r) to be the maximum number of non-intersecting balls of radius r with centers in the ball B(x,R). Let’s call M packing-homogeneous if the packing function is independent of the base point x.
Conjecture A complete connected packing-homogeneous space M is homogeneous. That is, the group of isometries acts transitively on M.
Completeness is necessary as Ricky Demer pointed out.
Connectedness is necessary. It is not difficult to construct finite graphs that are packing-homogeneous, but are not homogeneous (vertex-transitive).
Remark 1 It follows from the work of Gleason, Montgomery, Zippin and others on Hilbert’s 5th problem that a complete connected finite-dimensional homogeneous space is a manifold. Thus, a complete connected finite-dimensional packing-homogeneous space that is not a manifold (for example, has fractional Hausdorff dimension) would provide a counter-example to the conjecture.
Remark 2 I hoped to settle this question for length spaces (or inner metric spaces), i.e. when the distance between two points is equal to the length of a shortest path. Every length space is a Gromov-Hausdorff limit of graphs. I was hoping to establish a connection between some "almost packing-homogeneous" and "almost homogeneous" properties of graphs that in the limit would give me the desired result (in the spirit of what Gromov does in his paper on groups of polynomial growth). Or, to prove the opposite, constructing a convergent sequence of graphs that is "almost packing-homogeneous" but "increasingly inhomogeneous". After some attempts I thought that the problem for graphs may be as difficult as the original conjecture.
Remark 3 As far as I know, the conjecture is open even for Riemannian manifolds of dimension greater than 2.
I was not able to find any papers that would mention this question or try to attack it. How plausible is it that the conjecture is true? Where one would look for counterexamples?
I would also be grateful for any comments that would put this problem in a broader context, or any guesses as to how difficult this problem might be.
## 1 Answer
Let $d_e$ be the usual metric on $\mathbb{R}^2$ and $S = \{(\frac1n,0) : n \text{ is a positive integer}\}$. Let $(X,d)$ be $\mathbb{R}^2-S$ with the metric inherited from $(\mathbb{R}^2,d_e)$.
Clearly, $P_{(X,d)}(x,R,r)\leq P_{(\mathbb{R}^2,d_e)}(x,R,r)$ and $(\mathbb{R}^2,d_e)$ is packing-homogeneous. Consider a ball $B$ in $(\mathbb{R}^2,d_e)$ with center $C$, and disjoint balls of radius $r$ with centers $\{c_i : i\in I\}$ such that $\{c_i : i\in I\} \subseteq B$. Bounded subsets of $(\mathbb{R}^2,d_e)$ are totally bounded, so $|\{c_i : i\in I\}| < \infty$. Define $r(\theta)$ as the rotation of $(\mathbb{R}^2,d_e)$ by $\theta$ radians about $C$; these are all isometries which fix $C$ and leave $B$ invariant. Since the points in $S$ are collinear, $|\{s\in S : d_e(C,c_i) = d_e(C,s)\}| \leq 2$, so if $C$ is not a member of $S$, then $|\{\theta\in (-\pi,\pi] : (r(\theta))(c_i)\in S\}| \leq 2$. This gives us $|\{\theta\in (-\pi,\pi] : (\exists i\in I)((r(\theta))(c_i)\in S)\}| = |\displaystyle\bigcup_{i\in I} \{\theta\in (-\pi,\pi] : (r(\theta))(c_i)\in S\}| \leq 2|I| < \infty$, while $(-\pi,\pi]$ is infinite, so there exists $\theta\in (-\pi,\pi]$ such that $\neg (\exists i\in I)((r(\theta))(c_i)\in S)$. Let $\phi$ be such a $\theta$. Then $(\forall i\in I)((r(\phi))(c_i)\not\in S)$. This shows that if $C$ is not a member of $S$, then there is a packing in $B$ which does not use any points of $S$, so $P_{(\mathbb{R}^2,d_e)}(x,R,r)\leq P_{(X,d)}(x,R,r)$. Let $x,y$ be members of $X$, then $P_{(X,d)}(x,R,r)\leq P_{(\mathbb{R}^2,d_e)}(x,R,r) = P_{(\mathbb{R}^2,d_e)}(y,R,r) \leq P_{(X,d)}(y,R,r)$ $\leq P_{(\mathbb{R}^2,d_e)}(y,R,r) = P_{(\mathbb{R}^2,d_e)}(x,R,r)\leq P_{(X,d)}(x,R,r)$, which shows that the packing function on $(X,d)$ is independent of basepoint. Therefore $(X,d)$ is packing-homogeneous.
$(X,d)$ is not locally compact near $(0,0)$, but it is locally compact near $(0,1)$, so $(X,d)$ is not even topologically homogeneous. To get to $(0,0)$ from any point in $(X,d)$, if you start below the x axis travel down to $y=-1$, otherwise travel up to $y=1$, then travel horizontally to $x=0$, then travel vertically to $y=0$. This shows that every point has a path to $(0,0)$, so $(X,d)$ is connected. Therefore $(X,d)$ is a connected packing-homogeneous space which is not homogeneous. QED
Demanding completeness might be enough. It would certainly stop my idea.
Thank you! This definitely works if we don't require completeness. I will modify my question accordingly. – Yevgeny Liokumovich Oct 4 2010 at 23:58
http://mathforum.org/mathimages/index.php?title=Parametric_Equations&oldid=20255
# Parametric Equations
Butterfly Curve (Field: Algebra; image created by Direct Imaging)
The Butterfly Curve is one of many beautiful images generated using parametric equations.
# Basic Description
Parametric Equations can be used to define complicated functions and figures in simpler terms, using one or more additional independent variables, known as parameters. For the many useful shapes which are not "functions" in that they fail the vertical line test, parametric equations allow one to generate those shapes in a function format. In particular, Parametric Equations can be used to define and easily generate geometric figures, including (but not limited to) conic sections and spheres.
The butterfly curve in this page's main image uses more complicated parametric equations as shown below.
# A More Mathematical Explanation
Note: understanding of this explanation requires linear algebra.
Parametric construction of the butterfly curve
Sometimes curves which would be very difficult or even impossible to graph in terms of elementary functions of x and y can be graphed using a parameter. One example is the butterfly curve, as shown in this page's main image.
This curve uses the following parametrization:
$\begin{bmatrix} x \\ y\\ \end{bmatrix}= \begin{bmatrix} \sin(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \\ \cos(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right)\\ \end{bmatrix}$
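A minimal Python sketch of this parametrization (my own illustration; the sampling range $[0, 24\pi]$ is one full period of the $\sin^5(t/12)$ term, and the resolution is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# Butterfly curve: x = sin(t)*r(t), y = cos(t)*r(t),
# where r(t) = e^cos(t) - 2*cos(4t) - sin^5(t/12).
t = np.linspace(0, 24 * np.pi, 20000)
r = np.exp(np.cos(t)) - 2 * np.cos(4 * t) - np.sin(t / 12) ** 5
plt.plot(np.sin(t) * r, np.cos(t) * r, linewidth=0.5)
plt.axis("equal")
plt.show()
```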
### Parametrized Curves
Many useful or interesting shapes that are otherwise inexpressible as xy-functions, such as circles, can be represented in coordinate space using a non-coordinate parameter. A circle cannot be expressed as a function in which one variable is dependent on another. If a parameter $t$ is used to represent an angle in the coordinate plane, the parameter can be used to generate a unit circle, as shown below. The parameter $t$ does, in the case of a unit circle, represent a physical quantity in space: the angle between the x-axis and a vector of magnitude 1 going to the point $(x,y)$ on the coordinate plane.
### Parametrized Surfaces
The surface of a sphere can be graphed using two parameters.
In the above cases only one independent variable was used, creating a parametrized curve. We can use more than one independent variable to create other graphs, including graphs of surfaces. For example, using parameters $s$ and $t$, the surface of a sphere can be parametrized as follows: $\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix}= \begin{bmatrix} \sin(t)\cos(s) \\ \sin(t)\sin(s) \\ \cos(t) \end{bmatrix}$
### Parametrized Manifolds
While two parameters are sufficient to parametrize a surface, objects of more than two dimensions, such as a three dimensional solid, will require more than two parameters. These objects, generally called manifolds, may live in higher than three dimensions and can have more than two parameters, so cannot always be visualized. Nevertheless they can be analyzed using the methods of vector calculus and differential geometry.
http://ams.org/bookstore?fn=20&arg1=mmonoseries&ikey=MMONO-160
Linear and Nonlinear Perturbations of the Operator $$\operatorname{div}$$
V. G. Osmolovskiĭ, St. Petersburg State University, Russia
Translations of Mathematical Monographs
1997; 104 pp; hardcover
Volume: 160
ISBN-10: 0-8218-0586-X
ISBN-13: 978-0-8218-0586-2
List Price: US$59
Member Price: US$47.20
Order Code: MMONO/160
The perturbation theory for the operator div is of particular interest in the study of boundary-value problems for the general nonlinear equation $$F(\dot y,y,x)=0$$. Taking as linearization the first order operator $$Lu=C_{ij}u_{x_j}^i+C_iu^i$$, one can, under certain conditions, regard the operator $$L$$ as a compact perturbation of the operator div.
This book presents results on boundary-value problems for $$L$$ and the theory of nonlinear perturbations of $$L$$. Specifically, necessary and sufficient solvability conditions in explicit form are found for various boundary-value problems for the operator $$L$$. An analog of the Weyl decomposition is proved.
The book also contains a local description of the set of all solutions (located in a small neighborhood of a known solution) to the boundary-value problems for the nonlinear equation $$F(\dot y, y, x) = 0$$ for which $$L$$ is a linearization. A classification of sets of all solutions to various boundary-value problems for the nonlinear equation $$F(\dot y, y, x) = 0$$ is given.
The results are illustrated by various applications in geometry, the calculus of variations, physics, and continuum mechanics.
Readership
Graduate students and research mathematicians interested in partial differential equations.
http://crypto.stackexchange.com/questions/6045/how-to-break-an-arbitrary-xor-and-rotation-based-encryption
# How to break an arbitrary XOR and Rotation based encryption?
I heard encryption based purely on XOR and Rotation is inherently weak. The paper Rotational Cryptanalysis of ARX says:
It is also easy to prove that omitting addition or rotation is devastating, and such systems (XR and AX) can always be broken.
But I am not able to find any information on how to actually do it. Can anyone give a hint?
(Update:)
@CodesInChaos pointed out: "You can describe each output bit as the xor of a fixed set of input/key bits. This results in a few hundred linear equations modulo 2, which can be solved efficiently." For a simple XR cipher, I understand how this works, but I run into issues with more complex ones, illustrated as follows:
Suppose a toy XOR/rotation based cipher (cipher 1) which encrypts a 4-bit plaintext (p) to a 4-bit ciphertext (c) with a 4-bit key (k). The encryption process is as follows (with example p = 1001, k = 1000, and c = 1110; all additions are modulo 2):
• E1. Right rotate p by 2 bits, producing m (1001 --> 0110),
• E2. XOR m with k, producing c (0110 + 1000 = 1110)
The corresponding decryption process:
• D1. XOR c with k, producing m (1110 + 1000 = 0110)
• D2. Left rotate m by 2 bits, producing p (0110 --> 1001)
Following @CodesInChaos 's advice, I can convert the decryption to the following linear equation system :
````
c1 + k1 = p3       1 + k1 = 1        k3 = 1
c0 + k0 = p2  ==>  0 + k0 = 0  ==>   k2 = 0   (A)
c3 + k3 = p1       1 + k3 = 0        k1 = 0
c2 + k2 = p0       1 + k2 = 1        k0 = 0
````
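For concreteness, here is a minimal Python sketch of cipher 1 (my own illustration, nothing from the original post). Since the rotation amount is fixed, the whole cipher is affine over GF(2), and a single known plaintext/ciphertext pair determines the key, which is exactly what system (A) expresses:

````python
def ror4(x, n):
    """Rotate a 4-bit value x right by n bits."""
    n %= 4
    return ((x >> n) | (x << (4 - n))) & 0b1111

def rol4(x, n):
    """Rotate a 4-bit value x left by n bits."""
    n %= 4
    return ((x << n) | (x >> (4 - n))) & 0b1111

def encrypt(p, k):      # E1 then E2
    return ror4(p, 2) ^ k

def decrypt(c, k):      # D1 then D2
    return rol4(c ^ k, 2)

p, k = 0b1001, 0b1000
c = encrypt(p, k)
assert c == 0b1110 and decrypt(c, k) == p

# With the rotation fixed, one known (p, c) pair reveals the key directly:
assert (ror4(p, 2) ^ c) == k
````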
So far so good. But what if the rotation amount in step E1 above is not a constant 2, but changes with the input plaintext? For example, let's modify the above cipher a little bit to this (cipher 2):
• E1. Right rotate p by n bits, producing m, where n = the value of the upper 2 bits of p (1001 --> 0110),
• E2. XOR m with k, producing c (0110 + 1000 = 1110)
I cannot convert this cipher to a simple linear equation system, because each output bit is no longer a fixed function of the key and input bits.
So my question is: does cipher 2 still qualify as a "pure XR" system? Is there still a generic way to break it?
-
Can you further expand your question, what are you XORing, what are you rotating, when and in what order are you performing these operations? – trumpetlicks Jan 17 at 15:37
You can describe each output bit as the xor of a fixed set of input/key bits. This results in a few hundred linear equations modulo 2, which can be solved efficiently. – CodesInChaos Jan 18 at 22:08
Thanks a lot for the answer and the edit. I will look into it. – Penghe Geng Jan 19 at 1:02
## 1 Answer
XOR operations, fixed bit movements (such as taking the 2 topmost bits or concatenating bits) and data-dependent rotations form a functionally complete set of operations. This means that you can realize any function between fixed-length binary strings, including all possible block ciphers, using them.
To show that these operations form a functionally complete set, one can show that all operations of another functionally complete set can be realized. For example the set {NOT, AND}:
• Realizing a NOT operation is easy, since this is only a XOR operation with a 1 constant.
• Realizing an AND operation requires the data-dependent rotations. Given the inputs $a$ and $b$ construct the value $v = RotLeft_{a}(0b)$. The leftmost bit of $v$ is now the result of the AND operation of $a$ and $b$. This can be verified by looking at the possible input values: if $a$ is zero the rotation does nothing and the leftmost bit stays zero. If $a$ is one the rotation will move the value of $b$ to the leftmost bit and the result is one exactly if $b$ is also one.
This would turn any algorithm that could efficiently break every cipher based on these operations into an algorithm that efficiently breaks arbitrary ciphers; such an algorithm is unlikely to exist and is certainly not known.
Nevertheless, I would not assume that most or even many of the ciphers constructed from these primitives are secure. For example, if there are only a few data-dependent rotations and it is feasible to enumerate all possible rotation count combinations, the system can be broken by just trying to solve the resulting linear system for each combination.
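A tiny Python check of the AND construction above (my sketch, assuming 2-bit registers as in the answer):

````python
def rotl2(x, n):
    """Rotate a 2-bit value x left by n bits."""
    n %= 2
    return ((x << n) | (x >> (2 - n))) & 0b11

for a in (0, 1):
    for b in (0, 1):
        v = rotl2(b, a)              # the 2-bit string "0b", rotated left by a
        assert (v >> 1) == (a & b)   # leftmost bit equals a AND b
````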
-
Thanks @jix. Your answer makes sense to me. I revisited the ARX paper and think the weak XOR/Rotation systems the author mentions should probably be non-data-dependent. – Penghe Geng Feb 10 at 16:30
http://math.stackexchange.com/questions/251822/finding-a-basis-for-the-plane-with-the-equation/251835
# Finding a basis for the plane with the equation
I know the conditions of being a basis. The vectors in set should be linearly independent and they should span the vector space.
So when finding a basis for the plane y = z, it's easy to see that x is a free variable; calling it x = s, both y and z become t, for example. The basis is then (1,0,0) and (0,1,1).
However, I could not apply the same logic to x - 2y + 5z = 0. Is there a free variable in this equation? How can I find a basis?
-
## 2 Answers
As in your first example, you have two (not one) free variables. In the first example you let $x = s$, $y = t$, and $z = t$ follows from the equation; there you must take $x$, but you can choose between $y$ and $z$ for the other free variable.
In the second example you can take any two variables as free. Let's take $x = s$, $y = t$. The equation $x-2y+5z = 0$ gives $$z = -\frac 15 x + \frac 25y = \frac 15(2t -s)$$ So a basis is given by $(1,0, -\frac 15)^\top$, $(0,1,\frac 25)^\top$.
-
Oh thanks but the given basis is not your answer. – Yigit Can Dec 5 '12 at 21:22
Which given basis? – martini Dec 5 '12 at 21:23
i mean in the key – Yigit Can Dec 5 '12 at 21:23
What key? I don't understand. – martini Dec 5 '12 at 21:24
@martini The OP probably meant his/her solution manual. – user1551 Dec 6 '12 at 4:02
When you have the equation of a plane as $ax+by+cz=0$, the vector $(a,b,c)$ is perpendicular to the plane. So you simply need to find one vector contained in the plane by inspection, e.g. $(2,1,0)$, then get the second one as the cross product of the first vector with the perpendicular vector, i.e. $(2,1,0) \times (1, -2, 5) = (5, -10, -5)$.
So the pair of vectors $(2,1,0)$ and $(1, -2,-1)$ (the cross product rescaled by $1/5$) form not only a basis, but an orthogonal one, of your plane.
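A quick numeric check of both proposed bases (my addition, not part of either answer), using numpy:

````python
import numpy as np

normal = np.array([1.0, -2.0, 5.0])           # the plane x - 2y + 5z = 0

basis_a = [np.array([1.0, 0.0, -0.2]), np.array([0.0, 1.0, 0.4])]
basis_b = [np.array([2.0, 1.0, 0.0]), np.array([1.0, -2.0, -1.0])]

for v in basis_a + basis_b:                   # every vector lies in the plane
    assert abs(normal @ v) < 1e-12

assert abs(basis_b[0] @ basis_b[1]) < 1e-12   # the second pair is orthogonal
# and matches the cross product up to the factor 5:
assert np.allclose(np.cross(basis_b[0], normal), 5 * basis_b[1])
````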
-
http://mathoverflow.net/questions/18074?sort=newest
## A special integral polynomial
Given $n \in \mathbf{N}$, is it always possible to construct a monic polynomial in $\mathbf{Z}[x]$ of degree $2n$, whose roots are in $\mathbf{C} \setminus \mathbf{R}$ and whose Galois group over $\mathbf{Q}$ is $S_{2n}$? I have an approximate idea of how to solve the problem for the Galois group (I imagine something related to the Hilbert irreducibility theorem), but I have no idea for the condition on the roots. Furthermore, is it possible to give an explicit example?
-
Is it reasonable to think that the generic monic integral polynomial will have that form? I do not have a precise meaning here for the word generic, but maybe it might be given a number-theoretic (and scheme-theoretic, on Spec Z?) sense. – Roberto Svaldi Mar 13 2010 at 17:43
In some sense a generic polynomial with no real roots should have a Galois group of S_2n, but I don't know a sense in which a generic polynomial doesn't have any real roots. – Douglas Zare Mar 13 2010 at 17:51
No, the condition of having no real roots is not generic. Rather, it defines a nonempty open set (in the analytic topology) of the space of all degree $2n$ polynomials: e.g. for quadratic polynomials the condition is just $b^2-4ac < 0$. Thus if you endow this space with some reasonable measure, the locus you want will have positive, but not full, measure. In contrast the locus of the set where the Galois group is $S_{2n}$ will have full measure, so morally there's your existence proof. But I didn't immediately see how to make this rigorous, so I did something totally different below. – Pete L. Clark Mar 13 2010 at 18:02
## 2 Answers
An easy way to ensure that a polynomial $g$ of degree $m$ over $\mathbf{Z}$ has Galois group $S_m$ is to take primes $p_1$, $p_2$ and $p_3$ with $g$ irreducible modulo $p_1$, a linear times an irreducible modulo $p_2$ and a bunch of distinct linears times an irreducible quadratic modulo $p_3$. Then the Galois group must be doubly transitive and have a transposition, so it's $S_m$.
Now take $m=2n$ and a polynomial $f$ over $\mathbf{Q}$ with no real roots (e.g. $(x^2+1)^n$). Replacing the coefficients of $f$ by close rationals won't create any real roots. So replace the $x^k$ coefficient of $f$ by a sufficient close rational $a_k/b_k$ where $a_k$ and $b_k$ are congruent modulo $p_1 p_2 p_3$ respectively to the $x^k$ coefficient of $g$ and to $1$. Then the new polynomial has rational coefficients, no real roots and Galois group $S_{2n}$. You can easily convert it to one with these properties and integer coefficients should you wish.
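As an illustration of this criterion, a small sympy sketch (my own; the quartic and primes are arbitrary choices, not from the answer) that reports the factorization pattern of a polynomial modulo several primes:

````python
from sympy import Poly, symbols

x = symbols('x')

def degree_pattern(f, p):
    """Degrees of the irreducible factors of f modulo the prime p."""
    _, factors = Poly(f, x, modulus=p).factor_list()
    return sorted(g.degree() for g, e in factors for _ in range(e))

f = x**4 + x + 1    # an illustrative quartic (irreducible mod 2, pattern [4])
for p in (2, 3, 5, 7, 11, 13):
    print(p, degree_pattern(f, p))

# For m = 4: a pattern [4] witnesses transitivity, [1, 3] gives double
# transitivity, and a squarefree [1, 1, 2] gives a transposition; together
# these force the Galois group to be S_4, as in the argument above.
````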
-
It's easier than that to eliminate real roots. Just add a multiple of $p_1p_2p_3$ greater than the minimum. – Douglas Zare Mar 14 2010 at 0:40
Thanks Douglas, that's a nice trick. +1 My method (basically weak approximation) extends to obtaining any even number of non-real roots. – Robin Chapman Mar 14 2010 at 7:52
Yes, it is always possible.
First note that it suffices to construct a totally complex Galois extension $K/\mathbb{Q}$ of degree $2n$ with Galois group $S_{2n}$. By the primitive element theorem, this extension is of the form $\mathbb{Q}[t]/(f(t))$ for some irreducible polynomial $f$, the minimal polynomial of an algebraic number $\alpha \in K$. Then there exists $N \in \mathbb{Z}^+$ such that $N \alpha$ is an algebraic integer; take the minimal polynomial of that algebraic integer: it generates the same field extension.
To construct the desired extension $K$, in turn it suffices to find an irreducible polynomial with $\mathbb{Q}$-coefficients with no real roots and whose Galois group is the largest possible $S_{2n}$. This is possible by a weak approximation / Krasner's Lemma argument. I will just sketch it for now; I can fill in more details if needed. The idea is to find a finite set of primes $p$ and degree $2n$ polynomials $f_p$ such that the Galois group of $f_p$, as a group of permutations on the roots of $f_p$, is of a certain form (e.g. contains a specific transposition). Also let $f_{\infty}$ be any degree $2n$ polynomial over $\mathbb{R}$ without real roots. Then by Krasner's Lemma, there exists a polynomial $f$ which is sufficiently $p$-adically close to each $f_p$ and to $f_{\infty}$ to have the same local behavior: in particular, to factor the same way over $\mathbb{Q}_p$ and over $\mathbb{R}$ and to generate the same local Galois groups. Then, by identifying the local Galois groups with decomposition groups at $p$ (of unramified extensions), if one has enough primes so as to get permutations of every possible cycle type, then the global Galois group of $f$ certainly must be $S_{2n}$. Indeed, to see this we use the following result from lecture notes of Keith Conrad (and Bertrand's postulate!):
http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/galoisSnAn.pdf
Theorem: For $n \geq 2$, a transitive subgroup of $S_n$ which contains a transposition and a $p$-cycle for some prime $p > \frac{n}{2}$ is $S_n$.
The condition at infinity means that $\mathbb{Q}[t]/(f(t))$ is totally complex, hence so is its splitting field. To ensure that $f$ is irreducible, we may apply Krasner's Lemma again and take its coefficients sufficiently close to those of an irreducible degree $2n$ polynomial over $\mathbb{Q}_p$ (for a different $p$ from those used thus far) so as to be irreducible over $\mathbb{Q}_p$, which implies irreducibility over $\mathbb{Q}$.
This can in principle be made explicit, but I might search the literature for a known classical family of polynomials doing what you want before I tried to carry out this construction explicitly.
-
@Pete: I'm sure this works but I don't quite understand the argument yet. Your strategy shows that I can find f with no real roots and such that for some finite set of primes p in S, f mod p factors in a given way. The upshot is that you can decree the cycle type of Frob_p for p in a finite set. But you can't control which roots are in which cycle, can you? So aren't you left with the following issue: you have to prove that if G is a transitive subgroup of S_{2n} containing an element of each cycle type, then G=S_{2n}. No doubt this is standard but don't you need it to complete the argument? – Kevin Buzzard Mar 13 2010 at 18:33
@Kevin: Thanks for the comment. I completed the argument along the lines you suggested. – Pete L. Clark Mar 13 2010 at 20:04
:-/ Now you mention the result I remember using it about 10 years ago to check that the char poly of T_2 on S_k(SL_2(Z)) had Galois group S_n for all k<=2048. Daft story connected with this: after I checked this I emailed William Stein telling him what I had done, and the next day he emailed me back saying he'd just done k=2050 so now he held the record :-) – Kevin Buzzard Mar 13 2010 at 21:23
@Kevin: that's funny. By the way, why no upvotes? I don't need the reputation, but the corroboration that my argument is correct and understandable is very welcome. – Pete L. Clark Mar 13 2010 at 21:43
By the way, since this is an application of the sort of weak approximation + Krasner argument that I have used in my own work and now introduced in my course on local fields, having looked at Keith's paper it's natural to try to make a similar argument work to get the alternating groups A_n as Galois groups over any global field. [Yes, I know this is due to Hilbert.] But it's not immediately clear to me how to do it -- can anyone help? – Pete L. Clark Mar 13 2010 at 21:45
http://physics.stackexchange.com/questions/24001/what-is-the-mass-density-distribution-of-an-electron
# What is the mass density distribution of an electron?
I am wondering if the mass density profile $\rho(\vec{r})$ has been characterized for atomic particles such as quarks and electrons. I am currently taking an intro class in quantum mechanics, and I have run this question by several professors. It is my understanding from the viewpoint of quantum physics a particle's position is given by a probability density function $\Psi(\vec{r},t)$. I also understand that when books quote the "radius" of an electron they are typically referring to some approximate range into which an electron is "likely" to fall, say, one standard deviation from the expectation value of its position or maybe $10^{-15}$ meters.
However it is my impression that, in this viewpoint, wherever the particle "is" or even whether or not the particle "had" any position to begin with (via the Bell Inequalities), it is assumed that if it were (somehow) found, it would be a point mass. This has been verified by my professors and GSIs. I am wondering if it's really true.
If the particle were truly a point mass then wherever it is, it would presumably have an infinite mass density. Wouldn't that make electrons and quarks indistinguishable from very tiny black holes? Is there any practical difference between saying that subatomic particles are black holes and that they are point masses? I am aware of such problems as Hawking radiation, although at the scales of the Schwarzschild radius of an electron (a back of the envelope calculation yields $\sim 10^{-57}$ meters), would it really make any more sense to use quantum mechanics as opposed to general relativity?
If anyone knows of an upper bound on the volume over which an electron/quark/gluon/anything else is distributed I would be interested to know. A quick Google Search has yielded nothing but the "classical" electron radius, which is not what I am referring to.
Thanks in advance; look forward to the responses.
-
I think the density will not have a symmetric form. It will have varying density according to its speed. It will also have a singularity in the middle of the particle, and it is hard to determine where the middle of the particle is. More importantly, we don't know how fermions are shaped. – 4545454545SI Apr 19 '12 at 3:22
Is there any reason to suspect that it wouldn't? Either way that is why I wrote it as $\rho(\vec{r})$ instead of $\rho(r)$; $\vec{r}=[x,y,z]$ and can take on any value (at least that's the convention we've been using). – clevy Apr 19 '12 at 3:26
I didn't say it would not. I just said we don't know what fermions really look like. You can assume it will strongly relate to atomic spin. – 4545454545SI Apr 19 '12 at 3:38
## 2 Answers
Let me start by saying nothing is known about any possible substructure of the electron. There have been many experiments done to try to determine this, and so far all results are consistent with the electron being a point particle. The best reference I can find is this 1988 paper by Hans Dehmelt (which I unfortunately can't access right now) which sets an upper bound on the radius of $10^{-22}\text{ m}$.
The canonical reference for this sort of thing is the Particle Data Group's list of searches for lepton and quark compositeness. What they actually list in that reference is not exactly a bound on the electron's size in any sense, but rather the bounds on the energy scales at which it might be possible to detect any substructure that may exist within the electron. Currently, the minimum is on the order of $10\text{ TeV}$, which means that for any process occurring up to roughly that energy scale (i.e. everything on Earth except high-energy cosmic rays), an electron is effectively a point. This corresponds to a length scale on the order of $10^{-20}\text{ m}$, so it's not as strong a bound as the Dehmelt result.
Now, most physicists (who care about such things) probably suspect that the electron can't really be a point particle, precisely because of this problem with infinite mass density and the analogous problem with infinite charge density. For example, if we take our current theories at face value and assume that general relativity extends down to microscopic scales, a point-particle electron would actually be a black hole with a radius of $10^{-57}\text{ m}$. However, as the Wikipedia article explains, the electron's charge is larger than the theoretically allowed maximum charge of a black hole of that mass. This would mean that either the electron would be a very exotic naked singularity (which would be theoretically problematic), or general relativity has to break down at some point before you get to that scale. It's commonly believed that the latter is true, which is why so many people are occupied by searching for a quantum theory of gravity.
However, as I've mentioned, we do know that whatever spatial extent the electron may have cannot be larger than $10^{-22}\text{ m}$, and we're still two orders of magnitude away from probing that with the most powerful particle accelerator in the world. So for at least the foreseeable future, the electron will effectively be a point.
-
So the answer is no one really knows the mass density profile of an electron? – clevy Apr 19 '12 at 6:08
I see your point :) Thank you very much. – clevy Apr 19 '12 at 6:13
Yep, nobody knows, except that we know $\rho(\vec{r})$ is negligible for $|\vec{r}| \gtrsim 10^{-22}\text{ m}$. – David Zaslavsky♦ Apr 19 '12 at 6:18
David Zaslavsky has given a solid, relatively model-independent explanation of the empirical bounds on the size of an electron based on particle-physics experiments that probe short distance scales by using collisions at short wavelengths. There is also another way of getting at this question, which has been studied by people who have tried to model quarks and leptons as composites of more fundamental particles called preons. If the preons are confined to a space of linear size $x$, then the uncertainty principle says their mass-energy is at least about $\hbar/x$. But given even a relatively weak bound on $x$, this makes the mass-energy of the preons greater than the mass of the electron they supposedly make up. This is called the confinement problem. Various people (e.g., 't Hooft 1979) have worked out various possible ways to get around the confinement problem, but essentially the confinement problem makes these ideas unlikely to work out.
-
http://physics.stackexchange.com/questions/15538/explain-rho-0-dote-bfpt-bf-dotf-nabla-0-cdot-bfq-r/15545
# Explain $\rho_{0}\dot{e} - \bf{P}^{T} : \bf{\dot{F}}+\nabla_{0} \cdot \bf{q} -\rho_{0}S = 0$
I am trying to understand the balance-of-energy law from continuum mechanics (the fourth law here). Could someone break this down a bit to help me understand it? From chemistry, I can recall $$dU = \delta Q + \delta W$$ where $U$ is the internal energy, $Q$ is heat and $W$ is the work. How is the fourth conservation law in CM:
$$\rho_{0}\dot{e} - \bf{P}^{T} : \bf{\dot{F}}+\nabla_{0} \cdot \bf{q} -\rho_{0}S = 0$$
related to that?
Terms
• $e(\bar{x}, t) = \text{internal energy per mass}$
• $q(\bar{x}, t) = \text{heat flux vector}$
• $\rho(\bar{x}, t) = \text{mass density}$
Operations
• $:$ operation $=$ Frobenius inner product? (related)
• $\dot{v} = \text{time derivative of } v$
• $M^{T} = \text{transpose of matrix } M$
-
Add bounty to your question if you want to get answers faster. – Masi Oct 9 '11 at 18:00
2
It would improve your question significantly if you can be a bit more specific about what you're looking for than just saying "how are they related?" – David Zaslavsky♦ Oct 9 '11 at 18:52
## 1 Answer
You could add a reference to the Wikipedia article where you got this equation from (the Lagrangian description is simpler to understand, I think; the terms have the same meaning, but are in the current configuration). So going one after another:
1. change of internal energy $e$, per unit mass (so multiplied by density)
2. change of elastic energy (elastic potential, stored elastic energy, deformation energy); the $:$ means tensor contraction, $\mathbf{P}$ is the Piola-Kirchhoff stress, $\mathbf{\dot F}$ is the rate of the deformation gradient.
3. divergence of heat, i.e. change of heat in the volume element
4. material energy change (think e.g. of a chemical reaction going on at the point, which produces energy not taken into account by the other terms)
All those terms have to balance each other, i.e. sum to (scalar) zero.
Going back to your chemistry equation, the second term is (mechanical) work, the third is heat flux, and the first and last terms are internal energy (which is broken down into two sources, though it can be written as $\rho_0(\dot e - S)$; adjust signs accordingly).
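To make the correspondence explicit, one can rearrange the balance law (just bookkeeping with the signs, nothing beyond what is said above):

$$\underbrace{\rho_0 \dot{e}}_{\text{rate of } U} = \underbrace{\mathbf{P}^{T} : \dot{\mathbf{F}}}_{\text{work rate, } \delta W} + \underbrace{\left(-\nabla_0 \cdot \mathbf{q} + \rho_0 S\right)}_{\text{net heating rate, } \delta Q}$$

i.e. the conservation law is the time-rate, per-unit-reference-volume version of $dU = \delta Q + \delta W$.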
-
http://physics.stackexchange.com/questions/tagged/topology+quantum-hall-effect
Tagged Questions
First Chern number, monopoles and quantum Hall states
The first Chern number $\cal C$ is known to be related to various physical objects. Gauge fields are known as connections of some principal bundles. In particular, the principal $U(1)$ bundle is said to ...
Questions about Thouless-Kohmoto-Nightingale-den Nijs (TKNN) paper
I am reading the famous and concise Thouless-Kohmoto-Nightingale-den Nijs (TKNN) paper Quantized Hall Conductance in a Two-Dimensional Periodic Potential, Phys. Rev. Lett. 49, 405–408 (1982), where I ...
Chern number in condensed matter physics
In mathematics, the Chern number is defined in terms of the Chern class of a manifold. What is the exact definition of the Chern number in condensed matter physics, i.e. a quantum Hall system?
http://math.stackexchange.com/questions/235320/stamp-problem-homework
# Stamp Problem Homework
Suppose that you have a large supply of $3$ and $7$ cent stamps. Write a recurrence relation and initial conditions for the number $S_n$ of different ways in which n cents worth of stamps can be attached to an envelope if the order in which the stamps are attached does matter.
-
What is the trouble? – user32240 Nov 12 '12 at 0:56
I know you have to set the initial conditions so that S0 = 3, and S1 = 7. But I do not know how to set up the equation after that. – MKZ Nov 12 '12 at 0:58
Let us set up the reverse recurrence scheme: $S_{n}=S_{n-7}+S_{n-3}$. Your setup makes little sense, as $0,1$ cents have no solutions at all - they are impossible. – user32240 Nov 12 '12 at 1:01
Note: order does not matter. Hence e.g. $S_{10}=1 \neq 2=S_3+S_7$. – Douglas S. Stones Nov 12 '12 at 1:03
## 2 Answers
The ordered case, as previously noted, satisfies the recurrence $$S_{\mathrm{ord}}(n)=S_{\mathrm{ord}}(n-3)+S_{\mathrm{ord}}(n-7).$$
For the unordered case, I give the following hint:
Hint (since this is flagged as homework): We can adjust this recurrence to account for the "order is not important" by adding in a correction term to account for overcounting. So, we have $$S(n)=S(n-3)+S(n-7)-[???].$$ There's quite an elegant reason for this, and I don't want to spoil this for you.
The initial conditions are merely bookkeeping (they can be counted by hand, once you know the depth of the recurrence).
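For the ordered recurrence above, a short memoized sketch (my addition; the base case $S_{\mathrm{ord}}(0)=1$ counts the empty sequence):

````python
from functools import lru_cache

@lru_cache(maxsize=None)
def s_ordered(n):
    """Number of ordered sequences of 3- and 7-cent stamps summing to n."""
    if n < 0:
        return 0
    if n == 0:
        return 1                     # the empty sequence
    return s_ordered(n - 3) + s_ordered(n - 7)

print([s_ordered(n) for n in range(15)])
# [1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 2, 0, 1, 3, 1]
# Note s_ordered(10) == 2 (3+7 and 7+3), matching the comment that the
# unordered count S_10 = 1 differs from the ordered one.
````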
-
If you want to put $n$ cents worth of stamps, you can either put one $7$ cent stamp and $n-7$ cents worth of stamps (i.e. $S_{n-7}$), provided that $n \geq 7$, or one $3$ cent stamp and $n-3$ cents worth of stamps (i.e. $S_{n-3}$), provided that $n \geq 3$. As far as initial conditions are concerned, you just have to analyse what is going on for $n < 7$, which should be fairly easy.
-
So for all n >= 7, Sn = 7 + S(n-7) and for all n >= 3, Sn = 3 + S(n-3)? How would that work for n = 4? You can't use a 3 cent stamp and then have 1 cent of another. – MKZ Nov 12 '12 at 1:04
Actually, I did not see the part where you say that the order does not matter. However, for the ordered case, the relation would be for $n \geq 7, S_n = S_{n-7} + S_{n-3}$. – beauby Nov 12 '12 at 1:06
Sorry, I edited the original question. Order does matter – MKZ Nov 12 '12 at 1:08
No, $S(n) = S(n-3) + S(n-7)$ for $n\ge 7$. Smaller than that you do by hand. Note however that $S(0)=1$. And I guess this is ordered, because the choice in the recursion (take 3 or 7) fixes the value of the first stamp, etc. – Hendrik Jan Nov 12 '12 at 1:11
Can you work out why this is true? – MKZ Nov 12 '12 at 1:19
http://mathoverflow.net/questions/50298?sort=oldest
## co-$A_\infty$ spaces
A co-$A_n$ space is a based space $Y$ equipped with a co-action by the Stasheff associahedron operad $K_\bullet$. This means that $Y$ comes with certain maps $c_n: Y \times K_n \to Y^{\vee n}$, $n = 2,3,\dots$ that are inductively described (the definition of $c_n$ uses $c_{n-1}$ as input; the map $c_2$ is a co-$H$ structure). The suspension of a based space $X$ has the structure of a co-$A_\infty$ space.
Assume $Y$ is $2$-connected and has the homotopy type of a finite complex. Then Schwaenzl, Vogt and I showed that a co-$A_\infty$ space $Y$ desuspends to a space $X$ in the sense that there's a weak equivalence $\Sigma X \simeq Y$.
However we didn't try to check that the given weak equivalence is compatible in the co-$A_\infty$ sense. Part of the problem is that a morphism $f: Y \to Z$ of co-$A_\infty$ spaces should amount to a co-$A_\infty$-structure on its mapping cylinder restricting to the given ones on $Y \times 1$ and $Z$. However, this doesn't form a category: it's an $\infty$-category.
Now to my questions:
Question 1: is there a documented proof somewhere that the functor which assigns to a based space $X$ its suspension (considered as an co-$A_\infty$ space) induces an equivalence between the homotopy category of $1$-connected spaces and $2$-connected co-$A_\infty$ spaces?
Presumably, such a proof should be Hilton-Eckmann dual to one of the main results in the Book of Boardman and Vogt.
Question 2: Do function spaces coincide up to weak equivalence under this functor? That is, is the map $$\hom_{\text{Top}_*}(X,X') \to \hom_{\text{co-}A_\infty}(\Sigma X,\Sigma X')$$ a weak equivalence under suitable hypotheses on $X$ and $X'$?
By $\hom$ in each case, I mean topologized mapping spaces.
How would one go about proving a result like this?
-
Prof. Klein, there was some discussion of questions related to this here (but in a number of respects it seems you are already more informed): mathoverflow.net/questions/4117/… Also, welcome! – Tyler Lawson Dec 24 2010 at 21:00
Thanks Tyler, I wasn't aware of that discussion (and you can call me John if you wish). The two references that speak about matters in this direction are: 1. Hopkins, M.J. Formulations of cocategory and the iterated suspension. Algebraic homotopy and local algebra (Luminy, 1982), 212–226, Astérisque, 113-114, Soc. Math. France, Paris, 1984 2. Klein, J.; Schwänzl, R.; Vogt, R. M. Comultiplication and suspension. Topology Appl. 77 (1997), no. 1, 1–18 The first of these gives a Segal-type approach to desuspension (by finding a model for a cobar construction) which isn't operadic. – John Klein Dec 24 2010 at 21:33
## 1 Answer
I have the feeling many of us would agree those statements `should' be true, but I can't think of anyone who wrote things down publicly. Why not ask at alg-top? Overflow requires actively logging in; alg-top doesn't.
jim
-
http://mathoverflow.net/questions/63412/upper-bounds-for-the-sum-of-primes-n/65493
## Upper bounds for the sum of primes <= n
Let $s(n)$ denote the sum of primes less than or equal to n. Clearly, $s(n)$ is bounded from above by the sum of the first $n/2$ odd integers $+1$. $s(n)$ is also bounded by the sum of the first $n$ primes, which is asymptotically equivalent to $\frac{n^2}{2\log{n}}$. It should thus be possible to find estimates for $s(n)$ using the fact that for an $\epsilon > 0$ and $n$ large enough $s(n) < (1+\epsilon)\frac{n^2}{\log{n}}.$
I would like to know if there are any known sharp upper bounds for $s(n)$. That is, I am looking for a function $f(n)$ such that for every $n > N_0$ $$s(n) \leq f(n)$$
As a way of relaxing the question, $s(n)$ could be regarded as the sum of the primes in the interval $[c,n]$ given a constant $c$.
-
I would have thought the sum of the first $n$ primes was asymptotically equivalent to $(n^2 \log n)/2$. (The $n$th prime is near $n \log n$, so the sum of the integers up to the $n$th prime is near $(n \log n)^2/2$, but only a proportion $1/\log n$ of the summands are prime. – Michael Lugo Apr 29 2011 at 14:28
Did you already try to use Weil's explicit formula? Appropriately chosen test functions, e.g. $e^{x \tanh(x)} e^{-x^2 \delta}$, should give you an upper bound. – Marc Palm Apr 29 2011 at 14:33
## 5 Answers
By partial summation $$s(n) = n\pi(n)-\sum_{m=2}^{n-1}\pi(m)$$ so by the Prime Number Theorem $$s(n) = \frac{n^2}{\log n}-\sum_{m=2}^{n-1}\frac{m}{\log m}+O\left(\frac{n^2}{\log^2 n}\right).$$ The sum on the right is $$\sum_{m=2}^{n-1}\frac{m}{\log m} = \int_2^n \frac{x}{\log x}dx + O\left(\frac{n}{\log n}\right)$$ using the monotonicity properties of the integrand. Now the integral equals, by partial integration, $$\int_2^n \frac{x}{\log x}dx = \left[\frac{x^2}{2\log x}\right]_2^n + \int_2^n \frac{x}{2\log^2 x}dx = \frac{n^2}{2\log n} + O\left(\frac{n^2}{\log^2 n}\right).$$ Altogether we have $$s(n) = \frac{n^2}{2\log n} + O\left(\frac{n^2}{\log^2 n}\right).$$ This can be made more precise both numerically and theoretically.
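A quick numerical sanity check of the main term (my addition, using sympy's prime generator):

````python
from math import log
from sympy import primerange

for n in (10**3, 10**4, 10**5, 10**6):
    s = sum(primerange(2, n + 1))          # s(n): sum of primes <= n
    print(n, s, s / (n**2 / (2 * log(n))))
# The last column drifts toward 1 only slowly, consistent with the
# O(n^2 / log^2 n) error term above.
````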
-
It is not difficult to calculate upper bounds on $s(n)$ from bounds on the prime counting function $\pi(n)$. Just use integration by parts, $$s(n) = \int_0^n x\,d\pi(x) = n\pi(n) - \int_0^n\pi(x)\,dx.$$ I'm not sure what the currently best known bounds for $\pi(x)$ are but, checking Wikipedia, gives $$\frac{x}{\log x}\left(1+\frac{1}{\log x}\right) < \pi(x) < \frac{x}{\log x}\left(1+\frac{1}{\log x}+\frac{2.51}{(\log x)^2}\right)$$ with the left hand inequality holding for $x\ge599$ and the right hand holding for $x\ge355991$. So,
$$s(n)\le \frac{n^2}{\log n}\left(1+\frac{1}{\log n}+\frac{2.51}{(\log n)^2}\right)-\int^n\left(1+\frac{1}{\log x}\right)\frac{x\,dx}{\log x}+c$$ (where $c$ is a constant which you can compute if you feel so inclined). Applying integration by parts,
$$s(n)\le\frac{n^2}{2\log n}\left(1+\frac{1}{\log n}+\frac{5.02}{(\log n)^2}\right)-\frac12\int^n\left(1+\frac{2}{\log x}\right)\frac{x\,dx}{(\log x)^2}+c$$
Bounding $\log x\le\log n$ in the integral gives a bound
$$s(n)\le\frac{n^2}{2\log n}\left(1+\frac{1}{2\log n}+\frac{4.02}{(\log n)^2}\right)+c$$
You can also take $c=0$ if you only require the bound to hold for $n\ge N$ (some $N$), since the term I neglected in the integral by applying $\log x\le \log n$ grows without bound, and will eventually dominate any constant term. Obviously, if you know any better bounds for $\pi(n)$ then you will get improved bounds for $s(n)$. For example, the same Wikipedia article linked to above states that $\left\vert\pi(x)-{\rm Li}(x)\right\vert\le\frac{\sqrt{x}\log x}{8\pi}$ for $x\ge2657$ under the assumption that the Riemann hypothesis holds.
-
+1 nice answer! – Marc Palm Apr 29 2011 at 15:22
There definitely are earlier references than our book. An asymptotic formula for
$\sum_{p \leq x} p^a$
is in T. Salát and S. Znám, On the sums of the prime powers, Acta Fac. Rer. Nat. Univ. Com. Math. 21 (1968), pp. 21-24. (Cited by Spearman & Williams--I actually have not seen this paper.) It probably goes back further than that. The natural place to look would be Landau's "Primzahlen"--I forget the exact title--but I was unable to find that sum in there.
Eric.
-
Welcome to MathOVerflow, Eric! – Charles May 20 2011 at 4:04
The following paper gives the asymptotic expansion of the sum of the first $n$ prime numbers. Hence for sufficiently large $n$, the first few positive and negative terms of the asymptotic expansion will give best upper and lower bound on the sum of primes.
http://arxiv.org/pdf/1011.1667.pdf
$$\sum_{r \le n}p_r = \frac{n^2}{2}\Bigg[\ln n + \ln\ln n - \frac{3}{2} + \frac{\ln\ln n}{\ln n} - \frac{3}{\ln n} - \frac{\ln^2 \ln n}{2\ln^2 n} + \frac{7 \ln \ln n}{2\ln^2 n} - \frac{27}{4\ln^2 n} + o\Bigg(\frac{1}{\ln^2 n}\Bigg) \Bigg].$$
-
Until a better answer appears. Here is a link:
http://mathworld.wolfram.com/PrimeSums.html
It says that
$$s(p_n) \sim n^2 \log n /2.$$
where $p_n$ is the $n$-th prime.
Perhaps you want to look at the reference, and figure out if you can make the bound effective.
-
So Michael Lugo's heuristic actually can be made effective... – Marc Palm Apr 29 2011 at 15:08
Not exactly: there is a difference between the sum of primes less than or equal to $n$, and the sum of the $n$ first primes. I must have missed something: how could $s(n) \sim n^2 \ln(n)/2$ have not be proven before 1996 by Bach and Shallit? I do not see what is wrong with the fact that, since, $p_n \sim n \ln(n)$ (prime number theorem), one has: $\sum_{k=1}^n p_k \sim \sum_{k=1}^n k \ln(k) \sim \int_1^n t\ln(t) \mathrm{d}t \sim n^2\ln(n)/2$ – Bernikov Apr 29 2011 at 15:11
Sorry, I have missed that difference. I did not read carefully. – Marc Palm Apr 29 2011 at 15:18
Fixed the mistake... – Marc Palm Apr 29 2011 at 15:25
http://cstheory.stackexchange.com/questions/593/is-there-a-stable-heap
# Is there a stable heap?
Is there a priority queue data structure that supports the following operations?
• Insert(x, p): Add a new record x with priority p
• StableExtractMin(): Return and delete the record with minimum priority, breaking ties by insertion order.
Thus, after Insert(a, 1), Insert(b, 2), Insert(c, 1), Insert(d,2), a sequence of StableExtractMin's would return a, then c, then b, then d.
Obviously one could use any priority queue data structure by storing the pair $(p, time)$ as the actual priority, but I'm interested in data structures that do not explicitly store the insertion times (or insertion order), by analogy to stable sorting.
Equivalently(?): Is there a stable version of heapsort that does not require $\Omega(n)$ extra space?
-
I think you mean "a, then c, then b, then d"? – Ross Snider Aug 25 '10 at 23:32
Heap with linked list of records + balanced binary tree keyed on priority pointing to corresponding linked list won't work? What am I missing? – Aryabhata Aug 26 '10 at 1:05
Moron: That's storing the insertion order explicitly, which is precisely what I want to avoid. I clarified the problem statement (and fixed Ross's typo). – JɛffE Aug 26 '10 at 5:25
## 6 Answers
The Bentley-Saxe method gives a fairly natural stable priority queue.
Store your data in a sequence of sorted arrays $A_0,\ldots,A_k$. $A_i$ has size $2^i$. Each array also maintains a counter $c_i$. The array entries $A_i[c_i],\ldots,A_i[2^i-1]$ contain data.
For each $i$, all elements in $A_i$ were added more recently than those in $A_{i+1}$ and within each $A_i$ elements are ordered by value with ties being broken by placing older elements ahead of newer elements. Note that this means we can merge $A_i$ and $A_{i+1}$ and preserve this ordering. (In the case of ties during the merge, take the element from $A_{i+1}$.)
To insert a value $x$, find the smallest $i$ such that $A_i$ contains 0 elements, merge $A_0,\ldots,A_{i-1}$ and $x$, store this in $A_i$ and set $c_0,\ldots,c_i$ appropriately.
To extract the min, find the largest index $i$ such that the first element in $A_i[c_i]$ is minimum over all $i$ and increment $c_i$.
By the standard argument, this gives $O(\log n)$ amortized time per operation and is stable because of the ordering described above.
For a sequence of $n$ insertions and extractions, this uses $n$ array entries (don't keep empty arrays) plus $O(\log n)$ words of bookkeeping data. It doesn't answer Mihai's version of the question, but it shows that the stable constraint doesn't require a lot of space overhead. In particular, it shows that there is no $\Omega(n)$ lower-bound on the extra space needed.
Update: Rolf Fagerberg points out that if we can store null (non-data) values, then this whole data structure can be packed into an array of size $n$, where $n$ is the number of insertions so far.
First, notice that we can pack the $A_k,\ldots,A_0$ into an array in that order (with $A_k$ first, followed by $A_{k-1}$ if it's non-empty, and so on). The structure of this is completely encoded by the binary representation of $n$, the number of elements inserted so far. If the binary representation of $n$ has a 1 at position $i$, then $A_i$ will occupy $2^i$ array locations, otherwise it will occupy no array locations.
When inserting, $n$, and the length of our array, increase by 1, and we can merge $A_0,\ldots,A_i$ plus the new element using existing in-place stable merging algorithms.
Now, where we use null values is in getting rid of the counters $c_i$. In $A_i$, we store the first value, followed by $c_i$ null values, followed by the remaining $2^i-c_i-1$ values. During an extract-min, we can still find the value to extract in $O(\log n)$ time by examining $A_0[0],\ldots,A_k[0]$. When we find this value in $A_i[0]$ we set $A_i[0]$ to null and then do binary search on $A_i$ to find the first non-null value $A_i[c_i]$ and swap $A_i[0]$ and $A_i[c_i]$.
The end result: The entire structure can be implemented with one array whose length is incremented with each insertion, and one counter, $n$, that counts the number of insertions.
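To make the construction concrete, here is a short Python sketch (my own illustration, not from the original answer). It merges whole levels on insertion instead of maintaining the $c_i$ counters, so the strict $2^i$ sizing is relaxed, but the two invariants that give stability are kept: every item in level $i$ is newer than every item in level $i+1$, and merges break ties in favor of the older side.

````python
class StablePQ:
    """Sketch of the Bentley-Saxe stable priority queue described above."""

    def __init__(self):
        self.levels = []                 # each entry: None or a sorted list

    @staticmethod
    def _merge(newer, older):
        out, a, b = [], 0, 0
        while a < len(newer) and b < len(older):
            if older[b] <= newer[a]:     # on ties the older element wins
                out.append(older[b]); b += 1
            else:
                out.append(newer[a]); a += 1
        return out + newer[a:] + older[b:]

    def insert(self, x):
        run, i = [x], 0
        while i < len(self.levels) and self.levels[i]:
            run = self._merge(run, self.levels[i])   # levels[i] is older
            self.levels[i] = None
            i += 1
        if i == len(self.levels):
            self.levels.append(None)
        self.levels[i] = run

    def extract_min(self):
        best = None                      # highest (oldest) level wins ties
        for i, lev in enumerate(self.levels):
            if lev and (best is None or lev[0] <= self.levels[best][0]):
                best = i
        return self.levels[best].pop(0)  # O(n) pop; a front counter as in
                                         # the answer would make it O(1)
````

Items are compared directly, so to carry a record one would wrap it in an object whose ordering compares only the priority; `extract_min` then returns equal-priority items oldest first.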
-
I'm not sure what your constraints are; does the following qualify? Store the data in an array, which we interpret as an implicit binary tree (like a binary heap), but with the data items at the bottom level of the tree rather than at its internal nodes. Each internal node of the tree stores the smaller of the values copied from its two children; in case of ties, copy the left child.
To find the minimum, look at the root of the tree.
To delete an element, mark it as deleted (lazy deletion) and propagate up the tree (each node on the path to the root that held a copy of the deleted element should be replaced with a copy of its other child). Maintain a count of deleted elements and if it ever gets to be too large a fraction of all elements then rebuild the structure preserving the order of the elements at the bottom level — the rebuild takes linear time so this part adds only constant amortized time to the operation complexity.
To insert an element, add it to the next free position on the bottom row of the tree and update the path to the root. Or, if the bottom row becomes full, double the size of the tree (again with an amortization argument; note that this part is not any different from the need to rebuild when a standard binary heap outgrows its array).
It's not an answer to Mihai's stricter version of the question, though, because it uses twice as much memory as a true implicit data structure should, even if we ignore the space cost of handling deletions lazily.
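A rough Python sketch of this tree (mine, not the answerer's; fixed capacity, leaves filled left to right in insertion order, and the periodic rebuild that bounds the deleted fraction is omitted):

````python
class TournamentPQ:
    def __init__(self, capacity):
        self.n = 1
        while self.n < capacity:
            self.n *= 2
        self.tree = [None] * (2 * self.n)   # leaves live in tree[n : 2n]
        self.next_leaf = self.n             # never reused: lazy deletion

    @staticmethod
    def _better(a, b):
        if a is None: return b
        if b is None: return a
        return a if a <= b else b           # tie -> left (older) child

    def _pull_up(self, i):
        while i > 1:
            i //= 2
            self.tree[i] = self._better(self.tree[2*i], self.tree[2*i + 1])

    def insert(self, x):                    # assumes capacity not exceeded
        self.tree[self.next_leaf] = x
        self._pull_up(self.next_leaf)
        self.next_leaf += 1

    def extract_min(self):                  # assumes the queue is non-empty
        m = self.tree[1]
        i = 1
        while i < self.n:                   # descend to the winning leaf;
            i = 2*i if self.tree[2*i] == m else 2*i + 1   # ties go left
        self.tree[i] = None                 # lazy delete, then repair path
        self._pull_up(i)
        return m
````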
-
I like this. Just like with a regular implicit tree min-heap, probably 3-ary or 4-ary implicit tree will be faster because of cache effects (even though you need more comparisons). – Jonathan Graehl Aug 30 '10 at 19:21
Is the following a valid interpretation of your problem:
You have to store $N$ keys in an array $A[1..N]$ with no auxiliary information, such that you can support:
• insert key
• delete-min, which picks the earliest inserted element if there are multiple minima
This appears quite hard, given that most implicit data structures play the trick of encoding bits in the local ordering of some elements. Here, if multiple guys are equal, their ordering must be preserved, so no such tricks are possible.
Interesting.
-
I think this should be a comment, not an answer, as it doesn't really answer the original question. (You can delete it and add it as a comment.) – Jukka Suomela Aug 26 '10 at 21:23
Yeah, this website is a bit ridiculous. We have reputations, bonuses, rewards, all sorts of ways to comment that I can't figure out. I wish this would look less like a kids' game. – Mihai Aug 26 '10 at 21:32
I think he needs more rep to post a comment. that's the problem. – Suresh Venkat♦ Aug 26 '10 at 21:38
@Suresh: Oh, right, I didn't remember that. How are we actually supposed to handle this kind of situation (i.e., a new user needs to ask for clarifications before answering a question)? – Jukka Suomela Aug 26 '10 at 21:42
no easy way out. I've seen this often on MO. Mihai will have no trouble gaining rep, if its the Mihai I think it is :) – Suresh Venkat♦ Aug 26 '10 at 21:53
Short answer: you can't.
Slightly longer answer:
You'll need $\Omega(n)$ extra space to store the "age" of your entries, which will allow you to discriminate between identical priorities. And you'll need $\Omega(n)$ space for information that will allow fast insertions and retrievals. Plus your payload (value and priority).
And, for each payload you store, you'll be able to "hide" some information in the address (e.g. $addr(X) < addr(Y)$ means $Y$ is older than $X$). But in that "hidden" information, you'll either hide the "age" OR the "fast retrieval" information. Not both.
Very long answer with inexact, flaky pseudo-math:
Note: the very end of the second part is sketchy, as mentioned. If some math guy could provide a better version, I'd be grateful.
Let's think about the amount of data that is involved on an X-bit machine (say 32 or 64-bit), with records (value and priority) $P$ machine words wide.
You have a set of potential records that is partially ordered: $(a,1) < (a,2)$ and $(a,1) = (a,1)$, but you can't compare $(a,1)$ and $(b,1)$.
However you want to be able to compare two non-comparable values from your set of records, based on when they were inserted. So you have here another set of values: those that have been inserted, and you want to enhance it with a partial order: $X < Y$ iff $X$ was inserted before $Y$.
In the worst-case scenario, your memory will be filled with records of the form $(?,1)$ (with $?$ different for each one), so you'll have to rely entirely upon the insertion time in order to decide which one goes out first.
• The insertion time (relative to other records still in the structure) requires $X - \log_2(P)$ bits of information (with a $P$-word payload and $2^X$ accessible bytes of memory).
• The payload (your record's value and priority) requires $P$ machine words of information.
That means that you must somehow store $X - \log_2(P)$ extra bits of information for each record you store. And that's $O(n)$ for $n$ records.
Now, how many bits of information does each memory "cell" provide us?
• $W$ bits of data ($W$ being the machine word width).
• $X$ bits of address.
Now, let's assume $P \geq 1$ (the payload is at least one machine word wide). This means that $X - \log_2(P) \le X$, so we can fit the insertion-order information in the cell's address. That's what happens in a stack: cells with the lowest address entered the stack first (and will get out last).
So, to store all our information, we have two possibilities:
• Store the insertion order in the address, and the payload in memory.
• Store both in memory and leave the address free for some other usage.
Obviously, in order to avoid waste, we'll use the first solution.
Now for the operations. I suppose you wish to have:
• $Insert(task, priority)$ with $O(\log n)$ time complexity.
• $StableExtractMin()$ with $O(\log n)$ time complexity.
Let's look at $StableExtractMin()$:
The really, really general algorithm goes like this:
1. Find the record with minimum priority and minimum "insertion time" in $O(\log n)$.
2. Remove it from the structure in $O(\log n)$.
3. Return it.
For example, in the case of a heap, it will be organized slightly differently, but the work is the same:
1. Find the min record in $O(1)$.
2. Remove it from the structure in $O(1)$.
3. Fix everything so that next time #1 and #2 are still $O(1)$, i.e. "repair the heap". This needs to be done in $O(\log n)$.
4. Return the element.
Going back to the general algorithm, we see that to find the record in $O(\log n)$ time, we need a fast way to choose the right one between $2^{X - \log_2(P)}$ candidates (worst case, memory is full).
This means that we need to store $X - \log_2(P)$ bits of information in order to retrieve that element (each bit bisects the candidate space, so we have $O(\log n)$ bisections, meaning $O(\log n)$ time complexity).
These bits of information might be stored as the address of the element (in the heap, the min is at a fixed address), or with pointers, for example (in a binary search tree (with pointers), you need to follow $O(\log n)$ pointers on average to get to the min).
Now, when deleting that element, we'll need to augment the next min record so it has the right amount of information to allow $O(\log n)$ retrieval next time, that is, so it has $X - \log_2(P)$ bits of information discriminating it from the other candidates.
That is, if it doesn't already have enough information, you'll need to add some. In a (non-balanced) binary search tree, the information is already there: you'll have to put a NULL pointer somewhere to delete the element, and without any further operation, the BST is searchable in $O(\log n)$ time on average.
After this point, it's slightly sketchy; I'm not sure how to formulate it. But I have the strong feeling that each of the remaining elements in your set will need $X - \log_2(P)$ bits of information that will help find the next min and augment it with enough information so that it can be found in $O(\log n)$ time next time.
The insertion algorithm usually just needs to update part of this information, I don't think it will cost more (memory-wise) to have it perform fast.
Now, that means that we'll need to store $X - \log_2(P)$ more bits of information for each element. So, for each element, we have:
• The insertion time, $X - \log_2(P)$ bits.
• The payload, $P$ machine words.
• The "fast search" information, $X - \log_2(P)$ bits.
Since we already use the memory contents to store the payload, and the address to store the insertion time, we don't have any room left to store the "fast search" information. So we'll have to allocate some extra space for each element, and so "waste" $\Omega(n)$ extra space.
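This trade-off is exactly what the standard workaround accepts in practice: pay one extra machine word per element for an explicit insertion counter. A minimal Python sketch (my addition, not from the original post; the class and method names are made up for illustration):
````
import heapq
import itertools

class StablePQ:
    """Stable priority queue: ties are broken by insertion order (FIFO)."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # explicit "insertion time", one word per record

    def insert(self, task, priority):
        # O(log n); the counter is the extra per-element information argued above.
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def stable_extract_min(self):
        # O(log n); among equal priorities, the oldest insertion wins.
        priority, _, task = heapq.heappop(self._heap)
        return task, priority

pq = StablePQ()
for name in ["a", "b", "c"]:
    pq.insert(name, priority=1)   # all equal priority
print([pq.stable_extract_min()[0] for _ in range(3)])  # ['a', 'b', 'c']
````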
-
did you really intend to make your answer CW ? – Suresh Venkat♦ Aug 28 '10 at 7:06
Yes. My answer isn't 100% correct, like stated within, and it'd be good if anybody could correct it even if I'm not on SO anymore or whatever. Knowledge should be shared, knowledge should be changeable. But maybe I misunderstood the usage of CW, if so, please tell me :) . EDIT : whoops, indeed I just discovered that I won't get any rep from CW posts and that the content is CC-wiki licenced in any way... Too bad :). – Georges Dupéron Aug 29 '10 at 16:07
If you implement your priority queue as a balanced binary tree (a popular choice), then you just have to make sure that when you add an element to the tree, it gets inserted to the left of any elements with equal priority.
This way, the insertion order is encoded in the structure of the tree itself.
-
But this adds O(n) space for the pointers, which I think is what the questioner wants to avoid? – Jeremy Jun 27 '11 at 22:25
I don't think that's possible.
Concrete case:
````
        x
    x       x
  x   x   1   x
 1 x
````
A min-heap with all x > 1.
Heapifying will eventually reach a configuration with a choice, like so:
````
        x
    1       1
  x   x   x   x
 x x
````
Now, which 1 should propagate to the root?
-
http://mathhelpforum.com/calculus/43790-double-integrals-polar-coordinates.html
Thread:
1. Double Integrals and Polar Coordinates
Hi there,
First off, I'm having a bit of trouble changing this integral's bounds so that I can evaluate it starting by integrating the dx part first. Any helps/tips will be appreciated.
$\int_1^3{\int_0^{\ln{x}} xdy}dx$
Second, I've just finished learning how to change variables into polar coordinates, but I'm still a bit confused on how to go about doing so. Here is an example I'm trying to evaluate, but I'm not quite sure where to start:
$\int_0^4{\int_{\sqrt{x}}^2{e}^{y^3}dy}dx$
Thanks.
2. Originally Posted by discretemather
Hi there,
First off, I'm having a bit of trouble changing this integral's bounds so that I can evaluate it starting by integrating the dx part first. Any helps/tips will be appreciated.
$\int_1^3{\int_0^{\ln{x}} xdy}dx$
Second, I've just finished learning how to change variables into polar coordinates, but I'm still a bit confused on how to go about doing so. Here is an example I'm trying to evaluate, but I'm not quite sure where to start:
$\int_0^4{\int_{\sqrt{x}}^2{e}^{y^3}dy}dx$
Thanks.
Why do you have to change the integration order in the first one?
And the second one you do not need to change to polar coordinates, just change the integration order.
3. Originally Posted by Mathstud28
Why do you have to change the integration order in the first one?
And the second one you do not need to change to polar coordinates, just change the integration order.
For the first one, I have to show that integrating it one way gives the same result as the other way.
For the second one, if changing the integration order, wouldn't I also have to change the bounds like in the first one?
4. Originally Posted by discretemather
Hi there,
First off, I'm having a bit of trouble changing this integral's bounds so that I can evaluate it starting by integrating the dx part first. Any helps/tips will be appreciated.
$\int_1^3{\int_{{\color{red}y = } 0}^{ {\color{red}y = }\ln{x}} xdy}dx$
[snip]
I find it helps to add the stuff in red. Did you try drawing the region that the integral terminals define? When you do, it should be evident that when you reverse the order of integration you have
$\int_{y=0}^{y=\ln 3} \int_{x=e^y}^{x=3} x \, dx \, dy$.
5. Originally Posted by discretemather
[snip]
Second, I've just finished learning how to change variables into polar coordinates, but I'm still a bit confused on how to go about doing so. Here is an example I'm trying to evaluate, but I'm not quite sure where to start:
$\int_0^4{\int_{{\color{red}y = }\sqrt{x}}^{{\color{red}y = }2} {e}^{y^3}dy}dx$
Thanks.
It's already been advised (correctly) to change the order of integration. And my previous remarks remain totally pertinent.
Upon reversal:
$\int_{y=0}^{y=2} \int_{x=0}^{x=y^2} e^{y^3} \, dx \, dy$.
6. While drawing the region of integration is the easiest, safest way out there, sometimes using inequalities will work too. If you are more algebraically than geometrically inclined, you can manipulate the inequalities.
The first bound says $0 \leq y\leq \ln x$
The second bound says $1 \leq x \leq 3$
Remember that both $\ln x$ and $e^x$ are increasing functions.
$1 \leq x \leq 3 \Rightarrow 0 \leq \ln x \leq \ln 3$
$0 \leq \ln x \leq \ln 3,\ 0 \leq y \leq \ln x \Rightarrow 0 \leq y \leq \ln x \leq \ln 3$
Apply the exponential function to all sides (that is, raise $e$ to the power of each side).
$0 \leq y\leq \ln x \leq \ln 3 \Rightarrow 1 \leq e^y \leq x \leq 3$
So clearly $e^y \leq x \leq 3$ and $0 \leq y \leq \ln 3$.
So the bounds on $x$ are $e^y$ and $3$, while the bounds on $y$ are $0$ and $\ln 3$.
Thus $\int_1^3{\int_{0}^{\ln{x}} xdy}dx = \int_{0}^{\ln 3} \int_{e^y}^{3} x \, dx \, dy$
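A quick cross-check, added here (not part of the thread), using SymPy; the innermost integration tuple comes first:
```
import sympy as sp

x, y = sp.symbols('x y')

# First problem: both orders of integration agree.
I1 = sp.integrate(x, (y, 0, sp.log(x)), (x, 1, 3))          # dy first
I2 = sp.integrate(x, (x, sp.exp(y), 3), (y, 0, sp.log(3)))  # dx first
print(I1, I2)  # both: 9*log(3)/2 - 2

# Second problem: hopeless dy-first, easy after reversing the order.
J = sp.integrate(sp.exp(y**3), (x, 0, y**2), (y, 0, 2))
print(J)       # exp(8)/3 - 1/3
```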
7. Thanks everyone! mr fantastic, for the first one, I had those exact bounds when I was trying it (I drew it out as well), but I guess I was making an error somewhere midway in the rest of my calculations (seems to work just fine on this try).
Isomorphism, that is an awesome way of figuring out the bounds! This will definitely help with my upcoming assignment. Thank you for sharing that!
http://www.physicsforums.com/showthread.php?t=121324
## Infinite Line Charges
Hi there, this is my first post and I hope you don't mind helping me out a bit because I'm really stuck. I'm doing computer science in university and the electrotechnology part can be a bit tricky for us.
For an infinite line charge ρ_l = (10^-9)/2 C/m on the z axis, find the potential difference between points a and b at distances 2 m and 4 m respectively along the x axis.
I have a basic understanding of physics, Coulomb's Law, voltage etc... but I don't know about this infinite line charge stuff. I'm not looking for a solution, I actually have the answer here (it's 6.24 V in case you're interested) but I'd like to know how to calculate it. If this wasn't a line charge, i.e. if this was a point charge at the origin, I would be able to do it.
Thanks in advance
Edit: I noticed in the FAQ it said I should show my attempts. The thing is, I know how to finish it, just not how to begin it.
So if I were to go about this, I would get the potential of each point using this formula:
q/(4)(pi)(eps)(r). I derived that from integrating Coulomb's Law with respect to r. I'd then subtract point b from point a to get the voltage.
Coulomb's law wouldn't appear to work here though, because it's a line charge and not a point charge. This is my attempt; I hope it's not too pathetic.
I see this is being done for a computer science class. Are you attempting to calculate the potential from an infinite line of charge by summing up a large number of point charges, as you might do in a computer approximation? Or are you attempting to do an integral and find the algebraic answer? I'm guessing you are talking about the algebraic answer.
You have an algebraic answer for the potential from a point charge, correct? $$V(r) = \frac{1}{4 \pi \epsilon_0}\frac{q}{r}$$ where r is the distance from the charge to your point.
Now what you want to do is sum up an infinite number of these small charges; the force law is the same, but your distance will change as you move down the wire. If you are integrating down the z axis from 0 to infinity, try to think of a way to write a Coulomb's law expression for any arbitrary point on the wire a distance z from the origin. Then you can integrate over all the possible z's. ~Lyuokdea
Lyuokdea, thanks so much for your help! This really clears up a few things in my head. Maybe you could just give me one or two more hints though. Essentially what I'm trying to get is the potential due to the point charges from R to R+infinity and from R to R-infinity. I would imagine integration would be a good way to do this, but I can't see a way to write that. Hmmm
$$V(r) = \frac{q}{4 \pi \epsilon_0} \int^r_\infty \frac{dr}{r}$$
This is what I thought of, but it doesn't help because essentially it yields:
$$\ln\frac{r}{\infty} = \ln0$$
which is undefined. :<
Quote by exiztone Lyuokdea, thanks so much for your help! This really clears up a few things in my head. Maybe you could just give me one or two more hints though. Essentially what I'm trying to get is the potential due to the point charges from R to R+infinity and from R to R-infinity. I would imagine integration would be a good way to do this, but I can't see a way to write that. Hmmm
I can guide you through it but I first want to make sure that I am doing it at the right level.
There are three approaches possible:
a) There is a formula for the potential due to an infinite line of charge. If you have seen it in class and you are allowed to use it, the calculation is just two lines. I don't want to make you do an integral if you are allowed to use the formula!
b) At a more fundamental level, one can actually prove the formula mentioned above using calculus. Do you think this is what you are expected to do? Are you at ease with simple integrals?
c) it can also be done purely numerically as Lyuokdea mentioned.
So what is expected of you, do you think?
Patrick
Hi Patrick: I sincerely don't know what we're allowed to use. I would like to see the integral method for my own understanding, but if there is an easier way, I would probably use that in the exam (I am just trying to understand the concepts right now). As for proof, I think we just have to arrive at the correct answer after calculating it somehow. I don't think we have to worry about proving formulae, just be able to use them and understand what they do. I apologise that I'm very hazy about this. The whole class is in the same boat since we signed up for computer science and this electrotechnology is a little out of our league. We're scared of the lecturer so nobody asks questions. Also, the person who is supposed to help us with these problems wasn't able to.
Quote by exiztone Hi Patrick: I sincerely don't know what we're allowed to use. I would like to see the integral method for my own understanding, but if there is an easier way, I would probably use that in the exam (I am just trying to understand the concepts right now). As for proof, I think we just have to arrive at the correct answer after calculating it somehow. I don't think we have to worry about proving formulae, just be able to use them and understand what they do. I apologise that I'm very hazy about this. The whole class is in the same boat since we signed up for computer science and this electrotechnology is a little out of our league. We're scared of the lecturer so nobody asks questions.
Ah, yes, the scary lecturer. I also try to be as scary as possible when I teach... saves a lot of questions
There *is* a simple formula that could be used to solve in two lines. But if this was the way to go, you would have seen it in class (I assume). In any case, we'll prove it since you know calculus.
Let's say that the line of charge is on the x axis, from -infinity to plus infinity. Let's say that we want to find the potential at a point P=(0,y), that is at a distance y above the origin.
Now consider a small piece of the line of charge of length dx located somewhere to the left of the origin. It contains a small charge dq.
So, to be clear, this small dq is located at a position (-x, 0). Ok?
What is the distance r between this small dq and the point P, in terms of x and y? (That's just Pythagoras' theorem.) That will be what we will use for "r" in the integral.
Patrick, thanks, this makes a lot of sense! Would it be something like this then? $$\frac{q}{4 \pi \epsilon_0} \int^1_\infty \frac{1}{\sqrt{r^2 + n^2}}DN$$ Thanks very much for the help! I think I'm getting on the right track now :) Edit: I can't seem to get it right in Latex but that should be -infinite and +infinite for the definite integral.
Quote by exiztone Patrick, thanks, this makes a lot of sense! Would it be something like this then? $$\frac{q}{4 \pi \epsilon_0} \int^1_\infty \frac{1}{\sqrt{r^2 + n^2}}DN$$ Thanks very much for the help! I think I'm getting on the right track now :) Edit: I can't seem to get it right in Latex but that should be -infinite and +infinite for the definite integral.
Well, I am more used to seeing x and y used instead of r and n!
But you almost have it except for one thing:
where does your DN come from? (what I would call dx)
You cannot put it by hand like this with no justification.
To be exact, you should have
$$\int_{-\infty}^{\infty} { dq \over {\sqrt{x^2 + y^2}}}$$
where y is a constant kept fixed (so we could call it "r", the perpendicular distance between the line and the point P).
The only step missing is to relate dq to dx. You know how to do that?
Patrick: Now I am stuck again. I put in DN (DX) because I thought we have to integrate the formula with respect to X (since we're summing up an infinite amount of points on the X axis). I don't see where you got your DQ from though. And why would we want that as opposed to DX? Sorry for being clueless on the matter, I'm familiar with integration, but I think this is the first time I've applied it to a real situation. I don't think I know how to relate dq to dx, unless it's that substitution with U routine. Thanks again! I'm really learning from this.
Quote by exiztone Patrick: Now I am stuck again. I put in DN (DX) because I thought we have to integrate the formula with respect to X (since we're summing up an infinite amount of points on the X axis). I don't see where you got your DQ from though. And why would we want that as opposed to DX? Sorry for being clueless on the matter, I'm familiar with integration, but I think this is the first time I've applied it to a real situation. I don't think I know how to relate dq to dx, unless it's that substitution with U routine. Thanks again! I'm really learning from this.
Sorry I forgot the 1/(4 pi epsilon_0).
I slowed you down because this is the most tricky (and important) point of the derivation. And I know by experience that this is always difficult for students applying calculus to physics for the first time.
The key point is that the potential of a point charge is
q/(4 pi epsilon_0 r)
right?
Now, your small dq located at (-x,0) is a point charge, so the potential it produces at the point P (0,y) is simply
$${ 1 \over 4 \pi \epsilon_0} { dq \over {\sqrt{x^2 + y^2}} }$$
This is *it*!! Just think of dq as being a point charge q...there is nothing else to add!!
Now, the problem is that before you can integrate, you must express all the variables in terms of a single variable. What varies here is "x" (as you sum over all the small pieces of the line of charge) so you want to relate the infinitesimal dq to an infinitesimal dx.
The key point is that dq is, by definition, the small charge contained in a small piece of length dx. Therefore, $dq = \lambda dx$ where lambda is the linear charge density. Now you are ready to integrate because you have
$${1 \over 4 \pi \epsilon_0} \int_{- \infty}^{\infty} \, \lambda dx \, {1 \over {\sqrt{x^2 +y^2}} }$$
Here, y and lambda are constants. So you are all set, you just have to integrate this. we have done most of the work, which is to get from the physical situation to an integral to calculate. Now it is like pure maths, you just have to integrate this (should make you think about arctan, btw).
I hope this makes sense.
Patrick
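As a cross-check of the integral just set up (my addition, not part of the thread), SymPy returns the inverse hyperbolic sine form, which is where the $\sinh^{-1}$ appearing later in the thread comes from:
```
import sympy as sp

x = sp.symbols('x')
y = sp.symbols('y', positive=True)  # the fixed perpendicular distance

# Antiderivative of 1/sqrt(x^2 + y^2) with respect to x:
print(sp.integrate(1 / sp.sqrt(x**2 + y**2), x))
# asinh(x/y), i.e. log(x/y + sqrt(1 + x**2/y**2))
```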
Hi Patrick, Thanks for such a lengthy reply. Once again it's very strange because some of these ideas were never introduced to us in class and now I feel a little angry because of that. I'd like to ask you about two things, namely the dq and the lambda. You see, I don't understand why we have this dq when the infinite line charge is equal, there's $${10^{-9}\over 2} C/m$$ which is a constant. Also, the above equation has no Q in it so how do I insert the size of the charge? And this lambda, what do I do with it? Is there an absolute value for it? How do I get rid of it to get my potential, or does it disappear when we subtract F(b) from F(a)? Thanks for your help!
Quote by exiztone Hi Patrick, Thanks for such a lengthy reply. Once again it's very strange because some of these ideas were never introduced to us in class and now I feel a little angry because of that. I'd like to ask you about two things, namely the dq and the lambda. You see, I don't understand why we have this dq when the infinite line charge is equal, there's $${10^{-9}\over 2} C/m$$ which is a constant.
*This* is your lambda! Notice the units: coulomb per meter. What is the meaning of this quantity? Consider the following question: if you take a piece of the line of charge which is, let's say, 80 cm long. How much charge does it contain? The answer is: multiply the length by lambda! So here, you would multiply 0.80 meter with your lambda and get an answer in Coulomb, which represents the amount of charge contained in a section 0.80 m long.
So, the amount of charge contained in a small piece of length dx is equal to dx times lambda. Since this is a tiny charge, we use the symbol dq instead of q. So dq= lambda dx
And this lambda, what do I do with it? Is there an absolute value for it? How do I get rid of it to get my potential, or does it disappear when we subtract F(b) from F(a)? Thanks for your help!
It's the value you gave above. And no, it won't disappear at the end. It will be needed to get your numerical answer at the end.
Hope this makes sense.
It is kind of crazy to have you do a problem like this without having explained those concepts!
Patrick
Hi Patrick, thank you very much. All that makes perfect sense now. You are very kind to explain it all to me! I have to say, that integration at the end is rather complicated. I've heard that for integrating that form you use $$\ln\left|\frac{x + \sqrt{y^2 + x^2}}{y}\right|$$ This is very complicated to calculate, especially with infinities involved. Might you suggest an easier approach to integrate such a formula? Thanks!
Quote by exiztone Hi Patrick, thank you very much. All that makes perfect sense now. You are very kind to explain it all to me! I have to say, that integration at the end is rather complicated. I've heard that for integrating that form you use $$\ln\left|\frac{x + \sqrt{y^2 + x^2}}{y}\right|$$ This is very complicated to calculate, especially with infinities involved. Might you suggest an easier approach to integrate such a formula? Thanks!
$$\int {dx \over {\sqrt{ x^2 + y^2} }} = \int {dx \over y} {1 \over {\sqrt{ 1 + (x^2/y^2)}}} = \int { dz \over {\sqrt{ 1 + z^2}}}$$
where z is defined to be x/y (and the limits are -infinity to +infinity in all the integrals).
This last integral is a standard one. If you have a table of integral, it will surely be listed.
Patrick
My tables book doesn't have anything like that, I'm sorry. I have a special one for the Irish education system. I know this is slightly cheeky, but do you think you could perform the integration on one of the numbers (say the point charge at 2m), I'm really having trouble with it. Thanks a million!
Patrick, or anyone else who might be able to help. After many attempts I got the following: $$[\sinh^{-1} ({\frac{x}{2}})]^{+\infty}_{-\infty}$$ But I can't go on from there. I have to get rid of the infinities somehow, but I can't see how. Can you point me in the right direction? Thank you so much!
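The way past that stuck point (added here; the thread is cut off at this post): each potential V(a), V(b) diverges on its own, but the potential difference converges, because the difference of the two integrands falls off fast enough. A numerical sketch, assuming SciPy, reproducing the book's 6.24 V answer quoted earlier (the small discrepancy comes from the constants used):
```
import numpy as np
from scipy.integrate import quad

lam = 0.5e-9             # line charge density, C/m  (= 10^-9 / 2)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
a, b = 2.0, 4.0          # distances of points a and b from the line, m

# Each potential integral diverges, but the difference of integrands is integrable:
diff = lambda x: 1/np.sqrt(x**2 + a**2) - 1/np.sqrt(x**2 + b**2)
V_ab_numeric = lam / (4*np.pi*eps0) * quad(diff, -np.inf, np.inf)[0]

# Closed form: V(a) - V(b) = lambda / (2 pi eps0) * ln(b/a)
V_ab_closed = lam / (2*np.pi*eps0) * np.log(b/a)

print(V_ab_numeric, V_ab_closed)  # both ~ 6.23 V
```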
http://mathhelpforum.com/advanced-applied-math/49706-work-vector-integration.html
# Thread:
1. ## Work: Vector Integration
The force function $\bold{F} = -\bold{\hat{x}}kx - \bold{\hat{y}}ky$ is given in Cartesian coordinates.
Find the work done from (1,1) to (4,4) using the following path:
$(1,1) \to (1,4) \to (4,4)$
So, what I did was:
$\int^{4,4}_{1,1} \bold{F}\cdot ~d\bold{r}$
$= \int^{1,4}_{1,1} \bold{F}\cdot ~d\bold{r} + \int^{4,4}_{1,4} \bold{F}\cdot ~d\bold{r}$
$= \int^{1,4}_{1,1} (-kx ~dx - ky ~dy) + \int^{4,4}_{1,4} (-kx ~dx - ky ~dy)$
$= \int^1_1 (-kx ~dx) + \int^4_1 (-ky ~dy) + \int^4_1 (-kx ~dx) + \int^4_4 (-ky ~dy)$
$= -k\left(\int^1_1 x ~dx + \int^4_1 y ~dy + \int^4_1 x ~dx + \int^4_4 y ~dy\right)$
$= -k\left[\left(8 - \frac{1}{2}\right) + \left(8 - \frac{1}{2}\right)\right]$
$= -k(15) = -15k$
Is this right?
2. It seems to me that if you are heading in the x-direction, the work done will only depend on the x-component of the force, and if you are heading in the y-direction, the work done will only depend on the y-component of the force. So you can add $\int _1 ^4 F_y(y)\, dy + \int _1 ^4 F_x(x)\, dx$, where $F_y$ is the y-component of the force and $F_x$ is the x-component. Your answer looks right to me.
3. I agree, your answer is correct. I've never seen that method before, though. It seems to make sense.
4. I've never seen this method either... But it's the way my friends and I interpreted it, given this example:
The force exerted on a body is $\bold{F} = -\bold{\hat{x}}y + \bold{\hat{y}}x$. The problem is to calculate the work done going from the origin to the point (1,1):
$W = \int^{1,1}_{0,0} \bold{F} \cdot ~d\bold{r} = \int^{1,1}_{0,0} (-y ~dx + x ~dy)$
Separating the integrals, we obtain:
$W = -\int^1_0 y ~dx + \int^1_0 x ~dy$
The first integral cannot be evaluated until we specify y as a function of x (as x ranges from 0 to 1). Likewise, the second integral requires x as a function of y. Consider the first path:
$(0,0) \to (1,0) \to (1,1)$
Then:
$W = -\int^1_0 0 ~dx + \int^1_0 1 ~dy = 1$
since y = 0 along the first segment of the path and x = 1 along the second. If we select the path:
$(0,0) \to (0,1) \to (1,1)$
then the integral gives $W = -1$. For this force the work done depends on the choice of path.
That's pretty much all we had to go by. No professor, no class, just us and a book...
We did have this relation:
$W = \int \bold{F}\cdot ~d\bold{r} = \int F_x(x,y,z) ~dx + \int F_y(x,y,z) ~dy + \int F_z(x,y,z) ~dz$
Using that, the separation of the integrals follows.
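A small SymPy sketch of that relation (added here, not part of the thread; the helper function is made up for illustration). It reproduces both results: $-15k$ on either corner path for the conservative force, and $\pm 1$ for the path-dependent one:
```
import sympy as sp

t, k, x, y = sp.symbols('t k x y')

def work(F, segments):
    """Sum of integrals of F . dr over segments (x(t), y(t)), t in [0, 1]."""
    total = 0
    for xt, yt in segments:
        Fx = F[0].subs({x: xt, y: yt})
        Fy = F[1].subs({x: xt, y: yt})
        total += sp.integrate(Fx*sp.diff(xt, t) + Fy*sp.diff(yt, t), (t, 0, 1))
    return sp.simplify(total)

F1 = (-k*x, -k*y)  # conservative: same work on both corner paths
print(work(F1, [(1 + 0*t, 1 + 3*t), (1 + 3*t, 4 + 0*t)]))  # (1,1)->(1,4)->(4,4): -15k
print(work(F1, [(1 + 3*t, 1 + 0*t), (4 + 0*t, 1 + 3*t)]))  # (1,1)->(4,1)->(4,4): -15k

F2 = (-y, x)       # path-dependent, as in the book's example
print(work(F2, [(t, 0*t), (1 + 0*t, t)]))  # (0,0)->(1,0)->(1,1):  1
print(work(F2, [(0*t, t), (t, 1 + 0*t)]))  # (0,0)->(0,1)->(1,1): -1
```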
http://math.stackexchange.com/questions/48419/hensels-lemma-and-implicit-function-theorem
# Hensel's Lemma and Implicit Function Theorem
In the literature and on the web I have several times come across confused or simply cryptic assertions regarding the fact that Hensel's Lemma is the algebraic version of the Implicit Function Theorem.
I tried to make this relation explicit but I failed; here are some observations I made.
A first good property of Henselian rings, i.e. rings that satisfy Hensel's Lemma, is that their spectrum is homotopically equivalent to their closed point in the sense of Grothendieck. Precisely, if $\widehat{\pi}$ is the pro-fundamental group of a scheme as in SGA1, then $\widehat{\pi}(Spec(A)) \simeq \widehat{\pi}(Spec(k(m)))$, where $A$ is a Henselian ring and $k(m)$ is the residue field of the maximal ideal $m$ of $A$.
So I thought that spectra of Henselian rings were the kind of "small neighborhoods" in which you can write a "function" explicitly, thanks to Hensel's Lemma. But I'm confused in trying to understand what kind of functions I have to examine.
Another observation is that Henselianity is exactly the condition needed for a local ring $R$ to have no non-trivial étale coverings of $Spec(R)$ which are trivial on the closed point. Since these coverings are in correspondence with étale algebras of $R$, I examined this direction and I found that, for any field $k$, a $k$-algebra of the form $k[x]/f(x)$ is étale over $k$ if and only if $f'(x)$ is invertible in the algebra.
There is also a more complicated criterion for étale algebras over rings which uses the invertibility of the determinant of the Jacobian of a system of polynomials. This is very reminiscent of the key condition of the Implicit Function Theorem, but I don't know why.
Here I put the link for the wikipedia pages of some related concepts, such as the implicit function theorem, Henselian rings and Hensel's lemma. Moreover here you can find an article with a large introduction about Henselian rings.
Thank you in advance for your time.
-
So, is there a question? For what it's worth, I always thought Hensel's Lemma was the $p$-adic version of Newton's Method. – Gerry Myerson Jun 29 '11 at 12:56
I also think of Hensel's lemma as Newton's method, and so does the Wikipedia article. Where did you find the "assertions regarding the fact that Hensel's Lemma is the algebraic version of Implicit Function Theorem"? – ShreevatsaR Jun 29 '11 at 13:12
I see both principles as a way to go from "local" solutions to "global" solutions for an equation. I think that's where the analogy lies. – Joel Cohen Jun 29 '11 at 13:33
I'm sorry if I was not explicit enough. The question was: "how is Hensel's Lemma the algebraic analogue of the Implicit Function Theorem?", which implicitly includes: "is it true that Hensel's lemma is the algebraic analogue of the IFT?". Actually I found the explicit statement of the analogy only in some lecture notes, but they are often compared in the sense outlined by J. Cohen. So I thought there was a deeper and more precise connection between the topics. For me it is also enough to know that this connection actually does not exist. Anyway, thanks again for your time. – Giovanni De Gaetano Jun 29 '11 at 13:46
Hensel's lemma in a sufficiently general formulation using more than one variable is an algebraic version of the IFT. This is well-explained in the book H. Kurke, G. Pfister, M. Roczen, Henselsche Ringe, Deutsch. Verlag Wissenschaft. (1975), which was printed in the German Democratic Republic, obviously out of print. Unfortunately I know of no other reference. – Hagen Jun 29 '11 at 13:57
## 3 Answers
This is elaborated in various places, e.g. see Kuhlmann's paper Valuation theoretic and model theoretic aspects of local uniformization in Hauser et al., Resolution of Singularities, p. 389 ff., excerpted below. You can find full proofs in the links following the excerpt. See also Ribenboim, Equivalent forms of Hensel's lemma, Exposition. Math. 3 (1985), no. 1, 3-24.
[K2] 10.5 The multidimensional Hensel's Lemma, in Ch. 10, Hensel's Lemma, in
Draft of Franz-Viktor Kuhlmann's book on Valuation Theory.
[PZ] A. Prestel, M. Ziegler, Model-theoretic methods in the theory of topological fields.
J. Reine Angew. Math. 299(300) (1978), 318-341
-
Ok! It is the kind of answer I was looking for, I'm only a little bit worried by "It is (not all too well) known that Hensel's lemma...". Now I will try to understand completely the connection. Thank you very much! – Giovanni De Gaetano Jun 29 '11 at 14:46
Just a remark: Franz-Viktor's book covers the case of valuation domains only. I was under the impression that you are interested in the general case ... – Hagen Jun 29 '11 at 15:08
Bill Dubuque has largely answered the question, but just to be explicit:
Suppose that you have an equation $f(x,y) = 0$, which you want to solve to express $y$ as a function of $x$. (This is a typical implicit function theorem situation.)
Well, the implicit function theorem says that first, you should choose a small neighborhood of a point, say $x = 0$ to fix ideas. You should then choose a value $y_0$ of $y$ at this point, i.e. fix a solution to $f(0,y_0) = 0$; again, let's assume that we can take $y_0 = 0$. (In other words, we assume that $f(x,y)$ has no constant term, i.e. that $f(0,0) = 0$.)
Now the implicit function theorem says that we can solve for $y$ locally as a function of $x$ provided that $\dfrac{\partial f}{\partial y}(0,0) \neq 0.$ (Of course, there are other technical assumptions --- $f$ should be smooth and so on; let's ignore those, since in a moment I will take $f$ to be a polynomial.)
Now we could think about the implicit function theorem for analytic functions, and then for formally analytic functions, i.e. for formal power series.
So now the question is: given $f(x,y) = 0$, with $f$ a polynomial with no constant term, when can we find a solution $y \in \mathbb C[[x]]$ with no constant term? A sufficient condition is given by Hensel's lemma: one needs that $f'(0)$ be a unit in $\mathbb C[[x]]$ (thinking of $f$ as a polynomial in $y$ with coefficients in $\mathbb C[[x]]$, and taking the derivative in $y$). A formal power series is a unit precisely if its constant term is non-zero, so this can be rephrased as $\dfrac{\partial f}{\partial y}(0,0) \neq 0,$ which is exactly the same condition as in the implicit function theorem.
In short, Hensel's Lemma in the case of a formal power series ring is exactly the same as an implicit function theorem (for polynomial equations, say) in which one only asks for formal power series solutions.
Incidentally, the connection with Newton's Method is easy to see too:
Suppose that we are trying to solve $f(x,y) = 0$ for $y$ in terms of $x$, under the assumption that $f(0,0) = 0$ and that $\dfrac{\partial f}{\partial y}(0,0) \neq 0.$ We may assume that the latter quantity equals $1$, by rescaling $f$ if necessary, so our equation has the form $$0 = a x + y + b x^2 + c xy + d y^2 + \cdots,$$ which we can rewrite as $$y = -a x - b x^2 - c x y - d y^2 + \cdots .$$ Note that this already determines $y$ up to second order terms. Now substitute this expression for $y$ back into the right hand side, to get $$y = - a x - b x^2 - c x (- a x - b x^2 - \cdots) - d (- a x - b x^2 - \cdots)^2 + \cdots,$$ to get $y$ up to third order terms. Now substitute in again, to get $y$ up to fourth order terms, and so on.
This is just Newton's Method.
This proves Hensel's Lemma in this context. It is also easy to estimate the size of the power series coefficients for $y$ that one obtains, and to prove that $y$ has a positive radius of convergence. Thus we also establish a version of the implicit function theorem for analytic functions with the same argument.
Summary: The Implicit Function Theorem, Hensel's Lemma, and Newton's Method are all variants of the same theme.
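Here is the iteration from this answer carried out in SymPy (added here; the example $0 = x + y + y^2$, i.e. $a = 1$, $d = 1$ and the other coefficients zero, is my choice):
```
import sympy as sp

x = sp.symbols('x')
N = 6  # compute the series solution up to and including x**N

def truncate(expr, n):
    expr = sp.expand(expr)
    return sum(expr.coeff(x, k) * x**k for k in range(n + 1))

# Solve 0 = x + y + y**2 with y(0) = 0 by iterating y <- -x - y**2;
# each pass pins down at least one more order of the series.
y = sp.Integer(0)
for _ in range(N):
    y = truncate(-x - y**2, N)

print(y)  # -x - x**2 - 2*x**3 - 5*x**4 - 14*x**5 - 42*x**6 (Catalan numbers)

# Cross-check against the closed-form branch y = (-1 + sqrt(1 - 4x))/2:
print(sp.series((-1 + sp.sqrt(1 - 4*x)) / 2, x, 0, N + 1))
```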
-
Although it might not be exactly the same result, you can see both principles as a way to go from "local" solutions to "global" solutions for an equation. Here's a very rough sketch of how I think about it (in each case there are some adjustments to be made if you want a rigorous statement).
First, note that the implicit function theorem is usually formulated with a function of two variables $x$ and $y$ (it goes "for all y, there is a unique x ..."), but $y$ will be fixed throughout, so I'll omit it. Start from an "approximate" solution $x$ to your equation (in the implicit function theorem case, you take $(x_0, y_0)$ an actual solution, and then $(x_0, y)$ is an approximate solution). In each case, you want to solve $f(x+h) = 0$. Now when $h$ is "small", because $f$ is "sufficiently regular", you can write
$$f(x+h) = f(x) + f'(x)\cdot h + o(h)$$
So the rough idea (or guess) would be that $f(x+h) = 0$ has a unique solution if $f(x) + f'(x)\cdot h = 0$ has one, which would be the case if $f'(x)$ is invertible (if $f$ is multivariate, $f'(x)$ is understood as the differential, meaning it's invertible when the Jacobian is non-zero). Both theorems more or less state that it actually works.
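For the $p$-adic incarnation, the same Newton step $h = -f(x)/f'(x)$ becomes Hensel lifting of a simple root mod $p$ to a root mod $p^k$. A short sketch, added here (needs Python 3.8+ for the modular inverse via pow):
```
def hensel_lift(f, df, r, p, k):
    """Lift a simple root r of f mod p (df(r) invertible) to a root mod p**k."""
    m = p
    for _ in range(k - 1):
        m *= p
        # Newton step in modular arithmetic: r <- r - f(r)/f'(r)  (mod m)
        r = (r - f(r) * pow(df(r), -1, m)) % m
    return r

# 3 is a root of x^2 - 2 mod 7 (since 9 = 2 + 7); lift it to mod 7**4.
r = hensel_lift(lambda x: x*x - 2, lambda x: 2*x, 3, 7, 4)
print(r, (r*r - 2) % 7**4)  # the residual is 0
```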
-
http://rjlipton.wordpress.com/2012/12/04/the-amazing-zeta-code/
But can it encode itself?
Sergei Voronin was an expert in analytic number theory, especially the Riemann zeta function. He proved a sense in which the Riemann Hypothesis “ain’t necessarily so.” That is, he found complex analytic functions ${\zeta'}$ that obey functional equations similar to the one for ${\zeta}$, but where ${\zeta'}$ has greater deviations in the frequency of zeroes on the critical line than ${\zeta}$ can have if the hypothesis is true.
Today Ken and I wish to discuss an older result of his—proved in 1975—about the Riemann zeta function that seems to be amazing.
Voronin proved that the zeta function is universal in the precise sense that it encodes all possible other functions, to any desired degree of precision. This seems like a complexity-theory result more than a number-theory result to me, and also seems like it should have complexity consequences. We share this opinion by Matthew Watkins, who maintains a page on number theory and physics:
“This extraordinary 1975 result receives surprisingly little coverage…”
He goes on to say, “…it was difficult to find a clear and accurate statement anywhere on the WWW in March 2004.” Let’s turn now to discuss it in detail.
## The Zeta Function
The zeta function is of course defined for complex numbers ${s}$ with real part greater than ${1}$ by the famous equation
$\displaystyle \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$
It then can be extended to all complex numbers, except that at ${s=1}$ the function has a simple pole. This means just that the extension near ${1}$ behaves like ${\frac{1}{s-1}}$.
It has been known for several centuries that the zeta function holds the key to our understanding of the distribution of the primes. The proof that it has no zeroes at ${1 +it}$ where ${t}$ is real is equivalent to the Prime Number Theorem (PNT). This is the statement that the number of primes less than ${x}$, denoted by ${\pi(x)}$, is approximately
$\displaystyle \mathsf{Li}(x) = \int_{2}^{x} \frac{1}{\ln t}dt \approx \frac{x}{\ln x}.$
The zeta function can be used, and was used, by Leonhard Euler to give a proof that the number of primes must be infinite. This is weaker than the PNT, but a very simple argument. It was later modified and used by Gustav Dirichlet to prove that there are an infinite number of primes in any arithmetic progression
$\displaystyle a+b, a+2b, \dots, a+mb, \dots$
provided ${a}$ and ${b}$ have no common factor.
## The Amazing Property
Let ${U}$ be any compact set of complex numbers whose real parts ${x}$ satisfy ${\frac{1}{2} < x < 1}$. Because ${U}$ is closed and bounded, there exists a fixed ${r < \frac{1}{4}}$ such that ${\frac{3}{4} - r \leq x \leq \frac{3}{4} + r}$ for all ${z \in U}$ with real part ${x}$. The ${\frac{3}{4} - r}$ part can be as close as desired to the critical line ${x = \frac{1}{2}}$, but it must stay some fixed distance away.
We also need ${U}$ to have no “holes,” i.e., to be homeomorphic to a closed disk of radius ${r}$. Often ${U}$ is simply taken to be the disk of radius ${r}$ centered on the ${x}$-axis at ${\frac{3}{4}}$, but we’ll also think of a square grid. Then Voronin proved:
Theorem 1 Given any analytic function ${f(z)}$ that is non-vanishing on ${U}$, and any ${\epsilon>0}$, we can find some real value ${t}$ such that for all ${z \in U}$,
$\displaystyle |\zeta(z+it)-f(z)| < \epsilon.$
That is, by translating ${U}$ upward by a distance of ${t}$, we find a region where ${\zeta}$ produces the same values as ${f}$ does on the original ${U}$, within a tolerance of ${\epsilon}$. Moreover, as ${\epsilon \rightarrow 0}$ we can find more and more ${t}$'s for which the simulation is better and better. Watkins quotes this from Martin Gutzwiller's book Chaos in Classical and Quantum Mechanics (Springer-Verlag, 1990):
“Although the Riemann zeta-function is an analytic function with [a] deceptively simple definition, it keeps bouncing around almost randomly without settling down to some regular asymptotic pattern. The Riemann zeta-function displays the essence of chaos in quantum mechanics, analytically smooth, and yet seemingly unpredictable…
In a more intuitive language, the Riemann zeta-function is capable of fitting any arbitrary smooth function over a finite disk with arbitrary accuracy, and it does so with comparative ease, since it repeats the performance like a good actor infinitely many times on a designated set of stages.” [emphasis in original]
That is to say, even if we have a function ${g}$ that is analytic and nonzero on some compact region ${U'}$ well away from the critical strip, or even overlapping the critical line, we can map ${U'}$ conformally to some ${U}$ within the strip, and then ${\zeta}$ can approximate the resulting mapped function ${g'}$.
## A “Concrete” View of the Proof
The reason we think this is relevant to complexity theory comes from a discrete explanation of how the theorem works. Given a ${\delta > 0}$, take ${D = 2\lfloor\frac{r}{\delta}\rfloor}$. Consider ${U}$ to be the ${D \times D}$ square grid centered on ${\frac{3}{4}}$ on the ${x}$-axis. We will use the ${D^2}$-many centers ${u}$ of each grid square. Note that in the leftmost grid boxes, the real part ${x}$ of ${u}$ is displaced from ${\frac{1}{2}}$ by about ${\frac{\delta}{2}}$ in addition to the fixed distance ${\frac{1}{4} - r}$ which is independent of ${\delta}$. If we really fix ${\delta}$, then we can allow the grid to extend all the way to the critical line ${x = \frac{1}{2}}$.
We also make a discrete set ${V}$ of allowed complex values ${v = x + iy}$ such that ${x}$ and ${y}$ are integral multiples of the target approximation goal ${\epsilon}$. More precisely we set ${E = \lceil\frac{2}{\epsilon}\rceil}$. Then ${x}$ and ${y}$ may have values ${j\frac{1}{E} + \frac{1}{2E}}$ for integers ${j}$ such that ${-E^2 \leq j \leq E^2 - 1}$. Note that these values stay away from zero, this time by an amount that depends on ${\epsilon}$. To every grid center ${u}$, assign a value ${v_u}$ from ${V}$.
Now we could use polynomial interpolation over these ${D^2}$-many values to define an analytic function ${f}$ that has those values at those points of ${U}$, and apply Voronin’s theorem to ${f}$. The way it works, however, is really the reverse: Given any ${f}$ that is analytic and non-zero on ${U}$, we observe two consequences of ${U}$ being compact:
$\displaystyle \begin{array}{rcl} &&(\exists \epsilon_0 > 0)(\forall z \in U): \epsilon_0 < |f(z)| < \frac{1}{\epsilon_0},\\ &&(\forall \epsilon > 0)(\exists \delta > 0)(\forall z,z' \in U): |z - z'| < \delta \implies |f(z) - f(z')| < \epsilon. \end{array}$
Thus for sufficiently small ${\epsilon}$ we can choose ${\delta}$ to make the grid sufficiently fine to give an ${\epsilon}$-approximation of ${f}$ everywhere on it. Namely, for each box center ${u}$ take a value in ${V}$ that is nearest to ${f(u)}$. There is always a suitable ${v_u}$ whose real and imaginary parts have magnitude no more than ${E}$.
The key idea is that Euler’s product formula for ${\zeta(s)}$ enables one to do “Zeta Interpolation” on the resulting finite grid of values ${v_u}$. Well this could be the idea. The proof sketch given by Wikipedia seems to do the approximation more directly on ${f}$, using selections of primes and complex phase angles. It might be interesting to work it out as a discrete interpolation.
A little intuition is conveyed by imagining the above colored grid against some extension of Wikipedia’s colored figure on zeta-function universality. Overall this shows how discrete structures emerge out of conditions in continuous mathematics such as compactness—a hallmark of what Ron Graham, Don Knuth, and Oren Patashnik promote as “Concrete Mathematics.”
## The Zeta Code
Take any string ${w}$ over some finite alphabet ${\Sigma}$. We can choose ${\epsilon,\delta}$ in some minimal manner so that the corresponding ${D,E,V}$ satisfy ${D \geq |w|}$ and ${|V| \geq |\Sigma|}$ with values spaced far enough to allow unique decoding from any ${\epsilon}$-approximation of those values. If ${w}$ is a binary string, we can take ${\Sigma = \{0,1\}^b}$ for some block length ${b}$ that optimizes the relation between ${D}$ and ${E}$. Then we can plug in the values encoding ${w}$ as the values ${v_u}$ for our grid—actually, for just one row of grid points ${u}$. It follows that we can find ${t}$ such that the values ${\zeta(u + it)}$ encode the string ${w}$. That is, rounding those values to the closest elements of ${V}$ decodes ${t}$ to yield ${w}$.
It is amazing enough that we can do this for just one row, one-dimensionally. That we can do this in two dimensions—say encoding a larger ${w}$ as a matrix and mapping blocks inside ${w}$ to single values ${v_u}$—is even more amazing. All of this gets encoded by a single numerical value ${t}$. Let’s call this the Zeta Code.
Is the Zeta Code useful? We cannot expect ${t}$ to be a simple value—it must have Kolmogorov complexity at least as great as the ${w}$ it encodes. But this already suggests a connection to complexity theory. The code's performance is so good that the working ${t}$'s have positive limit density. That is, for any ${f}$ and ${U}$:
$\displaystyle (\forall \epsilon)(\exists \gamma)(\forall^{\infty}T)\frac{1}{T}\mu(\{t \leq T: (\forall z \in U) |\zeta(z+it) - f(z)| \leq \epsilon\}) \geq \gamma,$
where ${\mu}$ is Lebesgue measure. We can define ${\gamma = \gamma_w}$ in terms of ${w}$ alone by maximizing over ${b,\epsilon,\delta}$ satisfying the above constraints. Then ${\gamma_w}$ can be called the “zeta-density” of the string ${w}$. What might be its significance?
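A toy, one-point scan of this phenomenon (added here, not a statement of the theorem: universality concerns whole disks, and nothing guarantees how far up the line good shifts first appear; the target, window, and step are arbitrary). Assuming mpmath:
```
import mpmath as mp

target = mp.mpc(2, 1)   # an arbitrary nonzero target value
z0 = mp.mpf(3) / 4      # a point inside the critical strip

best_t, best_d = None, mp.inf
t = mp.mpf(0)
while t < 200:          # small search window, coarse step
    d = abs(mp.zeta(z0 + 1j * t) - target)
    if d < best_d:
        best_t, best_d = t, d
    t += mp.mpf('0.05')

print(best_t, best_d)   # the shift giving the closest one-point match found
```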
## Can Zeta Simulate Itself?
Another way complex analysis could be like computational complexity is self-reference. The ${\zeta}$ function is analytic away from the unique pole at ${s = 1}$. It is non-zero away from the trivial zeroes on the negative ${x}$-axis, and away from the non-trivial zeroes which the Riemann Hypothesis asserts all have real part ${\frac{1}{2}.}$
Thus Voronin’s theorem says that the ${\zeta}$-function on the critical strip can approximate the ${\zeta}$-function on disk-like regions sufficiently away from the critical line, to any degree of approximation. In that sense it is “fractal.” But can it simulate itself via the theorem within the strip, getting arbitrarily close to the critical line?
The question is equivalent to the Riemann Hypothesis itself. For suppose the hypothesis were false. Then there would be a zero ${\frac{1}{2} + e_0 + i t_0}$ for some fixed ${e_0 > 0}$. Now fix ${r}$ such that ${\frac{1}{4} - e_0 < r < \frac{1}{4}}$, and take ${U}$ of width ${r}$ and height ${t_0}$. This includes the zero, so the condition of Voronin's theorem does not apply. Thus if the condition applies for ${r}$ arbitrarily close to ${\frac{1}{4}}$, then Riemann must be true. The converse also holds: if Riemann is true then ${\zeta}$ is analytic arbitrarily close, and we can apply the theorem to produce infinitely many regions where ${\zeta}$ replicates itself within any desired precision.
This picture has been sharpened just this past summer in a paper by Johan Andersson, titled “Non universality on the critical line.” He shows that ${\zeta}$ definitely does not have analogous universal behavior on the line itself. Thus ideas of universal simulation, which we think of as Alan Turing’s province, matter directly to the Riemann Hypothesis, which Turing himself worked on.
## Open Problems
Can more be derived from this nexus of complexity and complex analysis? Can it help attack the Riemann Hypothesis itself?
I (Dick) find Voronin’s universality theorem quite surprising. I also think that we should be able to use it to prove a lower bound on the computational complexity of computing the zeta function. One of the reasons I find this result cool is it seems to be linked—in some way—to the complexity of computing the zeta function. I cannot prove this yet, but the fact that “all” functions are encoded should in principle yield a lower bound on the cost of evaluating the zeta function. Does anyone have an idea of how to make this precise?
1. December 4, 2012 5:34 pm
What does this do for physicists – can their sums over all surfaces be reduced to integrations over functionals of the Riemann zeta-function?
2. Andy D
December 4, 2012 9:32 pm
Amazing theorem.
3. fnord
December 5, 2012 8:28 am
Tiny errata: Your equation for the Zeta function uses “k” for the summation, but “n” in what follows.
• rjlipton
December 5, 2012 10:32 am
fnord,
Thanks for the catch. We try to avoid errors, misteaks do occur—i mean mistakes do occur.
dick
• December 5, 2012 9:32 pm
Thanks. Some errores are invisible.
4. December 6, 2012 12:23 am
One interpretation of the theorem is that there simply aren't that many analytic functions; otherwise $C_0$ wouldn't be enough to parametrize (i.e. $t \in R$) arbitrarily good approximations to all analytic functions.
It’s been a while since I spent any time doing complex analysis, but I seem to remember encountering other theorems which had the same flavor, i.e. analytic functions are “rare”.
5. Bo Waggoner
December 6, 2012 3:02 am
Very interesting, thanks for the post!
Here’s an initial reaction for why this feels tricky. zeta can approximate arbitrarily well the function g(x) = Chaitin’s constant (for instance). Since we know zeta is computable, it must be the case that zeta does so only on an (uncomputable) sequence of (uncomputable?) reals.
Similarly, zeta can solve any NP-Complete problem by mapping instances to the constant that encodes its certificates; but if the $t$ required to do so grows exponentially with the size of the input, then we haven't learned much about the internals of the zeta function — it's just a passive pump.
So it seems hard to understand zeta's computational power or limits without understanding its language — the sizes of its inputs and outputs. I think it'd be very cool to follow up on zeta density and think about how much "space" zeta has to encode all of these functions. I wonder what happens if we try to make a counting argument with the number of computable analytic functions to show that they just can't all fit — some functions will need to be encoded exponentially (that is, the associated sequence of $t$'s will need to grow exponentially).
But I think such a result would make finding complexity limits on zeta tougher, because it would show that the zeta function “offloads” difficult computation to its input (rather than showing that zeta itself solves hard problems).
6. December 7, 2012 4:24 pm
I am far from expert here, but I remember hearing a talk by Gauthier on universality results for various kinds of entire function. See some seemingly related details here
http://www.mfo.de/document/0807a/OWR_2008_06.pdf which suggest that universality should somehow be generic. (But perhaps only in the sense that a “generic” continuous function should be nowhere differentiable.)
http://mathoverflow.net/questions/120410?sort=newest
## why are subextensions of Galois extensions also Galois?
Generally a Galois extension is defined to be an algebraic extension that is also normal & separable. It is then shown that in the sequence of field extensions $L|M|K$, if $L|K$ is Galois then $L|M$ is. This follows since the same property is valid for separable & normal extensions individually. It also follows that $L|K$ is a Galois extension iff the set of elements of $L$ invariant under the action of $Aut_K L$ is $K$.
In Robalo Delgado's thesis on Galois Categories, referenced in nLab's Grothendieck's Galois Theory, he takes the opposite tack, and in definition 3.2.1.1 defines an algebraic extension of fields $L|K$ to be a Galois extension iff the set of elements of $L$ invariant under the action of $Aut_K L$ is $K$.
It is then shown that in the sequence of algebraic field extensions $L|M|K$, if $L|K$ is Galois then $L|M$ is. This is asserted to be an obvious deduction (and so has no details); I don't see the obviousness... can someone clarify?
In proposition 3.2.1.3 he shows that a Galois extension is normal and separable.
All this appears to be in the opposite order of the standard treatments. One reason I'm interested in his formulation, if it is correct, is that one side of the Galois correspondence follows easily from this.
Disclaimer: I've already asked this question on math.stackexchange, but the answers there revolved around characterising Galois extensions as being normal & separable, and then showing this property follows.
-
I can't see how the question wasn't answered in the Stackexchange thread. Try reading it again, and consulting your Galois theory textbook? – Ketil Tveiten Jan 31 at 14:44
The question as I read it seems really not answered on math.SE, possiblty because it was misunderstood. OP does not seem to ask how one can proof this at all but rather: Suppose we define an extension to be Galois if the field fixed under Aut_K(L) is K. Is there then an 'obvious' reason that for an intermideate field M also the extension L over M is Galois. [I am not sure this is an appriate question ATM; an have no time to decide, but in any case I feel the question is partly misunderstood.] – quid Jan 31 at 17:45
This is an interesting question, which should be read more carefully (especially by the ones who down/close voted it because it seems to be elementary). Here a reformulation: If $L/M/K$ are algebraic field extensions, and $K = L^{\mathrm{Aut}_K(L)}$, how can we prove directly that $M = L^{\mathrm{Aut}_M(L)}$? – Martin Brandenburg Jan 31 at 18:22
@Martin & Mozibur. (Assuming $G=\mathop{\rm Aut}_K(L)$ finite). It is obvious that $M$ is contained in $L^{\mathop{\rm Aut}_M(L)}$, what needs to be shown is the other inclusion. Or: if $x\not\in M$, there exists $g\in\mathop{\rm Aut}_M(L)$ such that $g(x)\neq x$. Now, all conjugates of $x$ over $M$ belong to $L$, and one of them, say $y$, is distinct from $x$. This gives an $M$-linear morphism $M[x]\to L$ such that $x\mapsto y$. Going on, this morphism can be extended to a $M$-linear morphism from $L$ to $L$, which is then an element $g$ of $G$ such that $g(x)=y$. – ACL Feb 1 at 12:02
@ACL: You use that $L/M$ is normal and separable. – Martin Brandenburg Feb 1 at 15:52
## 2 Answers
This proof might not count as direct. It is for finite extensions.
Lemma: $L/K$, a finite extension, is Galois if and only if $L \otimes_K L$ is a product of copies of $L$.
Proof: If $L\otimes_K L$ is a product of copies of $L$, and $s$ is fixed by every automorphism, then $s \otimes 1 - 1 \otimes s$ is zero in $L \otimes_K L$, so $s \in K$ (via the explicit description of tensor products of vector spaces.)
If there are a lot of automorphisms, then each automorphism gives a different surjective map $L \otimes_K L \to L$, so we get a surjective map from $L \otimes_K L$ to the product of $[L:K]$ copies of $L$, which must be an isomorphism by dimension-counting.
Then $L \otimes_M L$ is a quotient of $L \otimes_K L$, so is a product of finitely many copies of $L$.
I believe one can extend this to all algebraic extensions via a slightly more complicated argument. But that might just be pointless, as one could argue that my condition is just a clever way of saying "normal and separable" in different language.
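To make the lemma concrete (added here, not part of the answer): for $L = K[x]/(f)$ one has $L \otimes_K L \cong L[x]/(f)$, so the condition is just that $f$ splits into distinct linear factors over $L$. SymPy can check this by factoring over the extension:
```
import sympy as sp

x = sp.symbols('x')

# Galois: x^2 - 2 splits over Q(sqrt(2)), so L tensor L is L x L.
print(sp.factor(x**2 - 2, extension=sp.sqrt(2)))
# (x - sqrt(2))*(x + sqrt(2))

# Not Galois: x^3 - 2 does not split over Q(2^(1/3)); a quadratic factor remains.
print(sp.factor(x**3 - 2, extension=sp.cbrt(2)))
```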
-
Dear Will, when you say "$L \otimes_K L$ is a product of copies of $L$", by product you mean for example $L \times L$? Thanks. – BenjaLim Feb 1 at 3:23
Yes. Obviously it being a tensor product does not say much. – Will Sawin Feb 1 at 3:31
Is it obvious that if there are "enough" automorphisms then there must be $[L:K]$ of them? – Eric Wofsey Feb 1 at 14:23
I can't think of an easy proof. There is a proof that is standard in Galois theory, but I think that just makes this the regular proof in new clothing. – Will Sawin Feb 1 at 16:41
Edited (in view of new comments to this answer and the original question): I believe that Delgado missed the point that $M=Fix(Aut_M(L))$ isn't a formal consequence of $K=Fix(Aut_K(L))$ for algebraic extensions $K\subseteq M\subseteq L$. I took a closer look into the (master's?) thesis. It doesn't claim to contain anything new: "This work is a journey through the main ideas and sucessive [sic] generalizations of Galois Theory, towards the origins of Grothendieck’s theory of Dessins d’Enfants ...", as the author puts it in his abstract.
The chapter on Galois theory just repeats well-known textbook material, mostly without proofs. Considering the verbose character of this chapter, I'm sure the author would have said more than "We immediately conclude that ..." if there had been a novel aspect. To me it appears that he simply missed an essential aspect of Galois theory.
At any rate, from $K=Fix(Aut_K(L))$ alone we cannot conclude much; one somehow has to use the fact that $L/K$ is algebraic too, as the following example shows: if $L=K(x)$ for a transcendental $x$, and if $K$ is infinite, then $K$ is the fixed field of $Aut_K(L)$, but for most rational functions $r(x)$ the extension $L/K(r(x))$ isn't Galois in either sense.
So if we want to show that $M=Fix(Aut_M(L))$ for an algebraic extension $L/K$ with $K=Fix(Aut_K(L))$, then I believe that one is automatically led to the usual kind of arguments, which by the way are also listed in this thesis.
-
Once again, this doesn't answer the question, and probably Mozibur knows all that (see also the math.SE discussion). Mozibur has asked if there is a proof which avoids the usual characterization of Galois extensions as well as the main theorem of Galois theory, because Robalo Delgado's thesis indicates that this is possible. And even for finite extensions this is an interesting question. – Martin Brandenburg Jan 31 at 19:35
You show that $a$ is the unique root of $f$, but this only implies that $f(x)=x-a$ if we already knew that $L/M$ is separable. You also use in the proof that $L/M$ is normal. Therefore, again this is just the proof (reducing to the statements for normal and separable) which Mozibur wants to avoid. Of course, I don't claim that this is possible at all, but it would be interesting. – Martin Brandenburg Feb 1 at 0:14
My comments refer to older versions of the answer. – Martin Brandenburg Feb 1 at 10:01
@Mueller: I'm beginning to suspect that Delgado is wrong in his claim, particularly the 'immediacy' of the deduction... – Mozibur Ullah Feb 1 at 13:41
http://scientopia.org/blogs/goodmath/tag/crackpottery/
## The Gravitational Force of Rubbish
Imagine, for just a moment, that you were one of a group of scientists who had made the most important, the most profound, the most utterly amazing scientific discovery of all time. Where would you publish it?
Maybe Nature? Science? Or maybe you'd prefer to go open-access, and go with PLOS ONE? Or more mainstream, and send a press release to the NYT?
Well, in the case of today's crackpots, they bypassed all of those boring journals. They couldn't be bothered with a pompous rag like the Times. No, they went for the really serious press: America Now with Leeza Gibbons.
What did they go to this amazing media outlet to announce? The most amazing scientific discovery of all time: gravity is an illusion! There's no gravity. In fact, not only is there no gravity, but all of that quantum physics stuff? It's utter rubbish. You don't need any of that complicated stuff! No - you need only one thing: the solar wind.
A new theory on the forces that control planetary orbit refutes the 400-year old assumptions currently held by the scientific community. Scientific and engineering experts Gerhard and Kevin Neumaier have established a relationship between solar winds and a quantized order in both the position and velocity of the solar system's planets, and movement at an atomic level, with both governed by the same set of physics.
The observations made bring into question the Big Bang Theory, the concept of black holes, gravitational waves and gravitons. The Neumaiers' paper, More Than Gravity, is available for review at MoreThanGravity.com
Pretty damned impressive, huh? So let's follow their instructions, and go over to their website.
Ever since humankind discovered that the Earth and the planets revolved around the Sun, there was a question about what force was responsible for this. Since the days of Newton, science has held onto the notion that an invisible force, which we have never been able to detect, controls planetary motion. There are complicated theories about black holes that have never been seen, densities of planets that have never been measured, and subatomic particles that have never been detected.
However, it is simpler than all of that and right in front of us. The Sun and the solar wind are the most powerful forces in our solar system. They are physically moving the planets. In fact, the solar wind spins outward in a spiral at over a million miles per hour that controls the velocity and distances that planets revolve around the Sun. The Sun via the solar wind quantizes the orbits of the planets – their position and speed.
The solar wind also leads to the natural log and other phenomenon from the very large scale down to the atomic level. This is clearly a different idea than the current view that has been held for over 400 years. We have been working on this for close 50 years and thanks to satellite explorations of space have data that just was not available when theories long ago were developed. We think that we have many of the pieces but there are certainly many more to be found. We set this up as a web site, rather as some authoritative book so that there would be plenty of opportunity for dialog. The name for this web site, www.MorethanGravity.com was chosen because we believe there is far more to this subject than is commonly understood. Whether you are a scientific expert in your field or just have a general interest in how our solar system works, we appreciate your comments.
See, it's all about the solar wind. There's no such thing as gravity - that's just nonsense. The sun produces the solar wind, which does absolutely everything. The wind comes out of the sun and spirals outward. That spiral motion has eddies in it at quantized intervals, and that's where the planets are. Amazing, huh?
Remember my mantra: the worst math is no math. This is a beautiful demonstration of that.
Of course... why does the solar wind move in a spiral? Everything we know says that in the absence of a force, things move in a straight line. It can't be spiraling because of gravity, because there is no gravity. So why does it spiral? Our brilliant authors don't bother to say. What makes it spiral, instead of just move straight? Mathematically, spiral motion is very complicated. It requires a centripetal force which is smaller than the force that would produce an orbit. Where's that force in this framework? There isn't any. They just say that that's how the solar wind works, period. There are many possible spirals, with different radial velocities - which one does the solar wind follow according to this, and why? Again, no answer from the authors.
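To make that concrete (this is my gloss, using nothing beyond freshman mechanics): in polar coordinates, the radial acceleration of a moving body is

$$a_r = \ddot{r} - r\dot{\theta}^2$$

A circular orbit has $\ddot{r} = 0$, so it requires an inward force of $m r \dot{\theta}^2 = m v^2/r$. An outward spiral has $\ddot{r} > 0$, so it requires an inward force smaller than $m v^2/r$ - but still an inward force. And with no net force at all, the trajectory is a straight line, per Newton's first law - not a spiral. So a spiraling solar wind needs exactly the kind of attractive central force that the authors insist doesn't exist.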
Or... why is the sun producing the solar wind at all? According to those old, stupid theories that this work of brilliance supersedes, the sun produces a solar wind because it's fusing hydrogen atoms into helium. That's happening because gravity is causing the atoms of the sun to be compressed together until they fuse. Without gravity, why is fusion happening at all? And given that it's happening, why does the sun not just explode into a supernova? We know, from direct observation, that the energy produced by fusion creates an outward force. But gravity can't be holding the sun together - so why is the sun there at all? Still, no answers.
They do, eventually, do some math. One of the big "results" of this hypothesis is about the "quantization" of the orbits of planets around the sun. They were able to develop a simple equation which predicts the locations where planets could exist in their "solar wind" system.
Let’s start with the distance between the planets and the Sun. We guessed that if the solar system was like an atom, that planetary distance would be quantized. This is to say that we thought that the planets would have definite positions and that they would be either in the position or it would be empty. In a mathematical sense, this would be represented by a numerical integer ordering (0,1,2,3,…). If the first planet, Mercury was in the 0 orbital, how would the rest of the planets line up? Amazingly well we found.
If we predict the distance from the surface of the Sun to each planet in this quantized approach, the results are astounding. If D equals the mean distance to the surface of the Sun, and d0 as the distance to Mercury, we can describe the relationship that orders the planets mathematically as:

$$D = d_0 \cdot S^n$$
Each planetary position can be predicted from this equation in a simple calculation as we increase the integer (or planet number) n. S is the solar factor, which equals 1.387. The solar factor is found in the differential rotation of the Sun and the profile of the solar wind which we will discuss later.
Similar to the quantized orbits that exist within an atom, the planetary bodies are either there or not. Mercury is in the zero orbital. The next orbital is missing a planet. The second, third, and fourth orbitals are occupied by Venus, Earth, and Mars respectively. The fifth orbital is missing. The sixth orbital is filled with Ceres. Ceres is described as either the largest of all asteroids or a minor planet (with a diameter a little less than half that of Pluto), depending on who describes it. Ceres was discovered in 1801 as astronomers searched for the missing planets that the Titius-Bode Law predicted would exist.
So. What they found was an exponential equation which produces very approximate versions of the sizes of the first 8 planets' orbits, as well as a couple of missing ones.
This is, in its way, interesting. Not because they found anything, but rather because they think that this is somehow profound.
We've got 8 data points (or 9, counting the asteroid belt). More precisely, we have 9 ranges, because all of the orbits are elliptical, but the authors of this junk are producing a single number for the size of each orbit, and they can declare success if their number falls anywhere within the range from perihelion to aphelion in each of the orbits.
It would be shocking if there weren't any number of simple equations that described exactly the 9 data points of the planets' orbits.
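In fact, you can rediscover their "solar factor" yourself with a couple of lines of curve fitting. Here's a quick sketch of how (the code and the data in it are mine, not theirs: the distances are rounded textbook values in AU, and the orbital numbers 8, 10, and 12 for Jupiter, Saturn, and Uranus are my guess at how the pattern continues, since the quoted text only assigns numbers through Ceres):

```python
# Fit the exponential form d0 * S**n to rough mean orbital distances.
import numpy as np

n = np.array([0, 2, 3, 4, 6, 8, 10, 12])     # assigned "orbital numbers"
au = np.array([0.39, 0.72, 1.00, 1.52,       # Mercury, Venus, Earth, Mars
               2.77, 5.20, 9.54, 19.19])     # Ceres, Jupiter, Saturn, Uranus

# Least-squares fit of log(d) = log(d0) + n*log(S):
slope, intercept = np.polyfit(n, np.log(au), 1)
S, d0 = np.exp(slope), np.exp(intercept)
print(f"S = {S:.3f}, d0 = {d0:.2f} AU")      # S comes out around 1.38

for k, d in zip(n, au):
    print(f"n={k:2d}  fit: {d0 * S**k:6.2f}  actual: {d:6.2f}")
```

Two free parameters, plus the freedom to declare any integer with no planet at it "an empty orbital", fitted against eight fuzzy targets: getting agreement this way is about as surprising as a stopped clock being right twice a day.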
But they couldn't even make that work directly. They only manage to get a partial hit - getting an equation that hits the right points, but which also generates a bunch of misses. There's nothing remotely impressive about that.
From there, they move on to the strawmen. For example, they claim that their "solar wind" hypothesis explains why the planets all orbit in the same direction on the same plane. According to them, if orbits were really gravitational, then planets would orbit in random directions on random planes around the sun. But their theory is better than gravity, because it says why the planets are in the same plane, and why they're all orbiting in the same direction.
The thing is, this is a really stupid argument. Why are the planets in the same plane, orbiting in the same direction? Because the solar system was formed out of a rotating gas cloud. There's a really good, solid, well-supported explanation of why the planets exist, and why they orbit the sun the way they do. Gravity doesn't explain all of it, but gravity is a key piece of it.
What they don't seem to understand is how amazingly powerful the theory of gravity is as a predictive tool. We've sent probes to the outer edges of the solar system. To do that, we didn't just aim a rocket towards Jupiter and fire it off. We've done things like the Cassini probe, where we launched a rocket towards Venus. It used the gravitational field of Venus twice to accelerate it with a double-slingshot maneuver, sent it back towards Earth, and used the Earth's gravity to slingshot it again, giving it the speed it needed to get out to Jupiter and on to Saturn.
This wasn't a simple thing to do. It required an extremely deep understanding of gravity, with extremely accurate predictions of exactly how gravity behaves.
How do our brilliant authors answer this? By handwaving. The extent of their response is:
Gravitational theory works for things like space travel because it empirically measures the force of a planet, rather than predicting it.
That's a pathetic handwave, and it's not even close to true. The gravitational slingshot is a perfect answer to it. A slingshot doesn't just use some "empirically measured" force of a planet. It's a very precise prediction of what the forces will be at different distances, how that force will vary, and what effects that force will have.
They do a whole lot more handwaving of very much the same order. Pure rubbish.
## A Bad Mathematical Refutation of Atheism
At some point a few months ago, someone (sadly I lost their name and email) sent me a link to yet another Cantor crank. At the time, I didn't feel like writing another Cantor crankery post, so I put it aside. Now, having lost it, I was using Google to try to find the crank in question. I didn't, but I found something really quite remarkably idiotic.
(As a quick side-comment, my queue of bad-math-crankery is, sadly, empty. If you've got any links to something yummy, please shoot it to me at markcc@gmail.com.)
The item in question is this beauty. It's short, so I'll quote the whole beast.
MYTH: Cantor's Set Theorem disproves divine omniscience
God is omniscient in the sense that He knows all that is not impossible to know. God knows Himself, He knows and does, knows every creature ideally, knows evil, knows changing things, and knows all possibilites. His knowledge allows free will.
Cantor's set theorem is often used to argue against the possibility of divine omniscience and therefore against the existence of God. It can be stated:
1. If God exists, then God is omniscient.
2. If God is omniscient, then, by definition, God knows the set of all truths.
3. If Cantor's theorem is true, then there is no set of all truths.
4. But Cantor’s theorem is true.
5. Therefore, God does not exist.
However, this argument is false. The non-existence of a set of all truths does not entail that it is impossible for God to know all truths. The consistency of a plausible theistic position can be established relative to a widely accepted understanding of the standard model of Cantorian set theorem. The metaphysical Cantorian premises imply that Cantor’s theorem is inapplicable to the things that God knows. A set of all truths, if it exists, must be non-Cantorian.
The attempted disproof of God’s omniscience is, from a meta-mathematical standpoint, is inadequate to the extent that it doesn't explain well-known mathematical contexts in which Cantor’s theorem is invalid. The "disproof" doesn't acknowledge standard meta-mathematical conceptions that can analogically be used to establish the relative consistency of certain theistic positions. The metaphysical assertions concerning a set of all truths in the atheistic argument above imply that Cantor’s theorem is inapplicable to a set of all truths.
This is an absolute masterwork of crankery! It's a remarkably silly argument on so many levels.
1. The first problem is just figuring out what the heck he's talking about! When you say "Cantor's theorem", what I think of is one of Cantor's actual theorems: "For any set S, the powerset of S is larger than S." But that is clearly not what he's referring to. I did a bit of searching to make sure that this wasn't my error, but I can't find anything else called Cantor's theorem. (For the record, I'll restate that actual theorem with its two-line proof right after this list.)
2. So what the heck does he mean by "Cantor's set theorem"? From his text, it appears to be a statement something like: "there is no set of all truths". The closest actual mathematical statement that I can come up with to match that is Gödel's incompleteness theorem. If that's what he means, then he's messed it up pretty badly. The closest I can come to stating incompleteness informally is: "In any formal mathematical system that's powerful enough to express Peano arithmetic, there will be statements that are true, but which cannot be proven". It's long, complex, not particularly intuitive, and it's still not a particularly good statement of incompleteness.
Incompleteness is a difficult concept, and as I've written about before, it's almost impossible to state incompleteness in an informal way. When you try to do that, it's inevitable that you're going to miss some of its subtleties. When you try to take an informal statement of incompleteness, and reason from it, the results are pretty much guaranteed to be garbage - as he's done. He's using a mis-statement of incompleteness, and trying to reason from it. It doesn't matter what he says: he's trying to show how "Cantor's set theorem" doesn't disprove his notion of theism. Whether it does or not doesn't matter: for any statement X, no matter what X is, you can't prove that "Cantor's set theorem", or Gödel's incompleteness theorem, or anything else disproves X if you're arguing against something that isn't X.
3. Ignoring his mis-identification of the supposed theorem, the way that he stated it is actually meaningless. When we talk about sets, we're using the word set in the sense of either ZFC or NBG set theory. Mathematical set theory defines what a set is, using first order predicate logic. His version of "Cantor's set theorem" talks about a set which cannot be a set!
He wants to create a set of truths. In set theory terms, that's something you'd define with the axiom of specification: you'd use a predicate ranging over your objects to select the ones in the set. What's your predicate? Truth. At best, that's going to be a second-order predicate. You can't form sets using second-order predicates! The entire idea of "the set of truths" isn't something that can be expressed in set theory.
4. Let's ignore the problems with his "Cantor's theorem" for the moment. Let's pretend that the "set of all truths" was well-defined and meaningful. How does his argument stand up? It doesn't: it's a terrible argument. It's ultimately nothing more than "Because I say so!" hidden behind a collection of impressive-sounding words. The argument, ultimately, is that the set of all truths as understood in set theory isn't the same thing as the set of all truths in theology (because he says that they're different), therefore you can't use a statement about the set of all truths from set theory to talk about the set of all truths in theology.
5. I've saved what I think is the worst for last. The entire thing is a strawman. As a religious science blogger, I get almost as much mail from atheists trying to convince me that my religion is wrong as I do from Christians trying to convert me. After doing this blogging thing for six years, I'm pretty sure that I've been pestered with every argument, both pro- and anti-theistic that you'll find anywhere. But I've never actually seen this argument used anywhere except in articles like this one, which purport to show why it's wrong. The entire argument being refuted is a total fake: no one actually argues that you should be an atheist using this piece of crap. It only exists in the minds of crusading religious folk who prop it up and then knock it down to show how smart they supposedly are, and how stupid the dirty rotten atheists are.
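As promised in point 1, here is Cantor's actual theorem, with its two-line proof, just for the record. For any set $S$ and any function $f : S \to \mathcal{P}(S)$, let

$$D = \{\, x \in S \mid x \notin f(x) \,\}$$

If $D = f(y)$ for some $y \in S$, then $y \in D \iff y \notin f(y) = D$, a contradiction. So $D$ is not in the image of $f$, no map from $S$ to its powerset is surjective, and the powerset is strictly larger. Notice that nothing in the statement or the proof so much as mentions "truths".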
## Audiophiles and the Need to be Special
I love laughing at audiophiles.
If you're not familiar with the term, audiophiles are people who are really into top-end audio equipment. In itself, that's fine. But there's a very active and vocal subset of the audiophile community that's built up their self-image around the idea that they're special. They don't just have better audio equipment than you do, but they have better appreciation of sound quality than you do. In fact, their hearing is better than yours. They can hear nuances in sound quality that you can't, because they're so very, very special. They've developed this ability, you see, because they care more about music than you do.
It's a very human thing. We all really want to be special. And when there's something that's really important to us - like music is for many people - there's a very natural desire to want to be able to appreciate it on a deep level, a special level reserved only for people who really value it. But what happens when you take that desire, and convince yourself that it's not just a desire? You wind up turning into a sucker who's easy to fleece for huge quantities of money on useless equipment that can't possibly work.
I first learned about these people from my old friend John Vlissides. John died of brain cancer about 5 years ago, which was incredibly sad. But back in the day, when we both worked at IBM Research, he and I were part of a group that ate lunch together every day. John was a reformed audiophile, and used to love talking about the crazy stuff he used to do.
Audiophiles get really nutty about things like cables. For example, John used to have the cables linking his speakers to his amp suspended from the ceiling using non-conductive cord. The idea behind that is that electrical signals are carried, primarily, on the outer surface of the wire. If the cable was sitting on the ground, it would deform slightly, and that would degrade the signal. Now, of course, there's no perceptible difference, but a dedicated audiophile can convince themselves that they can hear it. In fact, this is what convinced John that it was all craziness: he was trained as an electrical engineer, and he sat down and worked out how much the signal should change as a result of the deformation of the copper wire-core, and seeing the real numbers, realized that there was no way in hell that he was actually hearing that tiny difference. Right there, that's an example of the math aspect of this silliness: when you actually do the math, and see what's going on, even when there's a plausible explanation, the real magnitude of the supposed effect is so small that there's absolutely no way that it's perceptible. In the case of wire deformation, the magnitude of the effect on the sound produced by the signal carried by the wire is so small that it's essentially zero - we're talking about something smaller than the deformation of the sound waves caused by the motion of a mosquito's wings somewhere in the room.
John's epiphany was something like 20 years ago. But the crazy part of the audiophile community hasn't changed. I encountered two instances of it this week that reminded me of this silliness and inspired me to write this post. One was purely accidental: I just noticed it while going about my business. The other, I noticed on boing-boing because the first example was already in my mind.
First, I was looking for an HDMI video cable for my TV. At the moment, we've got both an AppleTV and a cable box hooked up to our TV set. We recently found out that under our cable contract, we could get a free upgrade of the cable box, and the new box has HDMI output - so we'd need a new cable to use it.
HDMI is a relatively new standard video cable for carrying digital signals. Instead of old-fashioned analog signals that emulate the signal received by a good-old TV antenna like we used to use, HDMI uses a digital stream for both audio and video. Compared to old-fashioned analog, the quality of both audio and video on a TV using HDMI is dramatically improved. Analog signals were designed way, way back in the '50s and '60s for the televisions that they were producing then - they're very low fidelity signals, which are designed to produce images on old TVs, which had exceedingly low resolution by modern standards.
The other really great thing about a digital system like HDMI is that digital signals don't degrade. A digital system takes a signal, and reduces it to a series of bits - signals that can be interpreted as 1s and 0s. That series of bits is divided into bundles called packets. Each packet is transmitted with a checksum - an additional number that allows the receiver to check that it received the packet correctly. So for a given packet of information, you've either received it correctly, or you didn't. If you didn't, you request the sender to re-send it. So you either got it, or you didn't. There's no in-between. In terms of video quality, what that means is that the cable really doesn't matter very much. It's either getting the signal there, or it isn't. If the cable is really terrible, then it just won't work - you'll get gaps in the signal where the bad packets dropped out - which will produce a gap in the audio or video.
In analog systems, you can have a lot of fuzz. The amplitude of the signal at any time is the signal - so noise effects that change the amplitude are changing the signal. There's a very real possibility that interference will create real changes in the signal, and that those changes will produce a perceptible result when the signal is turned into sound or video. For example, if you listen to AM radio during a thunderstorm, you'll hear a burst of noise whenever there's a bolt of lightning nearby.
But digital systems like HDMI don't have varying degrees of degradation. Because the signal is reduced to 1s and 0s - if you change the amplitude of a 1, it's still pretty much going to look like a one. And if the noise is severe enough to make a 1 look like a 0, the error will be detected because the checksum will be wrong. There's no gradual degradation.
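To see how all-or-nothing that is, here's a toy sketch of the check-and-reject scheme described above. To be clear: this is an illustration of the general idea only, not the actual HDMI protocol, and the function names are mine.

```python
def checksum(packet: bytes) -> int:
    """A simple additive checksum over the packet's bytes (mod 256)."""
    return sum(packet) % 256

def receive(packet: bytes, expected: int):
    """Accept the packet only if its checksum matches; a real protocol
    reacts to a mismatch by requesting a re-send, never by playing fuzz."""
    return packet if checksum(packet) == expected else None

original = bytes([0b10110010, 0b01101001])
print(receive(original, checksum(original)))    # intact -> packet accepted

corrupted = bytes([original[0] ^ 0b00000100,    # noise flips a single bit
                   original[1]])
print(receive(corrupted, checksum(original)))   # mismatch -> None: re-send
```

Either the bits arrive exactly, or the packet is rejected. There's no knob for a more expensive cable to turn.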
But audiophiles... ah, audiophiles.
I was looking at these cables. A basic six-foot-long HDMI cable sells for between 15 and 25 dollars. But on the Best Buy website, there's a clearance cable for just $12. Great! And right next to it, there's another cable. Also six feet long. For $240! Twenty times the price, for a friggin' digital cable! I've heard, on various websites, the rants about these crazies, but I hadn't actually paid any attention. But now, I got to see it for myself, and I just about fell out of my chair laughing.
To prolong the entertainment, I went and looked at the reviews of this oh-so-amazing cable.
People who say there is NO difference between HDMI cables are just trying to justify to themselves to go cheap. Now it does depend on what you are connecting the cable between. If you put this Carbon HDMI on a Cable or Satellite box, you probably won't see that much of a difference compared to some middle grade cables.
I connected this cable from my PS3 to my Samsung to first test it, then to my receiver. It was a nice upgrade from my previous Cinnamon cable, which is already a great cable in it's own right. The picture's motion was a bit smoother with gaming and faster action. I also noticed that film grain looked a little cleaner, not sure why though.
The biggest upgrade was with my audio though. Everything sounded a little crisper with more detail. I also noticed that the sound fields were more distinct. Again not sure exactly why, but I will take the upgrade.
All and all if you want the best quality, go Audio Quest and specifically a Carbon HDMI. You never have to upgrade your HDMI again with one of these guys. Downfall though is that it is a little pricey.
What's great about it: Smooth motion and a little more definition in the picture
What's not so great: Price
It's a digital cable. The signal that it delivers to your TV and stereo is not the slightest bit different from the signal delivered by the $12 clearance cable. It's been reduced by the signal producing system to a string of 1s and 0s - the identical string of 1s and 0s on both cables - and that string of bits is getting interpreted by exactly the same equipment on the receiver, producing exactly the same audio and video. There's no difference. It has nothing to do with how good your ears are, or how perceptive you are. There is no difference.
But that's nothing. The same brand sells a $700 cable. From the reviews:
I really just bought 3 of these. So if you would like an honest review, here it is. Compared to other Audio Quest cables, like the Vodka, you do not see a difference unless you know what to look for and have the equipment that can actually show the difference. Everyone can see the difference in a standard HDMI to an HDMI with Silver in it if you compare, but the difference between higher level cables is more subtle. Audio is the night and day difference with these cables. My bluray has 2 HDMI outs and I put one directly to the TV and one to my processor. My cable box also goes directly to my TV and I use Optical out of the TV because broadcast audio is aweful. The DBS systems keeps the cable ready for anything and I can tell that my audio is clean instantly and my picture is always flawless. They are not cheap cables, they are 100% needed if you want the best quality. I am considering stepping up to Diamond cables for my theater room when I update it. Hope this helps!
And they even have a "professional quality" HDMI cable that sells for well over $1000. And the audiophiles are all going crazy, swearing that it really makes a difference.
Around the time I started writing this, I also saw a post on BoingBoing about another audiophile fraud. See, when you're dealing with this breed of twit who's so convinced of their own great superiority, you can sell them almost anything if you can cobble together a pseudoscientific explanation for why it will make things sound better.
This post talks about a very similar shtick to the superexpensive cable: it's a magic box which... well, let's let the manufacturer explain.
The Blackbody ambient field conditioner enhances audio playback quality by modifying the interaction of your gear’s circuitry with the ambient electromagnetic field. The Blackbody eliminates sonic smearing of high frequencies and lowers the noise floor, thus clarifying the stereo image.
This thing is particularly fascinating because it doesn't even pretend to hook in to your audio system. You just position it close to your system, and it magically knows what equipment it's close to and "harmonizes" everything. It's just... magic! But if you're really special, you'll be able to tell that it works!
## Hydrinos: Impressive Free Energy Crackpottery
Back when I wrote about the whole negative energy rubbish, a reader wrote to me, and asked me to write something about hydrinos.
For those who are lucky enough not to know about them, hydrinos are part of another free energy scam. In this case, a medical doctor named Randell Mills claims to have discovered that hydrogen atoms can have multiple states beyond the typical, familiar ground state of hydrogen. Under the right conditions, so claims Dr. Mills, the electron shell around a hydrogen atom will compact into a tighter orbit, releasing a burst of energy in the process. And, in fact, it's (supposedly) really, really easy to make hydrogen turn into hydrinos - if you let a bunch of hydrogen atoms bump in to a bunch of Argon atoms, then presto! some of the hydrogen will shrink into hydrino form, and give you a bunch of energy.
Wonderful, right? Just let a bunch of gas bounce around in a balloon, and out comes energy!
Oh, but it's better than that. There are multiple hydrino forms: you can just keep compressing and compressing the hydrogen atom, pushing out more and more energy each time. The more you compress it, the more energy you get - and you don't really need to compress it. You just bump it up against another atom, and poof! energy.
To explain all of this, Dr. Mills further claims to have invented a new form of quantum mechanics, called the "grand unified theory of classical quantum mechanics" (CQM for short), which provides the unification between relativity and quantum mechanics that people have been looking for. And, even better, CQM is fully deterministic - all of that ugly probabilistic stuff from quantum mechanics goes away!
The problem is, it doesn't work. None of it.
What makes hydrinos interesting as a piece of crankery is that there's a lot more depth to it than to most crap. Dr. Mills hasn't just handwaved that these hydrino things exist - he's got a very elaborate, detailed theory - with a lot of non-trivial math - to back it up. Alas, the math is garbage, but its garbageness isn't obvious. To see the problems, we'll need to get deeper into math than we usually do.
Here is an example of how hydrino supporters explain them:
In 1986 Randell Mills MD developed a theory that hydrogen atoms could shrink, and release lots of energy in the process. He called the resultant entity a "Hydrino" (little Hydrogen), and started a company called Blacklight Power, Inc. to commercialize his process. He published his theory in a book he wrote, which is available in PDF format on his website. Unfortunately, the book contains so much mathematics that many people won't bother with it. On this page I will try to present the energy related aspect of his theory in language that I hope will be accessible to many.
According to Dr. Mills, when a hydrogen atom collides with certain other atoms or ions, it can sometimes transfer a quantity of energy to the other atom, and shrink at the same time, becoming a Hydrino in the process. The atom that it collided with is called the "catalyst", because it helps the Hydrino shrink. Once a Hydrino has formed, it can shrink even further through collisions with other catalyst atoms. Each collision potentially resulting in another shrinkage.
Each successive level of shrinkage releases even more energy than the previous level. In other words, the smaller the Hydrino gets, the more energy it releases each time it shrinks another level.
To get an idea of the amounts of energy involved, I now need to introduce the concept of the "electron volt" (eV). An eV is the amount of energy that a single electron gains when it passes through a voltage drop of one volt. Since a volt isn't much (a "dry cell" is about 1.5 volts), and the electric charge on an electron is utterly minuscule, an eV is a very tiny amount of energy. Nevertheless, it is a very representative measure of the energy involved in chemical reactions. e.g. when Hydrogen and Oxygen combine to form a water molecule, about 2.5 eV of energy is released per water molecule formed.
When Hydrogen shrinks to form a second level Hydrino (Hydrogen itself is considered to be the first level Hydrino), about 41 eV of energy is released. This is already about 16 times more than when Hydrogen and Oxygen combine to form water. And it gets better from there. If that newly formed Hydrino collides with another catalyst atom, and shrinks again, to the third level, then an additional 68 eV is released. This can go on for quite a way, and the amount gets bigger each time. Here is a table of some level numbers, and the energy released in dropping to that level from the previous level, IOW when you go from e.g. level 4 to level 5, 122 eV is released. (BTW larger level numbers represent smaller Hydrinos).
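Where do those numbers come from? They're nothing exotic: they're just the ordinary 13.6 eV hydrogen ground-state binding energy scaled by the square of the claimed "level". A few lines of arithmetic - my reconstruction of the claimed bookkeeping, emphatically not an endorsement of it - reproduce the quoted figures:

```python
# Mills's "level p" hydrino is hydrogen with the claimed fractional
# quantum number 1/p, so its claimed energy is the ordinary ground-state
# energy scaled by p**2.
RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, in eV

def hydrino_energy(p: int) -> float:
    """Claimed energy of the level-p hydrino state, in eV."""
    return -RYDBERG_EV * p**2

for p in range(1, 6):
    released = hydrino_energy(p) - hydrino_energy(p + 1)
    print(f"level {p} -> {p + 1}: {released:.1f} eV released")
# level 1 -> 2:  40.8 eV  (the quoted "about 41 eV")
# level 2 -> 3:  68.0 eV  (the quoted 68 eV)
# level 4 -> 5: 122.4 eV  (the quoted 122 eV)
```

The arithmetic is internally consistent; the problem, as we'll see, is that the states it's tallying up don't exist.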
And some of the press:
• IEEE Spectrum, 1/2009.
• Press release from 2009, claiming commercialization within 1 year to 18 months.
• Slashdot story from 2005, claiming to be months away from commercialization
• Village Voice, 1999, complete with a claim to commercialize a hydrino power generator within a year.
Notice a pattern?
The short version of the problem with hydrinos is really, really simple.
The most fundamental fact of nature that we've observed is that everything tends to move towards its lowest energy state. The whole theory of hydrinos basically says that that's not true: everything except hydrogen tends to move towards its lowest energy state, but hydrogen doesn't. It's got a dozen or so lower energy states, but none of the abundant quantities of hydrogen on earth are ever observed in any of those states unless they're manipulated by Mills's magical machine.
The whole basis of hydrino theory is Mills's CQM. CQM is rubbish - but it's impressive-looking rubbish. I'm not going to go deep into detail; you can see a detailed explanation of the problems here; I'll run through a short version.
To start, how is Mills claiming that hydrinos work? In CQM, he posits the existence of electron shell levels closer to the nucleus than the ground state of hydrogen. Based on his calculations, he comes up with an energy figure for the difference between the ground state and the hydrino state. Then he finds other substances that have the property that boosting one electron into a higher energy state would cost the same amount of energy. When a hydrogen atom collides with an atom that has a matching electron transition, the hydrogen can get bumped into the hydrino state, while kicking an electron into a higher orbital. That electron will supposedly, in due time, fall back to its original level, releasing the energy differential as a photon.
On this level, it sort-of looks correct. It doesn't violate conservation of energy: the collision between the two atoms doesn't produce anything magical. It's just a simple transfer of energy. That much is fine.
It's when you get into the details that it gets seriously fudgy.
Right from the start, if you know what you're doing, CQM goes off the rails. For example, CQM claims that you can describe the dynamics of an electron in terms of a classical wave charge-density function equation. Mills actually gives that function, and asserts that it respects Lorentz invariance. That's crucial - Lorentz invariance is critical for relativity: it's the fundamental mathematical symmetry that's the basis of relativity. But his equation doesn't actually respect Lorentz invariance. Or, rather, it does - but only if the electron is moving at the speed of light. Which it can't do.
Mills goes on to describe the supposed physics of hydrinos. If you work through his model, the only state that is consistent with both his equations, and his claim that the electrons orbit in a spherical shell above the atom - well, if you do that, you'll find that according to his own equations, there is only one possible state for a hydrogen atom - the conventional ground state.
It goes on in that vein for quite a while. He's got an elaborate system, with an elaborate mathematical framework... but none of the math actually says what he says it says. The Lorentz invariance example that I cited above - that's typical. Print an equation, say that it says X, even though the equation doesn't say anything like X.
But we can go a bit further. The fundamental state of atoms is something that we understand pretty well, because we've got so many observations, and so much math describing it. And the thing is, that math is pretty damned convincing. That doesn't mean that it's correct, but it does mean that any theory that wants to replace it must be able to describe everything that we've observed at least as well as the current theory.
Why do atoms have the shape that they do? Why are they the size that they are? It's not a super easy thing to understand, because electrons aren't really particles. They're something strange. We don't often think about that, but it's true. They're deeply bizarre things. They're not really particles. Under many conditions, they behave more like waves than like particles. And that's true of the atom.
The reason that atoms are the size that they are is because the electron "orbitals" have sizes and shapes that are determined by resonant frequencies of the wave-like aspects of electrons. What Mills is suggesting is that there are a range of never-before observed resonant frequencies of electrons. But the math that he uses to support that claim just doesn't work.
Now, I'll be honest here. I'm not nearly enough of a physics whiz to be competent to judge the accuracy of his purported quantum mechanical system. But I'm still pretty darn confident that he's full of crap. Why?
I'm from New Jersey - pretty much right up the road from where his lab is. Going to college right up the road from him, I've been hearing about him for a long time. He's been running this company for quite a while - going on two decades. And all that time, the company has been constantly issuing press releases promising that it's just a year away from being commercialized! It's always one step away. But never, never, has he released enough information to let someone truly independent verify or reproduce his results. And he's been very deceptive about that: he's made various claims about independent verification on several occasions.
For example, he once cited that his work had been verified by a researcher at Harvard. In fact, he'd had one of his associates rent a piece of equipment at Harvard, and use it for a test. So yes, it was tested by a researcher - if you count his associate as a legitimate researcher. And it was tested at Harvard. But the claim that it was tested by a researcher at Harvard is clearly meant to imply that it was tested by a Harvard professor, when it wasn't.
For something around 20 years, he's been making promises, giving very tightly controlled demos, refusing to give any real details, refusing to actually explain how to reproduce his "results", and promising that it's just one year away from being commercialized!
And yet... hydrogen is the most common substance in the universe. If it really had a lower energy state than what we call its ground state, and that lower energy state was really as miraculous as he claims - why wouldn't we see it? Why hasn't it ever been observed? Substances like Argon are rare - but they're not that rare. Argon has been exposed to hydrogen under laboratory conditions plenty of times - and yet, nothing anomalous has ever been observed. All of the supposed hydrino catalysts have been observed so often under so many conditions - and yet, no anomalous energy has ever been noticed before. But according to Mills, we should be seeing tons of it.
And that's not all. Mills also claims that you can create all sorts of compounds with hydrinos - and naturally, every single one of those compounds is positively miraculous! Bonded with silicon, you get better semiconductors! Substitute hydrinos for regular hydrogen in a battery electrolyte, and you get a miracle battery! Use it in rocket fuel instead of common hydrogen, and you get a ten-fold improvement in the performance of a rocket! Make a laser from it, and you can create higher-density data storage and communication systems. Everything that hydrinos touch is amazing!
But... not one of these miraculous substances has ever been observed before. We work with silicon all the time - but we've never seen the magic silicon hydrino compound. And he's never been willing to actually show anyone any of these miracle substances.
He claims that he doesn't show it because he's protecting his intellectual property. But that's silly. If hydrinos existed, then just telling us that these compounds exist and have interesting properties should be enough for other labs to go ahead and experiment with producing them. But no one has. Whether he shows the supposed miracle compounds or not doesn't change anyone else's ability to produce those. Even if he's keeping his magic hydrino factory secret, so that no one else has access to hydrinos, by telling us that these compounds exist, he's given away the secret. He's not protecting anything anymore: by publicly talking about these things, he's given up his right to patent the substances. It's true that he still hasn't given up the rights to the process of producing them - but publicly demonstrating these alleged miracle substances wouldn't take away any legal rights that he hasn't already given up. So, why doesn't he show them to you?
Because they don't exist.
## Second Law Silliness from Sewell
Dec 12 2011 Published by MarkCC under Bad Physics, Intelligent Design, Uncategorized
So, via Panda's Thumb, I hear that Granville Sewell is up to his old hijinks. Sewell is a classic creationist crackpot, who's known for two things.
First, he's known for chronically recycling the old "second law of thermodynamics" garbage. And second, he's known for building arguments based on "thought experiments" - where instead of doing experiments, he just makes up the experiments and the results.
The second-law crankery is really annoying. It's one of the oldest creationist pseudo-scientific schticks around, and it's such a terrible argument. It's also a sort-of pet peeve of mine, because I hate the way that people generally respond to it. It's not that the common response is wrong - but rather that the common responses focus on one error, while neglecting to point out that there are many deeper issues with it.
In case you've been hiding under a rock, the creationist argument is basically:
1. The second law of thermodynamics says that disorder always increases.
2. Evolution produces highly-ordered complexity via a natural process.
3. Therefore, evolution must be impossible, because you can't create order.
The first problem with this argument is very simple. The second law of thermodynamics does not say that disorder always increases. It's a classic example of my old maxim: the worst math is no math. The second law of thermodynamics doesn't say anything as fuzzy as "you can't create order". It's a precise, mathematical statement. The second law of thermodynamics says that in a closed system:

$$\Delta S \geq \frac{Q}{T}$$

where:

1. $S$ is the entropy in a system,
2. $Q$ is the amount of heat transferred in an interaction, and
3. $T$ is the temperature of the system.
Translated into English, that basically says that in any interaction that involves the transfer of heat, the entropy of the system cannot possibly be reduced. Other ways of saying it include: "There is no possible process whose sole result is the transfer of heat from a cooler body to a warmer one"; or "No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work."
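Here's a worked one-liner connecting the formal statement to the "cooler to warmer" phrasing (my addition). Let a quantity of heat $Q$ flow from a hot body at temperature $T_h$ to a cold body at $T_c < T_h$. The total entropy change is

$$\Delta S_{total} = -\frac{Q}{T_h} + \frac{Q}{T_c} = Q\left(\frac{1}{T_c} - \frac{1}{T_h}\right) > 0$$

Run the heat the other way - from cold to hot - and the same expression comes out negative, which is exactly what the second law forbids.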
Note well - there is no mention of "chaos" or "disorder" in any of these statements: the second law is a statement about the way that energy can be used. It basically says that when you try to use energy, some of that energy is inevitably lost in the process of using it.
Talking about "chaos", "order", "disorder" - those are all metaphors. Entropy is a difficult concept. It doesn't really have a particularly good intuitive meaning. It means something like "energy lost into forms that can't be used to do work" - but that's still a poor attempt to capture it in metaphor. The reason that people use order and disorder comes from a way of thinking about energy: if I can extract energy from burning gasoline to spin the wheels of my car, the process of spinning my wheels is very organized - it's something that I can see as a structured application of energy - or, stretching the metaphor a bit, the energy that spins the wheels in structured. On the other hand, the "waste" from burning the gas - the heating of the engine parts, the energy caught in the warmth of the exhaust - that's just random and useless. It's "chaotic".
So when a creationist says that the second law of thermodynamics says you can't create order, they're full of shit. The second law doesn't say that - not in any shape or form. You don't need to get into the whole "open system/closed system" stuff to dispute it; it simply doesn't say what they claim it says.
But let's not stop there. Even if you accept that the mathematical statement of the second law really did say that chaos always increases, that still has nothing to do with evolution. Look back at the equation. What it says is that in a closed system, in any interaction, the total entropy must increase. Even if you accept that entropy means chaos, all that it says is that in any interaction, the total entropy must increase.
It doesn't say that you can't create order. It says that the cumulative end result of any interaction must increase entropy. Want to build a house? Of course you can do it without violating the second law. But to build that house, you need to cut down trees, dig holes, lay foundations, cut wood, pour concrete, put things together. All of those things use a lot of energy. And in each minute interaction, you're expending energy in ways that increase entropy. If the creationist interpretation of the second law were true, you couldn't build a house, because building a house involves creating something structured - creating order.
Similarly, if you look at a living cell, it does a whole lot of highly ordered, highly structured things. In order to do those things, it uses energy. And in the process of using that energy, it creates entropy. In terms of order and chaos, the cell uses energy to create order, but in the process of doing so it creates wastes - waste heat, and waste chemicals. It converts high-energy structured molecules into lower-energy molecules, converting things with energetic structure to things without. Look at all of the waste that's produced by a living cell, and you'll find that it does produce a net increase in entropy. Once again, if the creationists were right, then you wouldn't need to worry about whether evolution was possible under thermodynamics - because life wouldn't be possible.
In fact, if the creationists were right, the existence of planets, stars, and galaxies wouldn't be possible - because a galaxy full of stars with planets is far less chaotic than a loose cloud of hydrogen.
Once again, we don't even need to consider the whole closed system/open system distinction, because even if we treat earth as a closed system, their arguments are wrong. Life doesn't really defy the laws of thermodynamics - it produces entropy exactly as it should.
But the creationist second-law argument is even worse than that.
The second-law argument is that the fact that DNA "encodes information", and that the amount of information "encoded" in DNA increases as a result of the evolutionary process, means that evolution violates the second law.
This absolutely doesn't require bringing in any open/closed system discussions. Doing that is just a distraction which allows the creationist to sneak their real argument underneath.
The real point is: DNA is a highly structured molecule. No disagreement there. But so what? In the life of an organism, there are countless energetic interactions, all of which result in a net increase in the amount of entropy. Why on earth would adding a bunch of links to a DNA chain completely outweigh those? In fact, changing the DNA of an organism is just another entropy-increasing event. The chemical processes in the cell that create DNA strands consume energy, and use that energy to produce molecules like DNA, producing entropy along the way, just like pretty much every other chemical process in the universe.
The creationist argument relies on a bunch of sloppy handwaves: "entropy" is disorder; "you can't create order", "DNA is ordered". In fact, evolution has no problem with respect to entropy: one way of viewing evolution is that it's a process of creating ever more effective entropy-generators.
Now we can get to Sewell and his arguments, and you can see how perfectly they match what I've been talking about.
Imagine a high school science teacher renting a video showing a tornado sweeping through a town, turning houses and cars into rubble. When she attempts to show it to her students, she accidentally runs the video backward. As Ford predicts, the students laugh and say, the video is going backwards! The teacher doesn’t want to admit her mistake, so she says: “No, the video is not really going backward. It only looks like it is because it appears that the second law is being violated. And of course entropy is decreasing in this video, but tornados derive their power from the sun, and the increase in entropy on the sun is far greater than the decrease seen on this video, so there is no conflict with the second law.” “In fact,” the teacher continues, “meteorologists can explain everything that is happening in this video,” and she proceeds to give some long, detailed, hastily improvised scientific theories on how tornados, under the right conditions, really can construct houses and cars. At the end of the explanation, one student says, “I don’t want to argue with scientists, but wouldn’t it be a lot easier to explain if you ran the video the other way?”
Now imagine a professor describing the final project for students in his evolutionary biology class. “Here are two pictures,” he says.
“One is a drawing of what the Earth must have looked like soon after it formed. The other is a picture of New York City today, with tall buildings full of intelligent humans, computers, TV sets and telephones, with libraries full of science texts and novels, and jet airplanes flying overhead. Your assignment is to explain how we got from picture one to picture two, and why this did not violate the second law of thermodynamics. You should explain that 3 or 4 billion years ago a collection of atoms formed by pure chance that was able to duplicate itself, and these complex collections of atoms were able to pass their complex structures on to their descendants generation after generation, even correcting errors. Explain how, over a very long time, the accumulation of genetic accidents resulted in greater and greater information content in the DNA of these more and more complicated collections of atoms, and how eventually something called “intelligence” allowed some of these collections of atoms to design buildings and computers and TV sets, and write encyclopedias and science texts. But be sure to point out that while none of this would have been possible in an isolated system, the Earth is an open system, and entropy can decrease in an open system as long as the decreases are compensated by increases outside the system. Energy from the sun is what made all of this possible, and while the origin and evolution of life may have resulted in some small decrease in entropy here, the increase in entropy on the sun easily compensates this tiny decrease. The sun should play a central role in your essay.”
When one student turns in his essay some days later, he has written,
“A few years after picture one was taken, the sun exploded into a supernova, all humans and other animals died, their bodies decayed, and their cells decomposed into simple organic and inorganic compounds. Most of the buildings collapsed immediately into rubble, those that didn’t, crumbled eventually. Most of the computers and TV sets inside were smashed into scrap metal, even those that weren’t, gradually turned into piles of rust, most of the books in the libraries burned up, the rest rotted over time, and you can see see the result in picture two.”
The professor says, “You have switched the pictures!” “I know,” says the student. “But it was so much easier to explain that way.”
Evolution is a movie running backward, that is what makes it so different from other phenomena in our universe, and why it demands a very different sort of explanation.
This is a perfect example of both of Sewell's usual techniques.
First, the essential argument here is rubbish. It's the usual "second-law means that you can't create order", even though that's not what it says, followed by a rather shallow and pointless response to the open/closed system stuff.
And the second part is what makes Sewell Sewell. He can't actually make his own arguments. No, that's much too hard. So he creates fake people, and plays out a story using his fake people and having them make fake arguments, and then uses the people in his story to illustrate his argument. It's a technique that I haven't seen used so consistently since I read Ayn Rand in high school.
## Yet Another Cantor Crank
I get a fair bit of mail from crackpots. The category that I find most annoying is the Cantor cranks. Over and over and over again, these losers send me their "proofs".
What Cantor did was remarkably elegant. He showed that given anything that is claimed to be a one-to-one mapping between the set of integers and the set of real numbers (also sometimes described as an enumeration of the real numbers - the two terms are functionally equivalent), then here's a simple procedure which will produce a real number that isn't included in that mapping - which shows that the mapping isn't one-to-one.
The problem with the run-of-the-mill Cantor crank is that they never even try to actually address Cantor's proof. They just say "look, here's a mapping that works!"
So the entire disproof of their "refutation" of Cantor's proof is... Cantor's proof. They completely ignore the thing that they're claiming to disprove.
I got another one of these this morning. It's particularly annoying because he makes the same mistake as just about every other Cantor crank - but he also specifically points to one of my old posts where I rant about people who make exactly the same mistake as him.
To add insult to injury, the twit insisted on sending me a PDF - and not just a PDF, but a bitmapped PDF - meaning that I can't even copy text out of it. So I can't give you a link; I'm not going to waste Scientopia's bandwidth by putting it here for download; and I'm not going to re-type his complete text. But I'll explain, in my own compact form, what he did.
It's an old trick; for example, it's ultimately not that different from what John Gabriel did. The only real novelty is that he does it in binary - which isn't much of a novelty. This author calls it the "mirror method". The idea is, in one column, write a list of the integers greater than 0. In the opposite column, write the mirror of that number, with the decimal (or, technically, binary) point in front of it:
| Integer | Real   |
|---------|--------|
| 0       | 0.0    |
| 1       | 0.1    |
| 10      | 0.01   |
| 11      | 0.11   |
| 100     | 0.001  |
| 101     | 0.101  |
| 110     | 0.011  |
| 111     | 0.111  |
| 1000    | 0.0001 |
| ...     | ...    |
Extend that out to infinity, and, according to the author, the second column is a sequence of every possible real number, and the table is a complete mapping.
The problem is, it doesn't work, for a remarkably simple reason.
There is no such thing as an integer whose representation requires an infinite number of digits. For every possible integer, its representation in binary has a fixed number of bits: for any integer $N$, its representation is no longer than $\lfloor \log_2 N \rfloor + 1$ bits. That's always a finite integer.
But... we know that the set of real numbers includes numbers whose representation is infinitely long. So this enumeration won't include them. Where does the square root of two fall in this list? It doesn't: it can't be written as a finite string in binary. Where is π? It's nowhere; there's no finite representation of π in binary.
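To make this concrete, here is a small sketch of the mirror map (the function name `mirror_real` is mine, not the author's). Every value it can ever produce is a dyadic rational — its denominator is a power of 2, i.e. its binary expansion terminates — so $1/3 = 0.010101\ldots_2$, let alone $\sqrt{2}$ or π, provably never appears:

```python
from fractions import Fraction

def mirror_real(n):
    """The 'mirror method': reverse the binary digits of n, put them after the point."""
    bits = bin(n)[2:]
    return Fraction(int(bits[::-1], 2), 2 ** len(bits))

for n in range(1, 9):                      # reproduces the table above
    print(bin(n)[2:], mirror_real(n))

# Every output has a power-of-2 denominator, i.e. a *terminating* binary expansion,
# so 1/3 (binary 0.010101...) can never show up anywhere in the list.
assert all(mirror_real(n).denominator & (mirror_real(n).denominator - 1) == 0
           for n in range(1, 10 ** 4))
```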
The author claims that the novel property of his method is:
Cantor proved the impossibility of both our enumerations as follows: for any given enumeration like ours Cantor proposed his famous diagonal method to build the contra-sample, i.e., an element which is quasi omitted in this enumeration. Before now, everyone agreed that this element was really omitted as he couldn't tell the ordinal number of this element in the give enumeration: now he can. So Cantor's contra-sample doesn't work.
This is, to put it mildly, bullshit.
First of all - he pretends that he's actually addressing Cantor's proof - only he really isn't. Remember - what Cantor's proof did was show you that, given any purported enumeration of the real numbers, that you could construct a real number that isn't in that enumeration. So what our intrepid author did was say "Yeah, so, if you do Cantor's procedure, and produce a number which isn't in my enumeration, then I'll tell you where that number actually occurred in our mapping. So Cantor is wrong."
But that doesn't actually address Cantor. Cantor's construction specifically shows that the number it constructs can't be in the enumeration - because the procedure specifically guarantees that it differs from every number in the enumeration in at least one digit. So it can't be in the enumeration. If you can't show a logical problem with Cantor's construction, then any argument like the author's is, simply, a priori rubbish. It's just handwaving.
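And Cantor's construction itself is only a few lines (a sketch: the names `binary_digit` and `cantor_diagonal` are mine, and the diagonal number is truncated to 30 digits for display — the actual construction runs through all of them). The number built below differs from the $k$-th entry in its $k$-th binary digit, so it differs from every entry:

```python
from fractions import Fraction

def mirror_real(n):                        # the "mirror" enumeration again
    bits = bin(n)[2:]
    return Fraction(int(bits[::-1], 2), 2 ** len(bits))

def binary_digit(x, k):
    """k-th binary digit after the point of a rational x in [0, 1)."""
    return (x.numerator * 2 ** k // x.denominator) % 2

def cantor_diagonal(enum, digits):
    """Build a number whose k-th digit differs from the k-th digit of enum(k)."""
    out = Fraction(0)
    for k in range(1, digits + 1):
        out += Fraction(1 - binary_digit(enum(k), k), 2 ** k)
    return out

d = cantor_diagonal(mirror_real, 30)
assert all(binary_digit(d, k) != binary_digit(mirror_real(k), k) for k in range(1, 31))
```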
But as I mentioned earlier, there's an even deeper problem. Cantor's method produces a number which has an infinitely long representation. So the earlier problem - that all integers have a finite representation - means that you don't even need to resort to anything as complicated as Cantor to defeat this. If your enumeration doesn't include any infinitely long fractional values, then it's absolutely trivial to produce values that aren't included: 1/3, 1/7, 1/9.
In short: stupid, dull, pointless; absolutely typical Cantor crankery.
## Hold on tight: the world ends next saturday!
(For some idiot reason, I was absolutely certain that today was the 12th. It's not. It's the tenth. D'oh. There's a freakin' time&date widget on my screen! Thanks to the commenter who pointed this out.)
A bit over a year ago, before the big move to Scientopia, I wrote about a loonie named Harold Camping. Camping is the guy behind the uber-christian "Family Radio". He predicted that the world is going to end on May 21st, 2011. I first heard about this when it got written up in January of 2010 in the San Francisco Chronicle.
And now, we're less than two weeks away from the end of the world according to Mr. Camping! So I thought hey, it's my last chance to make sure that I'm one of the damned!
## An Open Letter to Glen Beck from a non-Orthodox Jew
Hey, Glen.
Look, I know we don't get along. We don't agree on much of anything. But still, we really need to talk.
The other day, you said some really stupid, really offensive, and really ignorant things about Jews. I know you're insulted - after all, four hundred Rabbis from across the spectrum came together to call you out for being an antisemitic asshole, and that's gotta hurt.
But that's no excuse for being a pig-ignorant jackass.
## Another Crank comes to visit: The Cognitive Theoretic Model of the Universe
When an author of one of the pieces that I mock shows up, I try to bump them up to the top of the queue. No matter how crackpotty they are, I think that if they've gone to the trouble to come and defend their theories, they deserve a modicum of respect, and giving them a fair chance to get people to see their defense is the least I can do.
A couple of years ago, I wrote about the Cognitive Theoretic Model of the Universe. Yesterday, the author of that piece showed up in the comments. It's a two-year-old post, which was originally written back at ScienceBlogs - so a discussion in the comments there isn't going to get noticed by anyone. So I'm reposting it here, with some revisions.
Stripped down to its basics, the CTMU is just yet another postmodern "perception defines the universe" idea. Nothing unusual about it on that level. What makes it interesting is that it tries to take a set-theoretic approach to doing it. (Although, to be a tiny bit fair, he claims that he's not taking a set theoretic approach, but rather demonstrating why a set theoretic approach won't work. Either way, I'd argue that it's more of a word-game than a real theory, but whatever...)
The real universe has always been theoretically treated as an object, and specifically as the composite type of object known as a set. But an object or set exists in space and time, and reality does not. Because the real universe by definition contains all that is real, there is no "external reality" (or space, or time) in which it can exist or have been "created". We can talk about lesser regions of the real universe in such a light, but not about the real universe as a whole. Nor, for identical reasons, can we think of the universe as the sum of its parts, for these parts exist solely within a spacetime manifold identified with the whole and cannot explain the manifold itself. This rules out pluralistic explanations of reality, forcing us to seek an explanation at once monic (because nonpluralistic) and holistic (because the basic conditions for existence are embodied in the manifold, which equals the whole). Obviously, the first step towards such an explanation is to bring monism and holism into coincidence.
## E. E. Escultura and the Field Axioms
As you may have noticed, E. E. Escultura has shown up in the comments to this blog. In one comment, he made an interesting (but unsupported) claim, and I thought it was worth promoting up to a proper discussion of its own, rather than letting it rage in the comments of an unrelated post.
What he said was:
You really have no choice friends. The real number system is ill-defined, does not exist, because its field axioms are inconsistent!!!
This is a really bizarre claim. The field axioms are inconsistent?
I'll run through a quick review, because I know that many/most people don't have the field axioms memorized. But the field axioms are, basically, an extremely simple set of rules describing the behavior of an algebraic structure. The real numbers are the canonical example of a field, but you can define other fields; for example, the rational numbers form a field; if you allow the values to be a class rather than a set, the surreal numbers form a field.
So: a field is a collection of values F with two operations, "+" and "*", such that:
1. Closure: ∀ a, b ∈ F: a + b ∈ F ∧ a * b ∈ F
2. Associativity: ∀ a, b, c ∈ F: a + (b + c) = (a + b) + c ∧ a * (b * c) = (a * b) * c
3. Commutativity: ∀ a, b ∈ F: a + b = b + a ∧ a * b = b * a
4. Identity: there exist distinct elements 0 and 1 in F such that ∀ a ∈ F: a + 0 = a, ∀ b ∈ F: b*1=b
5. Additive inverses: ∀ a ∈ F, there exists an additive inverse -a ∈ F such that a + -a = 0.
6. Multiplicative Inverse: For all a ∈ F where a != 0, there is a multiplicative inverse a⁻¹ ∈ F such that a * a⁻¹ = 1.
7. Distributivity: ∀ a, b, c ∈ F: a * (b+c) = (a*b) + (a*c)
So, our friend Professor Escultura claims that this set of axioms is inconsistent, and that therefore the real numbers are ill-defined. One of the things that makes the field axioms so beautiful is how simple they are. They're a nice, minimal illustration of how we expect numbers to behave.
So, Professor Escultura: to claim that the field axioms are inconsistent, what you're saying is that this set of axioms leads to an inevitable contradiction. So, what exactly about the field axioms is inconsistent? Where's the contradiction?
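While waiting for that contradiction, it's worth noting that an axiom set with a model cannot be inconsistent, and the field axioms have tiny models. Here is a minimal brute-force sketch (all names mine) checking the integers mod 5. Of course, the real numbers satisfy more than the field axioms — order, completeness — so even a genuine complaint about the reals would have to attack those, not the field axioms:

```python
from itertools import product

# Consistency is settled by exhibiting a model: Z/5Z satisfies all seven axioms.
F = range(5)
add = lambda a, b: (a + b) % 5
mul = lambda a, b: (a * b) % 5

for a, b, c in product(F, repeat=3):
    assert add(a, b) in F and mul(a, b) in F                     # 1. closure
    assert add(a, add(b, c)) == add(add(a, b), c)                # 2. associativity (+)
    assert mul(a, mul(b, c)) == mul(mul(a, b), c)                # 2. associativity (*)
    assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)     # 3. commutativity
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))        # 7. distributivity

assert all(add(a, 0) == a and mul(a, 1) == a for a in F)         # 4. identities
assert all(any(add(a, x) == 0 for x in F) for a in F)            # 5. additive inverses
assert all(any(mul(a, x) == 1 for x in F) for a in F if a != 0)  # 6. mult. inverses
print("Z/5Z satisfies all seven field axioms")
```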
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9649382829666138, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/tagged/matrix+algorithm
|
# Tagged Questions
0answers
53 views
### Fast calculation of commute distances on large graphs (i.e. fast computation of the pseudo-inverse of a large Laplacian / Kirchhoff matrix)
I have a large, locally connected and undirected graph $G$ with $\approx 10^4$ vertices and $\approx 10^5$ to $\approx 10^6$ edges. Moreover I can bound the maximum vertex degree as $Q_{max}$. I ...
1answer
217 views
### Efficient method for inverting a block tridiagonal matrix
Is there a better method to invert a large block tridiagonal Hermitian block matrix, other than treating it as an ordinary matrix? For example: ...
3answers
278 views
### How do we solve Eight Queens variation using primes?
Using a $p_n \times p_n$ matrix, how can we solve the Eight queens puzzle to find a prime in every row and column? ...
1answer
354 views
### Computing Slater determinants
I need to compute Slater determinants. I'm wondering if I would benefit from assigning each of my functions to a variable prior to computation. I'm working with Slater determinants, but my question ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8856667280197144, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/potential-theory?sort=unanswered&pagesize=15
|
# Tagged Questions
The potential-theory tag has no wiki summary.
1answer
91 views
### Poisson integral on $\mathbb{H}$ for boundary data which is orientation-preserving homeomorphism of $\mathbb{R}$
Let $f$ be a real-valued function (in my case, an orientation-preserving homeomorphism of $\mathbb{R}$) on the real line $\mathbb{R}$ which is not in any $L^p$ -space. Let us take the simplest example ...
1answer
47 views
### Problem on Yukawa Potential
One definition of the Yukawa potential on $\mathbb{R}^n$ is the solution $G$ in the sense of distributions to $(-\Delta + \mu^2)G = \delta$. This 'Green's function' is given by \begin{align*} G(x) = ...
0answers
145 views
### Potential theory: discrete-time Markov processes
Recently I've found lecture notes on "Analysis on Graphs" where the potential theory methods were used to study discrete-time, time-reversible Markov chains (i.e. the state space is countable). ...
0answers
82 views
### Harmonic measure or harmonic kernel
In the theory of discrete-time stochastic processes on a measurable space $(\mathscr X,\mathscr B(\mathscr X))$ one usually starts with a Markov kernel P:\mathscr X\times \mathscr B(\mathscr ...
0answers
69 views
### A finely open set, not open up to polar set?
Is there a (simple) example of a finely open set (i.e. w.r.t. the fine topology in potential theory) $O$ in $\mathbb R^n$, which is not open up to a polar set (i.e. zero capacity), i.e., there does ...
0answers
40 views
### Tight bounds for harmonic measure
I recently came across a question concerning harmonic measure here, and was wondering if there is a good reference summarizing different methods of estimating harmonic measure? Specifically, I would ...
0answers
27 views
### discrete harmonic extension (an exercise of Grimmett's “probability on graphs”)
I'm struggling with exercise 1.3 in Grimmett's book "probability on graphs". Take $G = (V,E)$ a finite connected graph with given positive conductances $(w_e)_{e \in E}$, and let $(x_v)_{v \in V}$ be ...
0answers
51 views
### Inequality for harmonic extension : Is $\int_{t\in S^1} |t-\zeta|^{\alpha}p(z,t) |dt| \leq K|z-\zeta|^{\alpha}, 0< \alpha < 1$ for uniform $K$?
Let $\zeta\in S^1$(unit circle in the complex plane) and $z\in \mathbb{D}$. Fix $0< \alpha < 1$. Then, is the following true ? (Question 1) Let $p(z,t) = \frac{1}{2\pi}.\frac{1-|z|^2}{|z-t|^2}$ ...
0answers
86 views
### Evaluate integral: $\int_{-1}^{1} \frac{\log|z-x|}{\pi\sqrt{1-x^2}}dx$
Show that $$\int_{-1}^{1} \frac{\log|z-x|}{\pi\sqrt{1-x^2}}dx = \log{\frac{|z+\sqrt{z^2-1}|}{2}},\quad z \in \mathbb{C}$$ How can I apply the Joukowski conformal map to this problem? Thanks.
0answers
28 views
### Reference in capacity theory
I am studying capacity theory in the chapter two of the book "Nonlinear Potential Theory of Degenerate Elliptic Equations". The authors are Juha Heinonen, Tero Kilpelainen and Olli Martio. Someone ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8674611449241638, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/6444/how-long-for-a-simple-random-walk-to-exceed-sqrtt/38621
|
## How long for a simple random walk to exceed sqrt(T)?
Let $\{R_n\}$ be a simple random walk with $R_0 = 0$, and let $T$ be the smallest index such that $k \sqrt{T} < |R_T|$ for some positive $k$. What is an expression for the probability distribution of $T$?
I wrote the question myself. I am curious to know what is the answer and if the question is well-formulated. – Dan Brumleve Nov 22 2009 at 9:28
It's not that it doesn't seem interesting - it's that it seems like the sort of thing where someone with access to a maths library could go and try to look it up, or just sit down and work it out. It smells like the sort of question that must have been answered somewhere without recourse to very advanced work. How about Feller's book, for instance? – Yemon Choi Nov 22 2009 at 12:47
I think I disagree. It might be that easy, but it might take a bit of work, especially for someone who doesn't work in probability. Why don't we just wait and see whether an expert comes by and knows this? – David Speyer Nov 22 2009 at 14:30
@David: Fair enough. I guess it's a question of phrasing: I am always more sympathetic to questions which ask "is this known?" or "I've tried this, but it didn't work, should I look at something else?" Questions are easy (I have 50+ pages of my own, unanswered); attainable questions less so... – Yemon Choi Nov 22 2009 at 14:36
If you ask for the smallest index greater than T_0, this is a very natural question. Consider $k=2$: Suppose you are trying to reject a null hypothesis on, say, whether a coin is fair. If the coin actually is fair, how much data do you need to collect before you can incorrectly reject the null hypothesis at the 2 standard deviation level of significance? I don't think this is a homework-level question for an undergraduate probability course, and it would require hints in a graduate course. – Douglas Zare Jan 31 2010 at 21:11
## 4 Answers
For a Brownian motion, Novikov finds an explicit expression for any real moments (positive and negative) of the random variable $(\tau(a,b,c)+c)$, where $$\tau(a,b,c) = \inf(t \geq 0, W(t) \leq -a +b(t+c)^{1/2})$$ with $a \geq 0$, $c \geq 0$, and $bc^{1/2} < a$. Shepp provides similar results but with W(t) replaced by |W(t)| in the definition, and the range of permissible $a,b,c$ restricted accordingly. Shepp also cites papers by Blackwell and Freedman (1964), Chow, Robbins, and Teicher (1965), and Chow and Teicher (1965), which look like they prove similar but weaker results when the Brownian motion is replaced by a random walk with finite variance. I don't have time to read those references at the moment but I figure these papers should lead you to your answer.
This answer is also about a slightly different question but it has the most new information for me, so I am accepting it as the bounty is almost over. – Dan Brumleve Aug 23 2010 at 0:14
I doubt whether you can write down an exact formula for the distribution of T.
If you are interested in large values of k, the law of the iterated logarithm will enter into the picture. The typical value of T (say, the expectation) should be of order $\exp(\exp(k^2/2))$.
I haven't looked at this too closely, but I think the expectation should be infinite. The median or other quantiles may be suggested by a version of the law of the iterated logarithm. – Douglas Zare Jan 31 2010 at 20:32
Humm, you are absolutely right, the expectation is infinite for k larger than 1 (Theorem 1.7, Chapter 3 in Durrett's book "Theory and Examples"). – Guillaume Aubrun Feb 5 2010 at 12:21
Building on Yemon's suggestion that the solution is the distribution of some hitting time: if we assume the threshold for the 'hit' is sqrt(T), and that the variance, u, of the random variable is directly proportional to the square root of time, then the mean time for that variable to exceed sqrt(T) may equal k * u * sqrt(T) * Phi(1)/2, or approximately k * u * sqrt(T) * 0.16, distributed lognormally.
Following a naive heuristic that rescaling a simple random walk (on the line, say) will give us Brownian motion, the question looks like a discrete version of the following question: what is the distribution of the hitting time for a standard Brownian motion starting at the origin?
The question as posed might have a messier answer, but one might be able to make progress more directly. For instance, the probability that $T> n$ is the probability that $R_j^2 \leq k^2j$ for all $j=1,2,\dots, n$, and one might be able to calculate or at least estimate that probability directly by a brute-force counting argument. (I'm sure there should be a better way, though, involving judicious use of conditional probabilities.)
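Estimating that probability by brute force is easy to sketch in code (everything below — function names, the `max_steps` cutoff, the trial count — is my own scaffolding, not from the thread):

```python
import random

def sample_T(k, max_steps=10**5):
    """First index T with k*sqrt(T) < |R_T| for a simple +/-1 walk; None if not seen."""
    R = 0
    for t in range(1, max_steps + 1):
        R += random.choice((-1, 1))
        if R * R > k * k * t:          # integer form of |R_t| > k*sqrt(t)
            return t
    return None

random.seed(0)
trials = 1000
samples = [sample_T(k=1) for _ in range(trials)]
for n in (10, 100, 1000, 10000):
    tail = sum(1 for T in samples if T is None or T > n) / trials
    print(f"P(T > {n}) ~ {tail:.3f}")
```

The slowly decaying tail one sees here is at least consistent with the comments above about the expectation being infinite for larger $k$.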
For any $k \gt 0$, the infimum of the times where a Brownian motion is more than $k \sqrt T$ is 0. This may be counterintuitive, but it's a consequence of the combination of time inversion ($t W(1/t)$ is also Brownian) and the law of the iterated logarithm. – Douglas Zare Jan 31 2010 at 20:30
Thanks! it really has been too long since I learned/practiced any of this... – Yemon Choi Jan 31 2010 at 20:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468916058540344, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/200236-steel-drum-print.html
|
# Steel Drum
• June 20th 2012, 02:20 PM
kjvalm
Steel Drum
Hello! I am having trouble solving this problem. I don't know where to start, but I have some formulas that I started with and I don't know what to do from there.
Your task is to build a steel drum (right circular cylinder) of fixed volume. This time the consideration of waste material is added, but the material cost is still the same for the top and the sides (same gauge and same cost); the tops and the bottoms will be cut from sheet-metal squares of side length 2r. Use calculus to show that the amount of metal used is minimized when h/r = 8/π.
I have just the formulas:
--Area of a circle
--Volume of a cylinder
--Area of a square
• June 21st 2012, 07:18 AM
mfb
Re: Steel Drum
With the radius r and the height h, how much material is used?
Using the formula for the volume (and the fixed volume V), can you find a relation between r and h?
Can you express the used material as function of a single variable (r or h) only?
Do you know how to find the minimum of this function?
These steps are not specific for your problem here, they can be used for all problems of this type.
• June 23rd 2012, 01:46 PM
HallsofIvy
Re: Steel Drum
I presume the barrel can be made just by bending a rectangle of width h and length $2\pi r$, the circumference of the circle. Now, since we have to pay for waste as well as the material used, from "sheet metal from squares of length 2r" we have to pay for the full $2(2r)^2$. Now what is the total material used for both barrel and ends in terms of h and r? Use the formula for volume to reduce that to one variable.
Because the problem asks for a relation between r and h, rather than explicit values, you might try the "Lagrange multiplier" method. It tends to give relations first, which could then be solved for explicit values. Do you know that method?
However, doing it both ways, I get the same answer but NOT " $8/\pi$"!
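For what it's worth, a quick symbolic check (a sketch: it assumes the material paid for is the lateral rectangle $2\pi r \times h$ plus the two full $2r \times 2r$ squares, with $V = \pi r^2 h$ held fixed) does land on $h/r = 8/\pi$:

```python
import sympy as sp

r, h, V = sp.symbols('r h V', positive=True)

A = 2 * sp.pi * r * h + 2 * (2 * r) ** 2        # side rectangle + two waste-included squares
A_r = A.subs(h, V / (sp.pi * r ** 2))           # eliminate h via V = pi r^2 h

r_star = sp.solve(sp.Eq(sp.diff(A_r, r), 0), r)[0]
h_star = (V / (sp.pi * r ** 2)).subs(r, r_star)
print(sp.simplify(h_star / r_star))             # prints 8/pi
```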
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9373382925987244, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/19015-how-find-equation-polynomial-graph.html
|
# Thread:
1. ## how to find equation of polynomial from a graph
how to find equation of polynomial from a graph
2. Originally Posted by supersaiyan
how to find equation of polynomial from a graph
That depends. Do you have the graph to show us?
A polynomial of degree n can always be expressed as
$f(x) = a(x - r_1)(x - r_2)...(x - r_n)$
where the r's are the x-intercepts of the graph. The problem is that roots are not always real, and thus do not show up on the graph.
It would be best if you showed us your problem so we could tell you what cases you need to consider, etc.
-Dan
3. You can also pick (n+1) points (assuming you know its degree at least) and create a system of equations and solve them. (Which is extremely messy in general but it always works).
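As a concrete sketch of that second approach (the four sample points are made up — imagine reading them off the graph of a cubic with visible roots at -1, 1 and 2), solving the resulting linear system is one call in numpy:

```python
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0])            # n+1 = 4 points for a degree-3 fit
ys = np.array([0.0, 2.0, 0.0, 0.0])

coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)    # solves the Vandermonde system
print(np.round(coeffs, 6))                      # [ 1. -2. -1.  2.]  ->  x^3 - 2x^2 - x + 2
```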
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9679400324821472, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/46879/the-role-of-gravity-among-the-fundamental-forces-of-nature
|
# The Role of Gravity among the Fundamental Forces of Nature
If we look at the standard model, we have 4 fundamental forces which include
1. Gravity,
2. Electromagnetism,
3. Nuclear weak force,
4. Nuclear strong force.
I would like to look at Gravity for a minute. Scientists are still searching for what causes gravity. But is it possible that gravity is only a property of the space-time fabric?
For example take a blanket that is stretched tight and put different weight balls on the blanket. As one would see, the balls sink into the blanket differently. If you look at the space-time fabric as the blanket and the different balls as objects in the space-time fabric when it comes to planets and more on the macro scale, thinking gravity as a property of space-time fabric seems to work.
Since the standard model deals more at the quantum level, though, and we are looking at it here on Earth where gravity is $9.8\,m/s^2$, it seems to me that the standard model would also need to hold up if we had gravity of $11.4\,m/s^2$, or something more absurd like $20.6\,m/s^2$. Also, could we not say that gravity is really just a property of the space-time fabric?
Three is the preferred number these days. EM and weak have been considered unified for quite a while, even without the Higgs being nailed down in the summer of 2012. Therefore three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither shalt thou count two, except in proceeding to three. Five is right out. – DarenW Dec 15 '12 at 4:12
Putting heavy balls on a big blanket or rubber sheet to demonstrate gravity as curved spacetime always struck me as odd - it uses gravity to explain gravity! – DarenW Dec 15 '12 at 4:14
@DarenW It's worse than that. It distorted the geometry of the blanket in the wrong way: it makes space bigger near the heavy mass... – dmckee♦ Dec 15 '12 at 4:16
This is NOT "the Standard Model". The SM does not include gravity. It is a very specific field theoretical model with the group symmetries of SU(3)xSU(2)xU(1) which organize the grand majority of elementary particle data in a consistent and convenient way, by fitting its (SM) parameters to the known data. Gravity and its concomitant data is NOT included in the SM. The current theoretical effort studies how to unify gravity with the other three forces within a string model, which allows for quantization of gravity and a uniform representation by embedding the SM in its group structure. – anna v Dec 15 '12 at 7:21
@DarenW I disagree with your dismissive comments, it is legitimate to ask and investigate how all of the FOUR forces can be unified. I'm sorry that the fact that curvature of spacetime corresponds to gravity according to Einstein's equations struck you as odd. But this is how nature works, independent from the question if you like it or not. It is as it is. – Dilaton Dec 15 '12 at 11:23
## 3 Answers
This is the grand story of physics since the time of Maxwell up to today. Maxwell found ways to think of electric fields and magnetic fields as being aspects of one thing - the electromagnetic field. He had it easy because the characteristic energy relevant to unifying these is zero - the mass of the photon.
When physicists found that neutrons could change into protons, and protons into neutrons, measurements of energies and momenta of the particles involved indicated a characteristic mass of somewhere around 100GeV. The early theories using a "Fermi constant" fell apart, but Weinberg, Glashow and Salam, using an idea from Peter Higgs, came up with a good theory having W and Z bosons. These and the photon are aspects of one multidimensional field. EM + weak were then unified - after sufficient experimental verification, of course.
The strong force came to be seen as a matter of gluons. No one has yet unified the strong force with the electroweak. The energy range involved in this unification is many times higher. Despite much research effort since the 1970s, and quarks and gluons commonly accepted as real, there's too little we know about how these unify with the leptons and electroweak interactions.
The key idea is that when something sufficiently complex vibrates, classical or quantum, the different modes of vibrations might start off seeming all the same, but they usually pair up or combine in ways leading to qualitatively different phenomena. Those are vague, mushy words. An example to illustrate: imagine two identical pendulums side by side, with a wimpy rubber band or spring connecting them. The pendulums could start off swinging with any different amplitudes or any relative phase, but the wimpy spring will cause them to eventually swing together with identical amplitudes. This is the lowest energy state. Another state is for them to swing exactly opposite, same amplitudes. This state oscillates faster, and is easily perturbed into its lowest energy state + waste heat, sound, photons or however the system gets rid of excess energy.
With photons, W and Z - what is it that's doing the waving? No one knows. At some fundamental level, something in the "fabric" of spacetime is shaking about. In some way it has a zero-mass mode where it all swings together, with frequency times wavelength equal to the fundamental speed constant c. It has also higher energy modes where this is not so, but instead with waves characterized by masses of about 80 or 90 GeV. Something about these vibrations involves opposite motion, analogous to the pendulums swinging oppositely.
The old rubber sheet illustration of gravity is bogus. Spacetime does not have longitudinal vibrations (movement along the surface) - at least it doesn't match up well with any theory. Vibrations and bending perpendicular to the surface (meaning along 4th or more dimensions outside our 3D) correspond to gravitational waves and gravitational fields, although that misses the point. Gravity, outside of crazy places like black holes, is mostly a matter of rate-of-time. This is a hard concept to explain without getting into General Relativity and differential geometry. If someone has a stopwatch they'll find it takes one minute for a minute to pass by, no matter where they are. It's all about comparing nearby points of space - things seem to happen a bit faster for you as seen by me. Read about "gravitational redshift".
To write $\psi = A \sin(2\pi f t)$ is to describe an oscillation. This tells us frequency and amplitude, but does not tell us if it's a steel bolt hanging on a string, a buoy floating in the sea, or an avionics control system that's gone unstable.
To try to understand what it is that's doing the vibrating, you might as well pick up a book on new-age metaphysics. Even charge, plain old + and - of electrons and quarks, is a mystery as to what it really is. Some kind of pucker or topological twist of the fundamental substance? Electromagnetic fields are some kind of strain in that stuff? We can only write the equations describing overall behavior as waves.
Whatever space-time-energy-matter is made of fundamentally, little overlapping patches of space-thought-goop, quivering liquidy 3D membranes in higher dimensional space, strings or M-branes, who knows, all that physicists can do (for now) is mathematically describe the symmetries and energies and coupling constants of the vibrations.
Can you recommend some literature on new-age metaphysics :-) – Mikhail Dec 15 '12 at 6:21
On a large (i.e. non-quantum) scale describing gravity as a property of spacetime works well, and in fact it's exactly what General Relativity does. The problem is that the equation that describes the curvature of space-time is:
$$G_{ab} = 8\pi T_{ab}$$
where the quantity on the left, $G_{ab}$, describes the curvature and the quantity on the right, $T_{ab}$, describes the matter that is present.
The problem is that the matter, i.e. $T_{ab}$, is described by the Standard Model and we know it is quantised. So the equation has a non-quantised $G_{ab}$ on the left and a quantised $T_{ab}$ on the right, and equating a non-quantised to a quantised quantity doesn't make mathematical sense.
So while the GR description of gravity works well at large scales it must break down at scales where quantum effects become important. At these scales gravity has to be quantised and described by some theory that includes the Standard Model and General Relativity as low energy approximations.
First, the standard model does not include gravity, but only the other interactions. Precisely, the search for a unified theory of interactions is the search for a theory that joins the standard model with gravitation.
The cause of gravity is the stress-energy-momentum $\Theta_{ab}$. Anything with stress-energy-momentum generates gravity and feels gravity.
General relativity, which is a metric theory, describes gravitation as spacetime curvature via the Hilbert Einstein equations $G_{ab} = 8 \pi G T_{ab}$, where $T_{ab}$ is the stress-energy-momentum for matter and radiation alone. Gravitation can be described in alternative non-geometrical forms. In the field approach to gravitation (pioneered by Feynman and Weinberg among others), gravitation is related to a gravitational field associated to quanta of gravity that we call gravitons.
At the macroscopic level and for the usual applications both approaches, the geometrical and the non-geometrical, give the same answers and can be somewhat considered equivalent. The differences appear at the quantum level where general relativity breaks down.
I would finally add that although gravitation is widely considered a fundamental interaction, some authors speculate that it is not fundamental but derived from the other interactions. Such approaches to gravity are usually named "emergent gravity" approaches, and are under active research today.
I would agree that gravity is caused by the interactions of the space-time fabric and the objects in the space-time fabric. The example I would give is that a black hole in the space-time fabric makes a larger gravitational pull than does our star (aka the Sun). As far as some of the current research goes, it could also explain how very strong gravitational fields could be created too – Christopher S. Bullock Dec 15 '12 at 19:19
@ChristopherS.Bullock: As explained in my answer, gravitation can be described without "interactions of the space-time fabric". As Weinberg puts it in his textbook on general relativity the spacetime curvature is only a "geometrical analogy", not a real physical phenomenon. – juanrga Dec 15 '12 at 19:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942240834236145, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/138357-mean-data.html
|
# Thread:
1. ## mean of data.
If you have 7 sets of numbers
$\left (4,4,3,3 \right )=A$
$\left (NA,NA,4,4 \right )=B$
$\left (4,3,4,2 \right )=C$
$\left (4,4,4,5 \right )=D$
$\left (3,3,4,4 \right )=E$
$\left (2,4,3,3 \right )=F$
$\left ( 4,4,4,4 \right )=G$
If we find the mean of each set, then find the mean of all the set means, so that:
(S = total # of sets, M = mean)
$\frac{A+B+...+G}{S}=M$
How would we account for the two missing values in B?
Would it be wrong to only use the two values?
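It depends on what the mean is meant to represent; here is a sketch of the two usual conventions (variable names mine, treating NA as "missing", not zero). Averaging each set over its available values keeps all seven sets equally weighted; pooling every observed value instead makes B count for less, since it contributed only two observations:

```python
from statistics import mean

NA = None
sets = {
    'A': [4, 4, 3, 3], 'B': [NA, NA, 4, 4], 'C': [4, 3, 4, 2],
    'D': [4, 4, 4, 5], 'E': [3, 3, 4, 4], 'F': [2, 4, 3, 3], 'G': [4, 4, 4, 4],
}

# Convention 1: mean of set means, each set averaged over its available values.
set_means = [mean(v for v in vals if v is not NA) for vals in sets.values()]
print(mean(set_means))     # ~ 3.64

# Convention 2: pooled mean of all observed values (B now weighs less).
observed = [v for vals in sets.values() for v in vals if v is not NA]
print(mean(observed))      # ~ 3.62
```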
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8902973532676697, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/inertial-frames+classical-mechanics
|
# Tagged Questions
4answers
117 views
### How to create frame of reference?
Is it possible to create an inertial frame of reference on the Earth? How is it possible?
2answers
157 views
### Foucault pendulum
The equations of motions for a Foucault pendulum are given by: $$\ddot{x} = 2\omega \sin\lambda \dot{y} - \frac{g}{L}x,$$ $$\ddot{y} = -2\omega \sin\lambda \dot{x} - \frac{g}{L}y.$$ What are the ...
1answer
278 views
### The form of Lagrangian for a free particle
I've just registered here, and I'm very glad that finally I have found such a place for questions. I have a small question about Classical Mechanics, the Lagrangian of a free particle. I just read Deriving ...
2answers
1k views
### Deriving the Lagrangian for a free particle
I'm a newbie in physics. Sorry, if the following questions are dumb. I began reading "Mechanics" by Landau and Lifshitz recently and hit a few roadblocks right away. Proving that a free particle ...
2answers
271 views
### What is the inertial frame that explains the Foucault Pendulum?
I know that the Foucault pendulum rotation in relation to Earth is a proof that the object is inertial in relation to the distant stars. But what makes them more important than the Earth? Are they an ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266521334648132, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/45199/thermodynamic-relations-from-gibbs-duhem
|
# Thermodynamic relations from Gibbs-Duhem
Given the Gibbs-Duhem relation $V dp = S dT - N d \mu$, I am having trouble deriving the following identity:
$\left(\frac{\partial N}{\partial \mu}\right)_{V,T} = N \left(\frac{\partial \rho}{\partial p}\right)_T$
The problem is that the variables $N$ and $\rho$ don't appear as infinitesimals in the equation. How can I proceed?
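One route to the identity (a sketch, using the standard sign convention $N\,d\mu = -S\,dT + V\,dp$ and writing $\rho = N/V$): at fixed $T$, Gibbs-Duhem gives $d\mu = (V/N)\,dp = dp/\rho$, so

$$\left(\frac{\partial p}{\partial \mu}\right)_T = \rho.$$

Because $\mu$, $p$ and $\rho$ are intensive, $\rho$ can be viewed as a function of $(p,T)$ or of $(\mu,T)$. At fixed $V$ we have $N = \rho V$, so the chain rule gives

$$\left(\frac{\partial N}{\partial \mu}\right)_{V,T} = V\left(\frac{\partial \rho}{\partial \mu}\right)_{T} = V\left(\frac{\partial \rho}{\partial p}\right)_{T}\left(\frac{\partial p}{\partial \mu}\right)_{T} = V\rho\left(\frac{\partial \rho}{\partial p}\right)_{T} = N\left(\frac{\partial \rho}{\partial p}\right)_{T}.$$

This is how $N$ and $\rho$ enter even though they do not appear as differentials: through $\rho = N/V$ at fixed $V$.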
Start by noting that the density is inversely proportional to volume: ρ = m / V, and also that the mass m equals the substance's molar mass M multiplied by the number of moles N: m = M N. Together we have, ρ = M (N / V) (M will be carried through the rest of the derivation as an arbitrary constant with dimensions kg / mol.) – David H Nov 27 '12 at 21:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209743738174438, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/95790/product-of-two-cuspforms-is-not-a-cuspform/95803
|
## Product of two cuspforms is not a cuspform
Let $f$ and $g$ be two cuspforms on $\Gamma \backslash \mathbb{H}$. They could be Maass cuspforms, or holomorphic modular forms. Let us say that they are holomorphic and also that $\Gamma = \operatorname{SL}_2(\mathbb{Z})$ for simplicity. The product $f \overline g$ is not necessarily a cuspidal function on $\Gamma \backslash \mathbb{H}$ although it decays even faster than $f$ or $g$ as it approaches a cusp. One can see that the function is not cuspidal by taking $f = g$ and noting that, $$\int_0^1 |f|^2(x+iy) dx \neq 0.$$
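(To see why that integral is nonzero: writing the $q$-expansion $f(z) = \sum_{n \geq 1} a_n e^{2\pi i n z}$ and applying Parseval, $$\int_0^1 |f|^2(x+iy)\, dx = \sum_{n \geq 1} |a_n|^2 e^{-4\pi n y},$$ which is strictly positive whenever $f \neq 0$.)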
I guess this is not a question but rather a surprised statement about cuspidality not being synonymous with vanishing at cusps.
It seems like the two statements are equivalent only when the function in question is also an eigenfunction of the Laplacian.
Could you elaborate on this issue? For example, could you find a cuspidal function which does not vanish at the cusps? Of course, the function has to not be an eigenfunction.
## 2 Answers
I don't think it's possible to find a "nice" (say, smooth) function $f \in L_2(\Gamma \backslash \mathbb{H})$ such that $(1) \int_0^{1} f(x+iy) dx = 0$ for all $y > 0$ and $\lim_{y\rightarrow \infty} f(x+iy) \neq 0$. This may be total overkill, but consider the spectral decomposition of such an $f$, namely $$(2) \qquad f(z) = \sum_{j} \langle f, u_j \rangle u_j(z) + \frac{1}{4\pi } \int_{\mathbb{R}} \langle E(\cdot, 1/2 + it), f\rangle E(z, 1/2 + it) dt.$$ By unfolding, the inner product of $f$ with the Eisenstein series $E(z,s)$ is zero by the assumption (1); initially this is easy for the real part of $s$ large but then follows by analytic continuation. By inserting (2) into (1) we see that $\langle f, u_0 \rangle = 0$, that is $f$ is orthogonal to the constant eigenfunction. Now in (2) take $z= x+iy$ with $y$ large. Each term in the sum is very small since all the Maass forms vanish at the cusp, and the projections of $f$ onto the constant eigenfunction and the Eisenstein series are zero.
I think that the proof you have given in fact shows that no such $f$ could exist in $L^2(\Gamma \backslash \mathbb{H})$, not only smooth ones. I also realize that I can find an automorphic function $f \in L^2(\Gamma \backslash \mathbb{H})$ such that $\int_0^1 f(x+iy) dx = 0$ for $y >2$ say, but having $\lim_{y\to \infty}f(x+iy) \neq 0$. That would be obtained by averaging the function $\phi(z) = e^{2\pi i x} \mathbf{1}_{[-1/2,1/2]\times [2,\infty]}(z)$ with respect to the group. $$f(z) = \sum_{\gamma \in \Gamma} \phi(\gamma z).$$ With some more work I think $f$ can be made smooth. – Eren Mehmet Kiral May 2 2012 at 21:49
At the last stage of my proof one has to interchange a limit and an infinite sum, so there needs to be some kind of continuity/smoothness assumption. As a counterexample, you can change $f$ on a set of measure zero, say making it be $1$ on some vertical line. – Matt Young May 3 2012 at 0:41
This sort of issue is significant when we're trying to get a grip on the analysis of automorphic forms, but/and, already non-automorphic situations provide much insight. I can't help but comment that adding "automorphic" in any discussion creates enough cognitive dissonance (at some level) that otherwise-classical examples and counterexamples often get lost.
Yes, one should think in terms of automorphic spectral decompositions, and note that eigen-cuspforms are of rapid decay, and that a not-vanishing-at-infinity but vanishing-constant-term function must necessarily have unpleasantly-behaving spectral decomposition coefficients.
That a (very nicely convergent, at least uniformly pointwise on compacts) sum of rapidly-decreasing functions need not be decreasing at all, etc., (yes, I know this is a somewhat different issue) is illustrated by $\sum_n x^n e^{-x}/n! = 1$. This suggests something about the spectral analysis.
Similarly, more directly, (but, yes, I know, somewhat differently), an $L^2$ function on the real line can have ever-narrower spikes parading out to infinity, so not go to $0$ at infinity. (Of course, if it had any limit at all, it would have to be 0, on the real line... though this is not true for automorphic forms, because of finite volume at infinity.)
In the automorphic case, take $f(x+iy)$ to be $0$ for $y<2$, and for $y\ge 2$ let $f$ be $e^{2\pi ix}h(y)$ with $h(y)=y^{1/3}$. Can smooth this out, too. And can make the lim sup be $+\infty$ by spikes.
In particular, here, note that dropping the eigenfunction condition gives us license to look at the much simpler "tapering cone" that is the image of a high-up Siegel set under the quotient map.
In summary, it is (with hindsight, sure) boringly easy to make a "cuspidal" automorphic form, in $L^2$, that does not go to $0$ at infinity. But, when we see what the possibilities are, they do not disprove other important principles.
Edit: should have said that the lim sup can grow arbitrarily fast by (narrow) spikes.
Edit: That is, a natural argument based on a spectral expansion is incomplete without further information on the rapidly decreasing functions (here, cuspforms), since a sum of rapidly decreasing functions need not be decreasing at all. Of course, some sums of rapidly-decreasing are rapidly decreasing... but if the sups occur further and further out, this need not be so, as in $\sum {2^n\over n!}\,x^ne^{-x} = e^x$.
Although, for example, being in $L^2(\mathbb R)$ does not imply a function goes to $0$ at infinity pointwise, if it is in a Sobolev space (has an $L^2$ derivative) then by the fundamental theorem of calculus it has a bound something like $\sqrt{x}$. To actually have decay requires more.
Sums of cuspforms that are in $L^2$ can easily map to non-$L^2$ "cuspidal" things under $\Delta$, or even under "first derivatives" coming from the Lie algebra acting on the right. This does not necessarily contravene "going to $0$", but it shows that a simple argument fails, as in the sum of $x^n e^{-x}$'s.
In fact, I think Iwaniec' "spectral theory of afms" book has some remarks in it about the sups of cuspforms occurring further and further out, which creates the danger alluded to above. There was a paper of Iwaniec-Sarnak in which an infelicity about something of this sort occurred, remarked upon in Sarnak's letter to Morawetz. The extent to which this enables wild-ish growth at infinity of (smooth?) $L^2$ "cuspidal" things would require computation, at least, and sharp answers may depend on serious unproven things, now that I think about it...
Edit again: yes, some sort of smoothness hypothesis presumably makes a spectral argument work, even with "relatively easy" estimates on sups of an orthonormal basis of cuspforms. An easy sort of smoothness assumption is not merely smoothness, but that $\Delta f$ is in $L^2$, and/or $\Delta^\ell f$ is in $L^2$ for some sufficient $\ell$. A Sobolev-ish condition. Already on the real line the analogous issue is present: smooth functions in $L^2$ without constraints on integrability of derivatives need not decay.
What are the important principles that you say? In the case of vanishing at cusps but not being cuspidal case, the function I had found had good properties in terms of its spectral decomposition. (I know this doesn't violate any theorem, but did violate my intuition about cuspidal functions). – Eren Mehmet Kiral May 4 2012 at 1:41
@Paul Yes there is a potential issue with reversing the limit as $y\rightarrow \infty$, and the sum over $j$. If $f$ is sufficiently smooth then using the fact that the Laplacian is self-adjoint, one can show that the spectral coefficients decay as the reciprocal of some polynomial of the Laplace eigenvalue. The sup norm of a Maass form grows like a fixed polynomial in the Laplace eigenvalue, and also $u_j(x+iy)$ is exponentially small once $y$ is a little larger than the square-root of the Laplace eigenvalue. Maybe I'll go back and edit my answer when I get some time... – Matt Young May 4 2012 at 14:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461215138435364, "perplexity_flag": "head"}
|
http://sbseminar.wordpress.com/2008/03/29/sfpa-the-temperley-lieb-algebra/
|
## SF&PA – the Temperley-Lieb algebra March 29, 2008
Posted by emilypeters in guest post, introductions, planar algebras, small examples, subfactors.
Hi all,
First, I’d like to thank the organizers for inviting me to post on their blog, and apologize for the low tech pictures in what follows.
As Noah mentioned, my name is Emily, I study subfactors and planar algebras, and that’s the back of my head at the top of this page (still). While Noah is taking you through the delights of subfactors sans analysis, I’ll say a few words about planar algebras to set the stage for their later appearance in subfactorland. For now, let’s leave definitions to a future post, and say a little bit about my favorite planar algebra: the Temperley-Lieb algebra.
To get a Temperley-Lieb picture, arrange $n$ points at the bottom of your page, and $n$ points at the top, and connect the points up among themselves in a non-crossing way:
We only consider such pictures up to isotopy — then the number of such pictures is exactly the $n^{th}$ Catalan number (since you can, for instance, read matching parenthesizations as directions for connecting up the $2n$ points). Now, form a vector space $TL_n$ whose basis is Temperley-Lieb pictures on $2n$ points. For instance,
We turn this vector space into an algebra by defining multiplication: The product of two boxes is the picture you get by stacking them:
But what about that loop in the middle? It’s not part of the data of a Temperley-Lieb picture, so we have to throw it out — but let’s remember it was there by multiplying the resulting picture by $\delta$ (If there had been $k$ circles, we’d have multiplied the picture by $\delta^k$).
If you enjoy multiplying Temperley-Lieb pictures, try this fun exercise: show that Temperley-Lieb is multiplicatively generated by elements $e_i$, which consist of $n-2$ through strings and a cup and a cap starting at the $i^{th}$ string:
and satisfy the relations $e_i^2 = \delta e_i$, $e_i e_j = e_j e_i$ if $|i-j|>1$ and $e_i e_{i \pm 1} e_i = e_i$ (hmm, don’t those last two relations sort of remind you of the braid group?)
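If you want to check these relations by computer, here is a minimal Python sketch. It assumes one particular (purely illustrative) encoding: a diagram in $TL_n$ is a list of pairs of the points $0, \dots, 2n-1$, with $0, \dots, n-1$ the bottom row and $n, \dots, 2n-1$ the top row; multiplication glues the middle rows together, traces the strands, and counts the closed loops that become factors of $\delta$:
```
from collections import defaultdict

def tl_mult(a, b, n):
    """Multiply Temperley-Lieb diagrams, a stacked on top of b.

    A diagram on 2n points is a list of pairs, with 0..n-1 the bottom
    row and n..2n-1 the top row.  Returns (k, c): the product equals
    delta**k times the basis diagram c, where k counts the closed loops.
    """
    adj = defaultdict(list)
    for p, q in b:            # b keeps its bottom row; its top becomes the middle
        for x, y in ((p, q), (q, p)):
            adj[('B', x) if x < n else ('M', x - n)].append(
                ('B', y) if y < n else ('M', y - n))
    for p, q in a:            # a keeps its top row; its bottom becomes the middle
        for x, y in ((p, q), (q, p)):
            adj[('M', x) if x < n else ('T', x - n)].append(
                ('M', y) if y < n else ('T', y - n))

    seen, diagram = set(), []
    relabel = lambda v: v[1] if v[0] == 'B' else n + v[1]
    for start in [v for v in adj if v[0] != 'M']:   # follow each open strand
        if start in seen:
            continue
        seen.add(start)
        prev, cur = start, adj[start][0]
        while cur[0] == 'M':                        # walk through the middle row
            seen.add(cur)
            nbrs = list(adj[cur])
            nbrs.remove(prev)
            prev, cur = cur, nbrs[0]
        seen.add(cur)
        diagram.append(tuple(sorted((relabel(start), relabel(cur)))))

    loops = 0                 # untouched middle points lie on closed loops
    for v in list(adj):
        if v[0] == 'M' and v not in seen:
            loops += 1
            stack = [v]
            while stack:
                w = stack.pop()
                if w not in seen:
                    seen.add(w)
                    stack.extend(adj[w])
    return loops, sorted(diagram)

e1 = [(0, 1), (2, 3)]         # the generator e_1 in TL_2: bottom cup, top cap
print(tl_mult(e1, e1, 2))     # (1, [(0, 1), (2, 3)]): e_1 * e_1 = delta * e_1
```
The last line reproduces the relation $e_1^2 = \delta e_1$ in $TL_2$; the other relations can be checked the same way.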
One of the reasons we subfactoralists (subfactorers?) like Temperley-Lieb is that it has a lot of structure to it. For instance, we can define an involution $^*$ on $TL_n$ by horizontal reflection: So, for example:
and we can also define a trace by connecting the top points to the bottom points — the result is some number of loops in a $TL_0$ diagram, ie a power of $\delta$:
We call this a trace because it doesn’t care about the order of multiplication (just slide the bottom picture along the strings until it ends up on top).
This combination of a trace and an involution is pretty powerful, as it lets us define a bilinear form $\left< x, y \right> := \text{tr}(y^* x)$ on $TL_n$. Here’s a hard one for you: For which values of $\delta$ is this form positive definite?
Maybe that’s a good place to stop for now. Coming soon: why is Temperley-Lieb a planar algebra, instead of a just plain algebra?
## Comments»
1. Ben Webster - March 29, 2008
I would opt for “subfactotum/subfactota.”
2. Anonymous - March 30, 2008
Thanks this is a nice post.
3. muz - April 9, 2008
The old-skool drawings look pretty cool. Can we have some more?
4. TQFTs via Planar Algebras « Secret Blogging Seminar - December 6, 2008
[...] This talk is about how you can use Planar algebras planar techniques to construct 3D topological quantum field theories (TQFTs) and is supposed to be introductory. We’ve discussed planar algebras on this blog here and here. [...]
5. hadi - August 16, 2010
Hi, if possible could you please send me a letter or any writing about representations of the Temperley-Lieb algebra by n*n matrices satisfying the relations of this algebra?
Thank you very much
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9090700149536133, "perplexity_flag": "middle"}
|
http://rjlipton.wordpress.com/2012/01/24/ode-to-the-math-monthly/
|
a personal view of the theory of computation
tags: matrix, Monthly, Proofs
Some fun results on matrices
Olga Taussky-Todd was one of the leading experts on all things related to matrix theory and linear algebra in the middle and late 1900s. She was born in the Austro-Hungarian Empire in what is now the Czech Republic, and obtained her doctorate in Vienna in 1930. She attended the Vienna Circle while fellow student Kurt Gödel was proving his greatest results, and recalled that Gödel was very much in demand for help with mathematical problems of all kinds. She left Austria in 1934, worked a year at Bryn Mawr near Philadelphia, then held appointments at the universities of Cambridge and London until after World War II, when she and her husband John Todd emigrated to America.
Today I want to present a couple of simple, but very cool results about matrices.
Taussky-Todd once said in the American Mathematical Monthly—from now on the Monthly:
I did not look for matrix theory. It somehow looked for me.
That is to say, her doctorate was on algebraic number theory, and then she progressed to functional analysis. Heading in the direction of continuous mathematics was the available path. According to these biographical notes, the field of matrix theory did not really exist at the time. The notes hint that matrix theory was too light to be a main subject for graduate education unto itself, so perhaps the Monthly was a needed vehicle to help to launch it.
Matrix and Monthly
One of Taussky-Todd’s great papers is “A recurring theorem in determinants,” which proved a variety of simple, but fundamental, theorems about matrices. It appeared, as did many of her papers, in the Monthly. One of these theorems is a famous non-singularity condition:
Theorem: Let ${A}$ be a complex ${n \times n}$ matrix, and let ${A_{i}}$ stand for the sum of the absolute values of the non-diagonal elements in row ${i}$, namely
$\displaystyle A_{i} = \sum_{j \neq i} |a_{ij}|, \ \ i=1,\dots,n.$
If ${A}$ is diagonally dominant, meaning
$\displaystyle |a_{ii}| > A_{i}, \ \ i=1,\dots,n,$
then
$\displaystyle \det(A) \neq 0.$
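A quick numerical illustration (a NumPy sketch; the matrix is a random complex one made diagonally dominant by inflating its diagonal, and the seed is arbitrary):
```
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # the row sums A_i above
# Force dominance: give each diagonal entry modulus A_i + 1 (random phase):
A[np.diag_indices(6)] = (off + 1.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, 6))
print(np.all(np.abs(np.diag(A)) > off))            # True: |a_ii| > A_i
print(abs(np.linalg.det(A)))                       # nonzero, as the theorem says
```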
I recall as a student “meeting” Taussky-Todd’s work in a book on algebraic number theory. Somehow many of the results in that area could be reconstructed as theorems on matrices, and the resulting proofs sometimes were much more transparent.
Ivars Peterson has a nice discussion of her work here. The same biographical notes referenced above have this revealing passage:
Olga Taussky always wished to ease the way of younger women in mathematics, and was sorry not to have more to work with. She said so, and she showed it in her life. Marjorie Senechal recalls giving a paper at an AMS meeting for the first time in 1962, and feeling quite alone and far from home. Olga turned the whole experience into a pleasant one by coming up to Marjorie, all smiles introducing herself, and saying, “It’s so nice to have another woman here! Welcome to mathematics!”
Let’s look at two simple but, I believe, interesting results about matrices. One is from the Monthly, and perhaps Olga would have appreciated them.
A Matrix Result
Consider the two matrices ${A}$ and ${B}$ over the integers:
$\displaystyle A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \text{ and } B = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}.$
The question is: can a sequence of ${A}$'s and ${B}$'s equal a different sequence of ${A}$'s and ${B}$'s? For example, is:
$\displaystyle ABABBBABBABBAAA = BAAABBAABBBA \ ?$
The answer is ${\bf no}$, and see the next paragraph for the cool proof. One way to think about this is that the two matrices generate the free semigroup. There are pairs of matrices that generate the free group, but the proof that they do that is harder, in my opinion. I once used the fact that the free group is generated by matrices to solve an open problem—see here. What is cool is that the proof is quite unexpected.
$\displaystyle \S$
The key is to look at the action of the matrices on positive vectors: the vector ${v}$ is positive provided
$\displaystyle v = \begin{bmatrix} x \\ y \end{bmatrix}$
where ${x > 0}$ and ${y > 0}$. Note that both ${A}$ and ${B}$ map positive vectors to positive vectors.
Also define ${\mathsf{TOP}}$ to be those vectors whose first coordinate is strictly larger than the second, and define ${\mathsf{BOT}}$ to be those whose second coordinate is strictly larger than the first. Thus,
$\displaystyle \begin{bmatrix} 11 \\ 9 \end{bmatrix} \in \mathsf{TOP} \text{ and } \begin{bmatrix} 4 \\ 23 \end{bmatrix} \in \mathsf{BOT}.$
Let ${S}$ and ${T}$ be sequences of the matrices ${A}$ and ${B}$ whose products are equal: we plan to show that ${S}$ and ${T}$ must be the same sequence. If they both start with the same matrix, then since the matrices are invertible we can cancel it and pass to shorter such sequences. Thus, seeking a contradiction, we can assume that ${S = AS'}$ and ${T=BT'}$ for some ${S'}$ and ${T'}$. Let ${v}$ be any positive vector. Define
$\displaystyle x = S'v \quad\text{ and }\quad y = T'v.$
It follows that both vectors ${x}$ and ${y}$ are positive. But we note that ${Ax}$ is in ${\mathsf{TOP}}$ and ${By}$ is in ${\mathsf{BOT}}$, which is impossible since ${Ax = By}$ by assumption.
This neat argument is due to Reiner Martin answering a question in the Monthly—the question was raised by Christopher Hillar and Lionel Levine.
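For concreteness, here is a minimal NumPy sketch of the argument; the test words and the positive vector ${v}$ are arbitrary choices:
```
import numpy as np

A = np.array([[1, 1], [1, 0]])
B = np.array([[0, 1], [1, 1]])

def product(word):
    """Multiply out a word such as 'ABBA' into its 2x2 integer matrix."""
    M = np.eye(2, dtype=int)
    for c in word:
        M = M @ (A if c == 'A' else B)
    return M

v = np.array([1, 1])                     # any positive vector
for word in ('ABBA', 'BAAB', 'AABB', 'BBAA'):
    x, y = product(word) @ v
    print(word, '->', 'TOP' if x > y else 'BOT')
# Words beginning with A always land in TOP and words beginning with B in
# BOT, so equal products force equal first letters; cancel and induct.
```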
Another Matrix Result
There are many normal forms for matrices of all kinds. I recently ran into the following question, which was quickly solved by Mikael de la Salle. Let ${M}$ be an ${n \times n}$ matrix. Prove that there is a ${\lambda>0}$ and two unitary matrices ${U}$ and ${V}$ so that
$\displaystyle \lambda M = (U + V)/2.$
This says that any matrix, up to scaling, is the average of two unitary matrices. It seems this should be a useful fact, but I have not applied it yet.
$\displaystyle \S$
We can find unitary matrices ${U}$ and ${V}$ and a real diagonal matrix ${D}$ so that
$\displaystyle M = UDV.$
This is the famous Singular Value Decomposition. We can assume that the values on the diagonal are all at most ${1}$ in absolute value, by using ${\lambda}$ to re-scale ${M}$ if needed. The key is that
$\displaystyle D = D^{(1)} + D^{(2)},$
where ${D^{(1)}}$ and ${D^{(2)}}$ are each unitary. This insight is based on the fact that if ${ |r| \le 1}$ for a real ${r}$, then there is a real ${s}$ so that
$\displaystyle r = (z + \bar{z})/2,$
where ${z = r + is}$ has absolute value ${1}$. This follows since there is a real ${s}$ so that ${r^{2} + s^{2} = 1}$. We can now use this term-by-term on the diagonal of ${D}$ to construct the diagonal matrices ${D^{(1)}}$ and ${D^{(2)}}$. Then it follows that
$\displaystyle M = (UD^{(1)}V + UD^{(2)}V)/2.$
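The construction is easy to carry out numerically. Here is a NumPy sketch; the choice of ${\lambda}$ below is one convenient option among many:
```
import numpy as np

def two_unitary_average(M):
    """Return (lam, U1, V1) with lam * M = (U1 + V1) / 2 and U1, V1 unitary."""
    U, d, Vh = np.linalg.svd(M)
    lam = 1.0 / max(d.max(), 1.0)          # rescale so all singular values <= 1
    s = np.sqrt(1.0 - (lam * d) ** 2)      # the real s with r^2 + s^2 = 1
    D1 = np.diag(lam * d + 1j * s)         # z = r + i s, |z| = 1
    D2 = np.diag(lam * d - 1j * s)         # z bar
    return lam, U @ D1 @ Vh, U @ D2 @ Vh

M = np.random.randn(4, 4)
lam, U1, V1 = two_unitary_average(M)
print(np.allclose(lam * M, (U1 + V1) / 2))              # True
print(np.allclose(U1 @ U1.conj().T, np.eye(4)))         # True: U1 is unitary
```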
Open Problems
Which matrices are averages—or sums—of some given fixed number of unitary matrices? With ${\lambda = 1}$, that is. Is there a good way to characterize them?
Do you have your own favorite matrix results that have a simple proof, but may not be well known to all? If so please share them with all of us.
Finally, I would suggest that you read the Monthly regularly, since it is filled with gems.
Like this:
from → History, People, Proofs
17 Comments leave one →
1. January 24, 2012 8:32 am
My favorite matrix result is C. Jordan’s lemma of 1875 on the relationship of two subspaces. Let P and Q be the projectors onto two subspaces, then there exists an orthonormal basis, such that P is diagonal and Q decomposes into the direct sum of simple 2×2 blocks and some more trivial 1×1 blocks. Considering how alternating applications of the two projectors P and Q (or their associated reflections (1-2P) and (1-2Q)) act on some input vector can often be reduced to the simple special case of how they act on a single 2×2 block. Understanding this single lemma explains essentially all quadratic speed-ups in quantum computing, including Grover search and Quantum Walks.
2. Wim van Dam
January 24, 2012 12:53 pm
For those who are as confused as I was: replace
“But we note that Ax is in TOP and CX is in BOT…”
by
“But we note that Ax has to be in TOP and By has to be in BOT because of the specific forms of A and B and the positivity of x and y…”
3. John Sidles
January 24, 2012 12:54 pm
Yet another nice thing about matrices is their association to mathematical problems that are simple and natural, yet undecidable.
4. Ørjan Johansen
January 24, 2012 4:11 pm
I believe a matrix M = UDV is the average of n > 1 unitaries precisely when it has spectral norm ≤ 1.
If M is the average of unitaries, it must have spectral norm ≤ 1 since each unitary does. For the other direction, if M has spectral norm ≤ 1, note that D has the same norm as M, meaning all the diagonal elements have absolute value ≤ 1, and the rest is essentially the same as your proof above.
• Ørjan Johansen
January 26, 2012 6:55 am
In case it wasn’t clear, this was meant to answer the first “open problem” above.
5. January 24, 2012 6:39 pm
The first theorem about non-singularity of diagonally dominant matrices has a useful generalization. If A=(a_{i,j}) is an n-by-n real matrix with a_{i,i}=1 and |a_{i,j}|≤ε for i≠j, then rank(A) is at least n/(2+2nε^2). There’s a really nice paper by Alon describing applications of this fact.
6. anonymous
January 25, 2012 6:52 am
My favourite:
There exists a decomposition
M = S1 S2,
with M, S1, S2 square matrices and S1, S2 both symmetric.
7. M. C.
January 25, 2012 1:44 pm
Correct me if I’m wrong.
I thought the non-singularity theorem (which is beautiful) is fairly easy to prove as follows. Suppose det(A)=0; then there exists a non-trivial solution to Ax=0. Let x=[x_1, ..., x_n]^{t}. Pick among the x_i one with the largest absolute value, say x_k, so that |x_k| >= |x_i| for all 1 <= i <= n and |x_k| > 0. Consider row k of A multiplied by x. By the triangle inequality and diagonal dominance,
|sum_i a_{k,i} x_i| >= |a_{k,k}|*|x_k| - A_k*|x_k| > 0.
Hence, Ax \neq 0. Contradiction.
• rjlipton *
January 25, 2012 3:08 pm
M.C.
It is easy to prove. Some times easy is still a great result.
• slt
February 1, 2012 8:01 pm
Another way is to note that it is an immediate consequence of the Gershgorin disk theorem (http://en.wikipedia.org/wiki/Gershgorin_circle_theorem): if A is diagonally dominant then the origin does not lie in any Gershgorin disk, so zero is not an eigenvalue and hence the determinant is non-zero.
• rjlipton *
February 1, 2012 9:56 pm
slt
Yes this is another nice approach. Thanks for the connection.
8. M. C.
January 26, 2012 10:02 am
Any matrix M=UDV can be written as a sum of unitary matrices.
Let n=2*ceiling(norm(M)/2). Then M/n is a matrix of spectral norm <= 1. According to Ørjan Johansen, M/n=(U'+V')/2, where U' and V' are unitary. Hence, M=n(U'+V')/2, which is a sum of n unitary matrices (n/2 copies of each of U' and V').
Note that the number of matrices in such a sum must be at least ceiling(norm(M)), so the above construction is within 1 of that lower bound.
My question is whether M can be written as a sum of distinct unitary matrices. And the answer is probably no if the number of matrices is required to achieve the lower bound. Consider M=nU, where U is unitary. Following a similar result in the vector space, I suppose it takes more than n distinct unitary matrices to have a sum equal to M.
9. January 26, 2012 3:36 pm
from the today-I-learned dept.
The word matrix or matrices has an etymology related to womb. I first read this meaning in an essay by Hamann that has a very nice description of space and time in terms of a critique on Kant’s pure reason…
late 14c., from O.Fr. matrice, from L. matrix (gen. matricis) “pregnant animal,” in L.L. “womb,” also “source, origin,” from mater (gen. matris) “mother.”
• January 27, 2012 11:44 am
That is wonderful etymology to know: surely matrices have “given birth” to many great and wonderful theorems, and even today they are “pregnant” with many further offspring!
• January 27, 2012 8:16 pm
Actually I considered but rejected making a play on that in the section titled “Matrix and Monthly” above, which has the “it’s nice to have another woman here” quotation.
10. January 27, 2012 11:27 am
Anonymous writes:
My favourite [is] there exists a decomposition ${M = S_1 S_2}$, with ${M, S_1, S_2}$ square matrices and ${S_1, S_2}$ both symmetric.
This theorem is a great favorite of engineers, and moreover, it is (arguably) the sole algebraic theorem to win a Nobel Prize, namely Lars Onsager’s 1968 Nobel Chemistry award, given for Onsager’s 1931 discovery of what is today called “Onsager Reciprocity.”
Concretely, under the following identification:
${M \Leftrightarrow}$ (cotransport and diffusion matrix)
${S_1 \Leftrightarrow}$ (matrix of kinetic coefficients)
${S_2 \Leftrightarrow}$ (Hessian of the entropy function)
the Symmetric Matrix Decomposition Theorem asserts (in effect) that “An entropy-like function always exists such that the matrix of kinetic coefficients is symmetric.”
It is striking that Onsager’s reciprocity principle escapes being a sterile, physics-free mathematical tautology solely by its identification of the above “entropy-like function” with the physical entropy.
The thirty-seven year delay in Onsager’s receiving a Nobel is due largely to controversies among chemists and physicists with respect to the physical content (or the absence of physical content) respecting the reciprocity of the kinetic transport coefficients — controversies that have not entirely abated even today, and that are associated equally to both classical and quantum transport mechanisms.
Thus, taken all-in-all, the Symmetric Matrix Decomposition Theorem ranks among the deepest, most fascinating, most controversial, most colorful, and yet most practically relevant theorems of mathematics, physics, and chemistry … it would make a terrific article for Math Monthly.
11. January 27, 2012 3:27 pm
By the way, regarding the section “Another Matrix Result”, it’s known that any real matrix can be written as a linear combination of four real orthogonal matrices, and it’s conjectured that four is tight. I forget the reference.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 81, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452726244926453, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/176419/tournament-algorithm-for-quartets
|
# Tournament Algorithm for Quartets
I'm currently trying to find an algorithm to place players during a Mahjong tournament.
Here are the requirements :
• Number of players in the tournament : $n$ with $n \equiv 0 \pmod 4$
• Number of tables in the tournament : $n/4$ (4 players per table)
• A tournament has $x$ rounds (determined by the algorithm according to $n$)
• A player must never meet another player twice
The goal is to find how many rounds I can set for $n$ players.
I found some documentation about Swiss-system tournament, and I would like to know if an equivalent exists for quartet instead of pairs.
-
## 2 Answers
Each player meets 3 players each round, and each player has $n-1$ other possible players to meet, so there can't be more than $(n-1)/3$ rounds. Whether it is actually possible to schedule as many as $(n-1)/3$ rounds, I don't know. People who organize bridge tournaments have to know about this kind of thing, and there is a lot of literature on it. I'd search for "bridge movements" and/or for "whist movements". Also, the "social golfer problem" is a good search term, as golfers go out in parties of 4.
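For experimentation, here is a minimal Python sketch: a backtracking search builds each round, but rounds are added greedily, so the result is only a lower bound on the true maximum number of rounds:
```
from itertools import combinations

def next_round(players, met):
    """Backtracking: split `players` into tables of 4 with no repeated pair."""
    if not players:
        return []
    first = min(players)
    rest = players - {first}
    for trio in combinations(sorted(rest), 3):
        table = (first,) + trio
        pairs = set(combinations(table, 2))
        if pairs & met:
            continue
        tail = next_round(rest - set(trio), met)
        if tail is not None:
            return [table] + tail
    return None

def schedule(n):
    """Add rounds greedily until no further round is possible."""
    met, rounds = set(), []
    while True:
        rnd = next_round(set(range(n)), met)
        if rnd is None:
            return rounds
        rounds.append(rnd)
        for t in rnd:
            met |= set(combinations(t, 2))

for n in (8, 12, 16):
    print(n, "players: bound", (n - 1) // 3, "rounds, greedy finds", len(schedule(n)))
```
For $n=8$ and $n=12$ it finds a single round, and indeed no second round exists in those cases: a new table of 4 drawn from at most three earlier tables must repeat a pair, by pigeonhole. So the $(n-1)/3$ bound is far from tight for small $n$.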
-
Well this is a good start. I've already worked on a similar case for bridge movements, but the problem is quite different, in the sense that bridge partners always stay together (teams of 2), so it can be considered a $1/1$ tournament, not a $1/1/1/1$ one. The social golfer problem seems closer to what I'm searching for; I'll have a look. – zessx Jul 29 '12 at 13:16
You could look at the list of bridge movements under individual movements. Also at this page which has a lot of information on how to construct them.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9807953238487244, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=110352
|
Physics Forums
## Have I understood Quaternions correctly???
Hi all,
I have been trying to understand how quaternions work when rotating coordinate axes in 3-D. I think I may have finally succeeded but I would appreciate if someone who really knows how to use them could confirm if I am correct.
I think that the 3 vector coordinates represent a single point in space and so represent a vector coming from the origin to the point.
This vector actually represents an axis which coincides I suspect with the NEW x axis.
Finally, the real component of the quaternion represents the angle through which the coordinate axes is rotated about the NEW x-axis.
The thing which worries me about the way I understand it is that its too simple. Surely of all the dozens of sites I looked at someone would have put in a diagram to express it this way if it were this simple????
Thanks for any help,
H_man
Mentor

One typically uses unit quaternions to represent rotations of coordinate axes in 3-space. Such quaternions are closely related to eigenrotations. An eigenrotation is a rotation about some rotation axis $\hat n$ by some angle $\theta$. The unit quaternion corresponding to such an eigenrotation is $$q = \cos\frac\theta 2 \pm \sin\frac\theta 2 \hat n$$ The reason for the uncertainty in the sign of the "imaginary" part is that the sign differs depending on whether one uses left or right quaternions to represent the rotation.

Your thinking regarding the use of unit quaternions for representing the rotation of coordinate axes in $\mathbb{R}^3$ is thus partially correct. The real part of a unit quaternion does indeed represent the angle through which the axes are rotated, but not directly. Instead, $\operatorname{Re} q = \cos\frac\theta 2$. However, the vector or imaginary part of a unit quaternion used to represent a rotation does not represent the new X axis. The imaginary part instead represents the rotation axis, but scaled such that the resulting quaternion is indeed a unit quaternion.

Unit quaternions are but a subset of the quaternions. It is helpful to understand the quaternions in their entirety. One is otherwise left with a bunch of plug-and-chug formulae without this understanding.
Thanks for the explanation! I think based on what you have said that I now understand it. But could you confirm that what I now picture in my mind is correct?

The unit vector (represented by the complex components) is a rotation axis. Thus all 3 of the coordinate axes are transformed simultaneously by being rotated about this axis, and they are rotated by an amount $\theta$.

I know it may seem that I am stating the obvious, but I want to make sure that I do understand it correctly. Cheers.
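To make this concrete, here is a small Python sketch of the axis-angle-to-quaternion map and the active rotation $v \mapsto q v q^{*}$ of a vector; rotating the coordinate axes instead (a passive rotation) uses the conjugate quaternion:
```
import numpy as np

def quat_from_axis_angle(axis, theta):
    """Unit quaternion (w, x, y, z) for rotation by theta about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

def quat_mult(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, v1 = a[0], a[1:]
    w2, v2 = b[0], b[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def rotate(v, q):
    """Active rotation of vector v: take the vector part of q v q*."""
    qc = np.concatenate(([q[0]], -q[1:]))          # conjugate of q
    return quat_mult(quat_mult(q, np.concatenate(([0.0], v))), qc)[1:]

# Rotating the x-axis by 90 degrees about the z-axis gives the y-axis:
q = quat_from_axis_angle([0, 0, 1], np.pi / 2)
print(rotate(np.array([1.0, 0.0, 0.0]), q))        # ~ [0, 1, 0]
```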
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335187673568726, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/2284/how-to-value-a-floor-when-a-loan-is-callable?answertab=oldest
|
# How to value a floor when a loan is callable?
Certain bank loans pay a spread above a floating-rate interest rate (typically LIBOR) subject to a floor. I would like to find the value of this floor to the investor. Assume for this example that the loan does not have any default (credit) risk.
As a first pass, the value of the floor could be approximated using the Black model to price a series of floorlets maturing on the loan's payment dates. However, the borrower has a right to refinance (call) the loan (suppose it is callable at par), subject to a refinancing cost. If interest rates decline and the floor is in the money, the borrower is more likely to call the loan. Thus the floor (along with the rest of the loan) is more likely to go away precisely when the investor values it more.
How can this floor be valued?
-
Have you considered a binomial tree (or a richer monte carlo simulation) where each node is the probability of the interest rate rising or falling? – Quant Guy Oct 30 '11 at 17:31
@QuantGuy I have, and I'm not sure how the binomial tree helps me here (there are at least two sources of variation), whereas the Monte Carlo is a possibility but I'm looking for a more reduced form solution, or at least some advice on what others have done for this kind of problem. – Tal Fishman Oct 31 '11 at 2:43
@Tal: Hi, I think it would help if you could write the cashflows explicitly. First you could write them without the call; then, by backward induction, setting the callability option at the last fixing and adding callable dates one at a time, you should be able to obtain a solution by the dynamic programming principle. Best regards. – TheBridge Oct 31 '11 at 16:32
## 1 Answer
As with most derivatives that have early exercise, you are going to want to price this using a grid scheme. I have priced callable loans with floors using the Generalized Vasicek model at my old hedge fund, and it is fairly easy to handle. As a matter of fact my students are doing that very problem as homework this week, and my reference implementation using explicit finite differences is 15 lines of Python/Numpy. (Sorry, guys, I am not going to post it here).
Allow me to make the following suggestions:
1. Do not ignore credit spread. Instead, consider modeling (credit spread + interest rate) as your basic Vasicek "short rate" variable $r$. Credit spread provides about half the rate volatility in your typical security so ignoring it severely mis-estimates option value.
2. Just do an explicit FD scheme unless you really need speed. It is actually easier than making a tree. If speed becomes a problem go to Crank-Nicolson.
3. If you are really only interested in the incremental value of the floor to the bondholder, then you can make a darn good approximation even if you ignore the term structure of interest rates. That lets you revert to the straight Vasicek model which is super-simple to deal with.
4. Use Neumann boundary conditions
As a reminder, to construct the Vasicek finite differencing scheme, you simply finite-difference the PDE in $\tau=T-t$
$$\frac{\partial P}{\partial\tau} = \frac12 \sigma(\tau)^2 \frac{\partial^2 P}{\partial r^2} + \kappa(\theta(\tau)-r) \frac{\partial P}{\partial r} - rP$$ and then apply your exercise conditions at each timestep. If you are willing to ignore term structures, then $\sigma$ and $\theta$ become constants. You can estimate the volatility historically, and fit the $\theta$ to market rates using the best fit to the risky zero rate curve and the expectation formula $$\widehat{E}\left[ \int_t^T r_s ds \right] = \frac{1-e^{-\kappa \tau}}{\kappa} r_t + \left( \tau-\frac{1-e^{-\kappa \tau}}{\kappa} \right) \theta$$
By Neumann boundary conditions, I mean essentially assuming that at the upper and lower short-rate limits, you should pretend that the second derivative is zero. This is equivalent to setting $\sigma=0$ but only at those limits.
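Since the reference implementation stays unposted, here is an independent bare-bones sketch of such an explicit scheme in Python/NumPy, for a continuously-callable loan paying $\max(r, \text{floor}) + \text{spread}$ on par. Every parameter value is an illustrative assumption:
```
import numpy as np

# Explicit finite differences for a callable floating-rate loan with a
# coupon floor under a one-factor Vasicek short rate.  All numbers below
# are illustrative assumptions, not calibrated parameters.
kappa, theta, sigma = 0.5, 0.04, 0.02     # Vasicek drift/vol of r
floor, spread = 0.03, 0.01                # coupon rate = max(r, floor) + spread
T, par = 5.0, 100.0
call_price = 102.0                        # par plus a stylized refinancing cost

r = np.linspace(-0.10, 0.30, 201)         # short-rate grid
dr = r[1] - r[0]
dt = 0.5 * dr**2 / sigma**2               # explicit-scheme stability bound
coupon = (np.maximum(r, floor) + spread) * par

P = np.full_like(r, par)                  # terminal condition: redemption at par
for _ in range(int(T / dt)):
    Prr = np.zeros_like(P)                # Neumann: pretend P_rr = 0 at the ends
    Prr[1:-1] = (P[2:] - 2 * P[1:-1] + P[:-2]) / dr**2
    Pr = np.gradient(P, dr)
    P = P + dt * (0.5 * sigma**2 * Prr + kappa * (theta - r) * Pr - r * P + coupon)
    P = np.minimum(P, call_price)         # borrower refinances when worth it

for rate in (0.02, 0.05, 0.08):
    print("value at r = %.0f%%: %.2f" % (100 * rate, np.interp(rate, r, P)))
```
Running the same script with the floor removed (say `floor = -1.0`) and differencing the two values isolates the floor's worth to the investor, callability included.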
-
4
Brian, let them post the solution for extra credit. – Ryogi Oct 31 '11 at 19:13
Thanks for the suggestion. I don't get how I can get away with just one source of stochasticity (credit spread + interest rate) when the strike price of the floorlets is in terms of the interest rate, and the callability of the loan itself depends only on the (floor-adjusted) spread (but not directly on interest rates). – Tal Fishman Nov 2 '11 at 0:07
Well, if you assume a constant credit spread of 0, then I suppose you are in the actual situation described in your question. You are quite right though...I forgot this is a floater. The interest paid is only on the rate stochastic variable so my suggestion to model the combined variable does not work. Naturally you could do a 2-d grid but that is much more involved, so maybe you want to stick with just modeling $r$ or just modeling $h$, and maybe goose the vol a bit. In any case, a grid is the right way to handle this problem. – Brian B Nov 3 '11 at 19:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398934245109558, "perplexity_flag": "middle"}
|
http://cotpi.com/p/47/
|
Sunday, April 15, 2012
## Extra coin
Akio has one more coin than Bansi. They throw all of their coins and count the number of heads. If all the coins are fair, what is the probability that Akio obtains more heads than Bansi?
[SOLVED]
### 3 comments
#### John Grint solved this puzzle:
The probability is $$\frac{1}{2}$$.
Let $$H$$ be the event that Akio obtains more heads than Bansi, and let $$T$$ be the event that Akio obtains more tails than Bansi.
The four possible combinations of outcomes are $$(H \text{ and } T)$$, $$(H \text{ and } \lnot T)$$, $$(\lnot H \text{ and } T)$$ and $$(\lnot H \text{ and } \lnot T)$$. But at least one of the events $$H$$ or $$T$$ must occur since Akio has more coins than Bansi, hence $$P(\lnot H \text{ and } \lnot T) = 0$$. We cannot have $$(H \text{ and } T)$$ since this would require Akio to have at least 2 more coins than Bansi, hence $$P(H \text{ and } T) = 0$$. Therefore the only two possible outcomes are $$(H \text{ and } \lnot T)$$ or $$(\lnot H \text{ and } T)$$, i.e. the events $$H$$ and $$T$$ are mutually exclusive and precisely one of them must occur.
Since all the coins are fair, the probabilities of the two events $$H$$ and $$T$$ are equal, by symmetry. Therefore $$P(H) = P(T)$$, and we have $$P(H) + P(T) = 1$$.
Therefore $$P(H) = P(T) = \frac{1}{2}$$, i.e. the probability that Akio obtains more heads than Bansi is $$\frac{1}{2}$$.
#### Ed Murphy solved this puzzle:
Label Akio's coins $$A_1, A_2, \dots, A_n, A_{n + 1}$$. Label Bansi's coins $$B_1, B_2, \dots, B_n$$.
For all the coins except $$A_{n + 1}$$, there are $$2^{2n}$$ ways for them to land, all equally probable, and these can be divided into three sets:
1. Akio has more heads than Bansi among those coins.
2. Akio has the same number of heads as Bansi among those coins.
3. Akio has fewer heads than Bansi among those coins.
Each way in (1) is the mirror image of a way in (3), so those sets have the same size.
For each way in (1), Akio has more heads than Bansi overall, regardless of how coin $$A_{n+1}$$ lands.
For each way in (2), Akio has a $$\frac{1}{2}$$ probability of having more heads than Bansi overall, depending on how coin $$A_{n+1}$$ lands.
For each way in (3), Akio does not have more heads than Bansi overall, regardless of how coin $$A_{n+1}$$ lands. At most, he might have the same number of heads as Bansi overall.
Let $$x = \frac{m}{2^{2n}}$$ where $$m$$ is the number of ways in set (1). Then the overall probability of Akio obtaining more heads than Bansi is $$1 \cdot x + \frac{1 - 2x}{2} + 0 \cdot x = \frac{1}{2}$$.
#### Christian Bau solved this puzzle:
Akio throws $$n + 1$$ coins, Bansi throws $$n$$ coins. We put Akio's last coin aside, comparing Akio's $$n$$ coins with Bansi's $$n$$ coins. If the probability that Akio has more heads is $$p$$, then the probability that Bansi has more heads is also $$p$$. The probability that both have the same number of heads is $$1 - 2p$$.
Looking at Akio's $$n + 1$$ coins and Bansi's $$n$$ coins, Akio has more heads if he either had more heads with the first $$n$$ coins, or if he had the same number of heads as Bansi's with the first $$n$$ coins and the $$(n + 1)$$th coin is heads. The probability for this is $$p + \frac{1 - 2p}{2} = p + (0.5 - p) = 0.5$$.
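A Monte Carlo sanity check of these solutions (a minimal Python sketch; the choice of $$n$$ is arbitrary):
```
import random

def trial(n):
    """One game: Akio throws n + 1 fair coins, Bansi throws n."""
    akio = sum(random.random() < 0.5 for _ in range(n + 1))
    bansi = sum(random.random() < 0.5 for _ in range(n))
    return akio > bansi

n, runs = 7, 200000
print(sum(trial(n) for _ in range(runs)) / runs)   # ~ 0.5 for any n
```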
### Credit
This puzzle is taken from:
• Hobson, Nick. "Solution to puzzle 18: One extra coin." Nick's Mathematical Miscellany. 9 June 2004. 21 Apr. 2012 <http://www.qbyte.org/puzzles/p018s.html>.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 46, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482890963554382, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-statistics/195243-hypothesis-testing-two-proportions-print.html
|
# Hypothesis Testing for Two Proportions
• January 13th 2012, 02:10 PM
Ajhah
Hypothesis Testing for Two Proportions
I have a problem I am doing for hw for my Data Analysis II course, which involves the use of Minitab and need help. I solved it, but need to know if I had done it correctly.
Problem: A random sample of 78 women ages 21-29 in Denver showed that 23 have a college degree. Another random sample of 73 men in Denver in the same age group showed that 20 have a college degree. Based on information from Educational Attainment in the United States, Bureau of the Census, does this indicate that the proportion of Denver women ages 21-29 with college degrees is greater than the proportion of Denver men in the same age group?
My Minitab output:
Test and CI for Two Proportions
Sample X N Sample p
1 23 78 0.294872
2 20 73 0.273973
Difference = p (1) - p (2)
Estimate for difference: 0.0208992
95% lower bound for difference: -0.0998659
Test for difference = 0 (vs > 0): Z = 0.28 P-Value = 0.388
My Answer:
Decision: 0.388 > 0.05 , so Fail to reject the null hypothesis
Conclusion: There exists sufficient evidence at the 5% level of significance that the true proportion of Denver women ages 21-29 with college degrees is more than the true proportion of Denver men ages 21-29 with college degrees.
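For anyone without Minitab, the pooled two-proportion z-test behind this output is easy to reproduce by hand; a minimal Python sketch (assuming scipy is available):
```
from math import sqrt
from scipy.stats import norm

x1, n1 = 23, 78                    # women with college degrees
x2, n2 = 20, 73                    # men with college degrees
p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 = p2
z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
print(round(z, 2), round(norm.sf(z), 3))   # 0.28 0.388, matching Minitab
```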
• January 13th 2012, 02:31 PM
pickslides
Re: Hypothesis Testing for Two Proportions
Decision is correct, fail to reject $H_0$ and therefore conclude there is no evidence to suggest the true proportion of women with college degrees is greater than the true proportion of men with college degrees.
• January 14th 2012, 01:52 PM
Ajhah
Re: Hypothesis Testing for Two Proportions
Sorry, I meant to type insufficient evidence. My bad...thanks for the help!
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9128087162971497, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/discrete-math/207520-continuity.html
|
Thread:
1. Continuity
Suppose f,g:R->R are continuous functions such that f(r)=g(r) for all r in Q, that is, f and g are equal on the rational numbers. Prove that f(x)=g(x) for all x in R.
Not sure what to do. Any guidance would be appreciated.
2. Re: Continuity
Hey lovesmath.
Continuity implies that the limit at each point is equal to the function value evaluated at that point.
You might want to show that if the limit exists then the epsilon delta definition of continuity makes sure that the function equals the definition at all numbers.
There is a result relating the rationals to the construction of the real numbers: the rationals are dense in the reals, i.e. between any two reals there is a rational. Because of this density, every real number is the limit of a sequence of rationals, so the limits of f along such sequences must have the right properties.
One can see the density explicitly: if you have a rational number r, then no matter how you choose a number q with q > r, there exists a smaller number q* with r < q* < q.
Then, using this, you can apply the continuity definitions to f and g and conclude that if both are continuous, their values agree at every real point.
I forget the whole delta epsilon thingy formulation and I'm not a pure mathematician, but hopefully these give you ideas to bounce off.
3. Re: Continuity
Originally Posted by lovesmath
Suppose f,g:R->R are continuous functions such that f(r)=f(r) for all r in Q, that is, f and g are equal on the rational numbers. Prove that f(x)=g(x) for all x in R.
Suppose that $f(a)\ne g(a)$ for some real number $a$.
There is a sequence of rational numbers $r_n$ such that $(r_n)\to a$.
But $f(r_n)=g(r_n)$ for every $n$, and by continuity $f(r_n)\to f(a)$ while $g(r_n)\to g(a)$; since the two sequences are identical, $f(a)=g(a)$. That is a contradiction.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9263656139373779, "perplexity_flag": "head"}
|
http://statistics.ats.ucla.edu/stat/r/dae/nbreg.htm
|
# R Data Analysis Examples: Negative Binomial Regression
Negative binomial regression is for modeling count variables, usually for over-dispersed count outcome variables.
This page uses the following packages. Make sure that you can load them before trying to run the examples on this page. If you do not have a package installed, run: `install.packages("packagename")`, or if you see the version is out of date, run: `update.packages()`.
```require(foreign)
require(ggplot2)
require(MASS)
```
```Version info: Code for this page was tested in R Under development (unstable) (2013-01-06 r61571)
On: 2013-01-22
With: MASS 7.3-22; ggplot2 0.9.3; foreign 0.8-52; knitr 1.0.5
```
Please note: The purpose of this page is to show how to use various data analysis commands. It does not cover all aspects of the research process which researchers are expected to do. In particular, it does not cover data cleaning and checking, verification of assumptions, model diagnostics or potential follow-up analyses.
## Examples of negative binomial regression
Example 1. School administrators study the attendance behavior of high school juniors at two schools. Predictors of the number of days of absence include the type of program in which the student is enrolled and a standardized test in math.
Example 2. A health-related researcher is studying the number of hospital visits in past 12 months by senior citizens in a community based on the characteristics of the individuals and the types of health plans under which each one is covered.
## Description of the data
Let's pursue Example 1 from above.
We have attendance data on 314 high school juniors from two urban high schools in the file `nb_data`. The response variable of interest is days absent, `daysabs`. The variable `math` gives the standardized math score for each student. The variable `prog` is a three-level nominal variable indicating the type of instructional program in which the student is enrolled.
Let's look at the data. It is always a good idea to start with descriptive statistics and plots.
```dat <- read.dta("http://www.ats.ucla.edu/stat/stata/dae/nb_data.dta")
dat <- within(dat, {
prog <- factor(prog, levels = 1:3, labels = c("General", "Academic", "Vocational"))
id <- factor(id)
})
summary(dat)
```
```## id gender math daysabs
## 1001 : 1 female:160 Min. : 1.0 Min. : 0.00
## 1002 : 1 male :154 1st Qu.:28.0 1st Qu.: 1.00
## 1003 : 1 Median :48.0 Median : 4.00
## 1004 : 1 Mean :48.3 Mean : 5.96
## 1005 : 1 3rd Qu.:70.0 3rd Qu.: 8.00
## 1006 : 1 Max. :99.0 Max. :35.00
## (Other):308
## prog
## General : 40
## Academic :167
## Vocational:107
##
##
##
##
```
```ggplot(dat, aes(daysabs, fill = prog)) + geom_histogram(binwidth = 1) + facet_grid(prog ~
., margins = TRUE, scales = "free")
```
Each variable has 314 valid observations and their distributions seem quite reasonable. The unconditional mean of our outcome variable is much lower than its variance.
Let's continue with our description of the variables in this dataset. The table below shows the average numbers of days absent by program type and seems to suggest that program type is a good candidate for predicting the number of days absent, our outcome variable, because the mean value of the outcome appears to vary by `prog`. The variances within each level of `prog` are higher than the means within each level. These are the conditional means and variances. These differences suggest that over-dispersion is present and that a Negative Binomial model would be appropriate.
```with(dat, tapply(daysabs, prog, function(x) {
sprintf("M (SD) = %1.2f (%1.2f)", mean(x), sd(x))
}))
```
```## General Academic Vocational
## "M (SD) = 10.65 (8.20)" "M (SD) = 6.93 (7.45)" "M (SD) = 2.67 (3.73)"
```
## Analysis methods you might consider
Below is a list of some analysis methods you may have encountered. Some of the methods listed are quite reasonable, while others have either fallen out of favor or have limitations.
• Negative binomial regression -Negative binomial regression can be used for over-dispersed count data, that is when the conditional variance exceeds the conditional mean. It can be considered as a generalization of Poisson regression since it has the same mean structure as Poisson regression and it has an extra parameter to model the over-dispersion. If the conditional distribution of the outcome variable is over-dispersed, the confidence intervals for the Negative binomial regression are likely to be narrower as compared to those from a Poisson regression model.
• Poisson regression - Poisson regression is often used for modeling count data. Poisson regression has a number of extensions useful for count models.
• Zero-inflated regression model - Zero-inflated models attempt to account for excess zeros. In other words, two kinds of zeros are thought to exist in the data, "true zeros" and "excess zeros". Zero-inflated models estimate two equations simultaneously, one for the count model and one for the excess zeros.
• OLS regression - Count outcome variables are sometimes log-transformed and analyzed using OLS regression. Many issues arise with this approach, including loss of data due to undefined values generated by taking the log of zero (which is undefined), as well as the lack of capacity to model the dispersion.
## Negative binomial regression analysis
Below we use the `glm.nb` function from the `MASS` package to estimate a negative binomial regression.
```summary(m1 <- glm.nb(daysabs ~ math + prog, data = dat))
```
```##
## Call:
## glm.nb(formula = daysabs ~ math + prog, data = dat, init.theta = 1.032713156,
## link = log)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.155 -1.019 -0.369 0.229 2.527
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 2.61527 0.19746 13.24 < 2e-16 ***
## math -0.00599 0.00251 -2.39 0.017 *
## progAcademic -0.44076 0.18261 -2.41 0.016 *
## progVocational -1.27865 0.20072 -6.37 1.9e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for Negative Binomial(1.033) family taken to be 1)
##
## Null deviance: 427.54 on 313 degrees of freedom
## Residual deviance: 358.52 on 310 degrees of freedom
## AIC: 1741
##
## Number of Fisher Scoring iterations: 1
##
##
## Theta: 1.033
## Std. Err.: 0.106
##
## 2 x log-likelihood: -1731.258
```
• R first displays the call and the deviance residuals. Next, we see the regression coefficients for each of the variables, along with standard errors, z-scores, and p-values. The variable `math` has a coefficient of -0.006, which is statistically significant. This means that for each one-unit increase in `math`, the expected log count of the number of days absent decreases by 0.006. The indicator variable shown as `progAcademic` is the expected difference in log count between group 2 and the reference group (`prog`=1). The expected log count for level 2 of `prog` is 0.44 lower than the expected log count for level 1. The indicator variable for `progVocational` is the expected difference in log count between group 3 and the reference group. The expected log count for level 3 of `prog` is 1.28 lower than the expected log count for level 1. To determine if `prog` itself, overall, is statistically significant, we can compare a model with and without `prog`. The reason it is important to fit separate models is that unless we do, the overdispersion parameter is held constant.
```m2 <- update(m1, . ~ . - prog)
anova(m1, m2)
```
```## Likelihood ratio tests of Negative Binomial Models
##
## Response: daysabs
## Model theta Resid. df 2 x log-lik. Test df LR stat.
## 1 math 0.8559 312 -1776
## 2 math + prog 1.0327 310 -1731 1 vs 2 2 45.05
## Pr(Chi)
## 1
## 2 1.652e-10
```
• The two degree-of-freedom chi-square test indicates that `prog` is a statistically significant predictor of `daysabs`.
• The null deviance is calculated from an intercept-only model with 313 degrees of freedom. Then we see the residual deviance, the deviance from the full model. We are also shown the AIC and 2*log likelihood.
• The theta parameter shown is the dispersion parameter. Note that R parameterizes this differently from SAS, Stata, and SPSS. The R parameter (theta) is equal to the inverse of the dispersion parameter (alpha) estimated in these other software packages. Thus, the theta value of 1.033 seen here is equivalent to the 0.968 value seen in the Stata Negative Binomial Data Analysis Example because 1/0.968 = 1.033.
## Checking model assumption
As we mentioned earlier, negative binomial models assume the conditional means are not equal to the conditional variances. This inequality is captured by estimating a dispersion parameter (not shown in the output) that is held constant in a Poisson model. Thus, the Poisson model is actually nested in the negative binomial model. We can then use a likelihood ratio test to compare these two and test this model assumption. To do this, we will run our model as a Poisson.
```m3 <- glm(daysabs ~ math + prog, family = "poisson", data = dat)
pchisq(2 * (logLik(m1) - logLik(m3)), df = 1, lower.tail = FALSE)
```
```## 'log Lik.' 2.157e-203 (df=5)
```
In this example the associated chi-squared value is 926.03 with one degree of freedom. This strongly suggests the negative binomial model, estimating the dispersion parameter, is more appropriate than the Poisson model.
We can get the confidence intervals for the coefficients by profiling the likelihood function.
```(est <- cbind(Estimate = coef(m1), confint(m1)))
```
```## Waiting for profiling to be done...
```
```## Estimate 2.5 % 97.5 %
## (Intercept) 2.615265 2.2421 3.012936
## math -0.005993 -0.0109 -0.001067
## progAcademic -0.440760 -0.8101 -0.092643
## progVocational -1.278651 -1.6835 -0.890078
```
We might be interested in looking at incident rate ratios rather than coefficients. To do this, we can exponentiate our model coefficients. The same applies to the confidence intervals.
```exp(est)
```
```## Estimate 2.5 % 97.5 %
## (Intercept) 13.6708 9.4127 20.3470
## math 0.9940 0.9892 0.9989
## progAcademic 0.6435 0.4448 0.9115
## progVocational 0.2784 0.1857 0.4106
```
The output above indicates that the incident rate for `prog` = 2 is 0.64 times the incident rate for the reference group (`prog` = 1). Likewise, the incident rate for `prog` = 3 is 0.28 times the incident rate for the reference group holding the other variables constant. The percent change in the incident rate of `daysabs` is a 1% decrease for every unit increase in `math`.
The form of the model equation for negative binomial regression is the same as that for Poisson regression. The log of the outcome is predicted with a linear combination of the predictors:
$ln(\widehat{daysabs_i}) = Intercept + b_1(prog_i = 2) + b_2(prog_i = 3) + b_3math_i$ $\therefore$ $\widehat{daysabs_i} = e^{Intercept + b_1(prog_i = 2) + b_2(prog_i = 3) + b_3math_i} = e^{Intercept}e^{b_1(prog_i = 2)}e^{b_2(prog_i = 3)}e^{b_3math_i}$
The coefficients have an additive effect in the $$ln(y)$$ scale and the IRRs have a multiplicative effect in the y scale. The dispersion parameter in negative binomial regression does not affect the expected counts, but it does affect the estimated variance of the expected counts. More details can be found in Modern Applied Statistics with S by W.N. Venables and B.D. Ripley (the book companion of the `MASS` package).
For additional information on the various metrics in which the results can be presented, and the interpretation of such, please see Regression Models for Categorical Dependent Variables Using Stata, Second Edition by J. Scott Long and Jeremy Freese (2006).
## Predicted values
For assistance in further understanding the model, we can look at predicted counts for various levels of our predictors. Below we create new datasets with values of `math` and `prog` and then use the `predict` command to calculate the predicted number of events.
First, we can look at predicted counts for each value of `prog` while holding `math` at its mean. To do this, we create a new dataset with the combinations of `prog` and `math` for which we would like to find predicted values, then use the `predict` command.
```newdata1 <- data.frame(math = mean(dat$math), prog = factor(1:3, levels = 1:3,
labels = levels(dat$prog)))
newdata1$phat <- predict(m1, newdata1, type = "response")
newdata1
```
```## math prog phat
## 1 48.27 General 10.237
## 2 48.27 Academic 6.588
## 3 48.27 Vocational 2.850
```
In the output above, we see that the predicted number of events (e.g., days absent) for a general program is about 10.24, holding `math` at its mean. The predicted number of events for an academic program is lower at 6.59, and the predicted number of events for a vocational program is about 2.85.
Below we will obtain the mean predicted number of events for values of `math` across its entire range for each level of `prog` and graph these.
```newdata2 <- data.frame(
math = rep(seq(from = min(dat$math), to = max(dat$math), length.out = 100), 3),
prog = factor(rep(1:3, each = 100), levels = 1:3, labels =
levels(dat$prog)))
newdata2 <- cbind(newdata2, predict(m1, newdata2, type = "link", se.fit=TRUE))
newdata2 <- within(newdata2, {
DaysAbsent <- exp(fit)
LL <- exp(fit - 1.96 * se.fit)
UL <- exp(fit + 1.96 * se.fit)
})
ggplot(newdata2, aes(math, DaysAbsent)) +
geom_ribbon(aes(ymin = LL, ymax = UL, fill = prog), alpha = .25) +
geom_line(aes(colour = prog), size = 2) +
labs(x = "Math Score", y = "Predicted Days Absent")
```
The graph shows the expected count across the range of math scores, for each type of program along with 95 percent confidence intervals. Note that the lines are not straight because this is a log linear model, and what is plotted are the expected values, not the log of the expected values.
## Things to consider
• It is not recommended that negative binomial models be applied to small samples.
• One common cause of over-dispersion is excess zeros by an additional data generating process. In this situation, zero-inflated model should be considered.
• If the data generating process does not allow for any 0s (such as the number of days spent in the hospital), then a zero-truncated model may be more appropriate.
• Count data often have an exposure variable, which indicates the number of times the event could have happened. This variable should be incorporated into your negative binomial regression model with the use of the `offset` option. See the `glm` documentation for details.
• The outcome variable in a negative binomial regression cannot have negative numbers.
• You will need to use the `m1$resid` command to obtain the residuals from our model to check other assumptions of the negative binomial model (see Cameron and Trivedi (1998) and Dupont (2002) for more information).
## References
• Long, J. S. 1997. Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage Publications.
• Long, J. S. and Freese, J. 2006. Regression Models for Categorical Dependent Variables Using Stata, Second Edition. College Station, TX: Stata Press.
• Cameron, A. C. and Trivedi, P. K. 2009. Microeconometrics Using Stata. College Station, TX: Stata Press.
• Cameron, A. C. and Trivedi, P. K. 1998. Regression Analysis of Count Data. New York: Cambridge Press.
• Cameron, A. C. Advances in Count Data Regression Talk for the Applied Statistics Workshop, March 28, 2009. http://cameron.econ.ucdavis.edu/racd/count.html .
• Dupont, W. D. 2002. Statistical Modeling for Biomedical Researchers: A Simple Introduction to the Analysis of Complex Data. New York: Cambridge Press.
• Venables, W.N. and Ripley, B.D. 2002. Modern Applied Statistics with S, Fourth Edition. New York: Springer.
## See also
• R online documentation: glm
The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.
|
http://mathoverflow.net/questions/17617?sort=oldest
|
## Why are the sporadic simple groups HUGE?
I'm merely a grad student right now, but I don't think an exploration of the sporadic groups is standard fare for graduate algebra, so I'd like to ask the experts on MO. I did a little reading on them and would like some intuition about some things.
For example, the order of the monster group is over $8\times 10^{53}$, yet it is simple, so it has no nontrivial proper normal subgroups... how? What is so special about the prime factorization of its order? Why is it $2^{46}$ and not $2^{47}$? Why is it not possible to extend it to obtain that additional power of 2 without creating a normal subgroup? Some of the properties seem really arbitrary, and yet they must be very fundamental to the algebra of groups.
I don't think I'm the only person curious about this, but I hesitated posting due to my relative inexperience.
-
One might even ask, why are there sporadic simple groups? If you are looking for intuition, maybe you should add the [soft-question] tag and make it wiki? – Sonia Balagopalan Mar 9 2010 at 16:34
I feel like being perverse and remarking that all but finitely many simple groups are larger than any sporadic group, so the sporadic groups are actually comparatively small... I've a feeling the answer is going to be a disappointing something along the lines of how the smaller groups don't have enough room in which to be sporadic. – some guy on the street Mar 9 2010 at 16:51
the other thing that occurs to me is that the sporadics I can sort-of remember are built as automorphism groups of certain combinatorial structures; now, those structures themselves look like generalizing in a few ways, but doing so could lead to several possibilities: e.g. their symmetries might split too much, so that the simple subgroups are too small; the desired generalization might not always *exist*; or the symmetry groups of the new structures may for arcane reasons be enumerated already among the classical groups. – some guy on the street Mar 9 2010 at 17:00
As a general comment, there are all sorts of "low-dimensional" obstructions restricting the isomorphism types of groups of small order, such as those coming from the Sylow theorems, and these obstructions become vanishingly less likely as you look at groups of large order. – Qiaochu Yuan Mar 9 2010 at 17:22
## 5 Answers
The question seems to be made of several smaller questions, so I'm afraid my answer may not seem entirely coherent.
I have to agree with the other posters who say that the sporadic simple groups are not really so large. For example, we humans can write down the full decimal expansions of their orders, where a priori one might think we'd have to resort to crude upper bounds using highly recursive functions. (In contrast, one could say that almost all of the groups in the infinite families are too large for their orders to have a computable description that fits in the universe.) Furthermore, as of 2002 we can load matrix representatives of elements into a computer, even for the monster. Noah pointed out that the monster has a smaller order than `$A_{50}$`, but I think a more apt comparison is that the monster has a smaller order than even the smallest member of the infinite `$E_8$` family. Of course, one could ask why `$E_8$` has dimension as large as 248...
There was a more explicit question: how is it possible that a group with as many as `$8 \times 10^{ 53 }$` elements doesn't have any normal subgroups? I think the answer is that the order of magnitude of a group says very little about its complexity. There are prime numbers very close to the order of the monster, and there are simple cyclic groups of those orders, so you might ask yourself why that fact doesn't seem as conceptually disturbing. Perhaps slightly more challenging is the fact that there aren't any elements of order greater than 119, but again, there is work on the bounded and restricted Burnside problems that shows that you can have groups of very small exponent that are extremely complicated.
A second point regarding the large lower bound on order is that there are smaller groups that could be called sporadic, in the sense that they fit into reasonably natural (finite) combinatorial families together with the sporadics, but they aren't designated as sporadic because small-order isomorphisms get in the way. For example, the Mathieu group `$M_{10}$` is the symmetry group of a certain Steiner system, much like the simple Mathieu groups, and it is an index 11 subgroup of `$M_{11}$`. While it isn't simple, it contains `$A_6$` as an index 2 subgroup, and no one calls `$A_6$` sporadic. Similarly, we describe the 20 "happy family" sporadic subquotients of the monster, but we forget about the subquotients like `$A_5$`, `$L_2(11)$`, and so on. Since the order of a nonabelian simple group is bounded below by 60, there isn't much room to maneuver before you get to 7920, a.k.a. "huge" range.
The question about why the 2-Sylow subgroup has a certain size is rather subtle, and I think a good explanation would require delving into the structure of the classification theorem. A short answer is that centralizers of order 2 elements played a pivotal role in the classification after the Odd Order Theorem, and there was a separation into cases by structural features of centralizers. One of the cases involved a centralizer that ended up having the form `$2^{1 + 24} . Co1$`, which has a 2-Sylow subgroup of order `$2^{46}$` (and naturally acts on a double cover of the Leech lattice). This is the case that corresponds to the monster.
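To make the arithmetic behind that Sylow order explicit (a sanity check using the standard orders, not part of the original answer): the extraspecial part $2^{1+24}$ contributes $2^{25}$, the 2-part of $|Co_1| = 2^{21}\cdot 3^9\cdot 5^4\cdot 7^2\cdot 11\cdot 13\cdot 23$ contributes $2^{21}$, and $25+21=46$.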
Regarding the prime factorization of the order of the monster, the primes that appear are exactly the supersingular primes, and this falls into the general realm of "monstrous moonshine". I wrote a longer description of the phenomenon in reply to Ilya's question, but the question of a general conceptual explanation is still open.
I'll mention some folklore about the organization of the sporadics. There seems to be a hierarchy given by
• level 0: subquotients of `$M_{24}$` = symmetries of the Golay code
• level 1: subquotients of `$Co1$` = symmetries of the Leech lattice, mod `$\{ \pm 1 \}$`
• level 2: subquotients of the monster = conformal symmetries of the monster vertex algebra
where the groups in each level naturally act on (objects similar to) the exceptional object on the right. I don't know what explanatory significance the sequence [codes, lattices, vertex algebras] has, but there are some level-raising constructions that flesh out the analogy a bit. One interesting consequence of the existence of level 2 is that for some finite groups, the most natural (read: easiest to construct) representations are infinite dimensional, and one can reasonably argue using lattice vertex algebras that this holds for some exceptional families as well. John Duncan has some recent work constructing structured vertex superalgebras whose automorphism groups are sporadic simple groups outside the happy family.
I think one interesting question that has not been suggested by other responses (and may be too open-ended for MO) is why the monster has no small representations. There are no faithful permutation representations of degree less than `$9 \times 10^{ 19 }$` and there are no faithful linear representations of dimension less than 196882. Compare this with the cases of the numerically larger groups `$A_{50}$` and `$E_8(\mathbb{F}_2)$`, where we have linear representations of dimension 49 and 248. This is a different sense of hugeness than in the original question, but one that strongly impacts the computational feasibility of attacking many questions.
-
The sporadic finite simple groups aren't that big! The Monster group is smaller than the alternating group on 50 letters.
I think a better place for you to start (rather than wondering about general intuition for very particular objects) would be to look at Griess's Twelve Sporadic Groups. My recollection is that that book is quite accessible, and it'll let you get a glimpse of what's going on here (mostly through the smallest examples of sporadic groups rather than the largest).
Another good place to start would be to understand the exceptional Lie groups. They're similar in spirit, but much easier to understand.
-
Just to comment on sizes of groups: the symmetric group on 52 letters arises "naturally" in the study of shuffling the standard deck of cards. – Michael Lugo Mar 9 2010 at 17:06
I guess I should have been more specific. So far my intuition tells me that groups that are simple, and maybe this is suggested by the nomenclature, should be smaller than groups which are not simple, simply because a group which is not simple appears to have richer structure. From this perspective, the existence of a finite simple group like Monster, with combinatorially large order and a superficially random integer factorization of that order, is baffling. I will check out that book though. Hopefully my library has it. – REDace0 Mar 9 2010 at 18:48
Non-simple groups can have much less structure than simple groups of comparable order. Consider, for example, large cyclic groups, or more generally groups that are the direct product of their Sylow subgroups. And simple groups can have a lot of structure - normal subgroups are not the only thing that constitutes "structure"! – Qiaochu Yuan Mar 9 2010 at 19:07
In another way, sporadic finite simple groups are extremely large, compared to our understanding of general groups. A Sylow subgroup of size $2^{46}$? No one even knows how many p-groups of size $p^8$ there are, let alone what they are, and the Higman PORC Conjecture is still a conjecture. – Ben Mar 9 2010 at 20:22
@Ben The Sylow 2-subgroups of the finite simple groups are very highly studied and quite well understood. Brauer's approach, which was very successful, started by considering an involution and its centralizer. Many of the classification results were something like, "Any finite simple group whose Sylow 2-subgroup is abelian/dihedral/A wreath B must be of the following form: X(q), or of order n." – Douglas Zare Mar 10 2010 at 11:39
I am not an expert, but... First, not all of the sporadic groups are that huge -- the smallest Mathieu group, for instance, has order about 8000 (this is off the top of my head), and I think the Janko groups have order on the order of $10^6$ or $10^7$.
Second, a lot of the sporadic groups are connected with the automorphism group of the Leech lattice -- and one would expect a "very symmetric" 24-dimensional lattice to have a really big automorphism group! So to some extent I think there's just a combinatorial explosion because 24 is small but 24!, for instance, is pretty big. Also keep in mind that, to some people (Ramsey theorists, analytic number theorists), $10^{53}$ is tiny. It's all relative.
Finally, as Sonia pointed out, you can ask "why are there sporadic simple groups at all?" The most satisfying answer would seem to me to be that the way we think about the classification of finite simple groups is wrong, in which case the sizes of the sporadic groups might just be an accident of history.
P.S. This would probably be better in a comment, but don't worry about being "just" a graduate student -- I'm an undergraduate myself, and I'm far from the only one. We've even had some contributions from high schoolers on MO. If your questions and/or answers are good, it doesn't matter if you're a cockroach.
-
I disagree with the idea that the way we classify finite simple groups is the reason there are sporadic finite simple groups. The infinite families are cyclic groups of prime order, alternating groups, and groups of Lie type over a finite field. (In fact, there should be some way to view the alternating groups as degenerate groups of Lie type over a field with one element.) The sporadic simple groups don't fit into these categories. If we meet alien mathematicians, they will agree that these are sporadic. – Douglas Zare Mar 9 2010 at 16:57
I wouldn't be quite so confident. Take the example of Lie algebras. If someone manages to give a good construction of the Vogel plane (and the universal Lie algebra object) then the exceptional Lie algebras start to look a lot less exceptional. What if aliens knew about finite type knot invariants before Lie algebras? – Noah Snyder Mar 9 2010 at 17:07
I really want to meet an alien mathematician so I can see if they actually agree with us about the things we say they'll agree with us about. (I agree with you here, but I'm not an alien.) – Michael Lugo Mar 9 2010 at 17:08
@Douglas: I'd like to play the alien devil's advocate. Suppose I am totally uninterested in vector spaces over finite fields but am very interested in some other set of structures with interesting automorphism groups (say block designs, to fix ideas) and I classify the "classical" simple groups this way. Is there any reason this classification should agree with the classification of groups of Lie type? – Qiaochu Yuan Mar 9 2010 at 17:16
@Qiaochu Yuan: There are deep reasons for collecting together the finite groups of Lie type, not explained by the elementary viewpoint that they are described as "matrix groups over finite fields". By thinking in structural terms (e.g., $BN$-pairs), one can dispose of many properties of these groups (e.g., prove their simplicity!) by a single argument. It is as if there is only one connected reductive group $G$, and often one can avoid explicit calculations until reducing to some special case with ${\rm{SL}}_2$ or ${\rm{PGL}}_2$. The techniques don't easily apply to sporadics. – BCnrd Mar 9 2010 at 20:22
The primes dividing the order of the Monster group are precisely the primes $p$ so that the surface $\mathbb H^2/\Gamma_0(p)^*$ has genus $0$, as was observed by Ogg. See Monstrous Moonshine, which was about the deep connections between number theory and the Monster group. I hope an expert elaborates.
According to Peter McMullen, regular polytopes are "wayside shrines at which one should worship on the way to higher things." Further along the road are E8, the Leech lattice, and then the Monster group.
-
Indeed the question is too vague for a precise answer, but nevertheless somewhat natural ;-)
I want to give some more details and clarifications on the "hierarchy" that was broached by Carnahan above. The "generic" simple groups are the Lie type groups of arbitrary size and the alternating groups. Furthermore, the main induction step of the classification theorem was that the centralizer of an involution of a simple group (an order-2 element/involution is the "only" thing we have "a priori" in an arbitrary simple group by Feit–Thompson) is close (!) to simple. So there is the chance of a sporadic group branching off from a Lie-type or alternating group and inductively proceeding for some steps until it terminates.
This inductive process of constructing a much larger simple group from its involution centralizer being a prescribed (already large) simple group in extremely rare (!) situations could be thought of as some sort of answer to your question. It is, by the way, one reason for the incredible length of the classification result (a tremendous case-by-case argument)... and for my personal view on the meta-debate above, that sporadics are more sporadic (not more unnatural!) than others, as much to us as to species 8472 ;-) ;-)
Most examples go only one step (and still are very large!), e.g. almost all so-called pariahs:
• $J_1,J_3\leftarrow A_5$
• $Ly \leftarrow A_{11}$
• $ON \leftarrow SL_3(4)$
• $Ru \leftarrow\;^2SO_5(8)$ "twisted" Lie-type (like the unitaries over finite fields)
• $J_4\leftarrow M_{22}\leftarrow\ldots$ branches off already one induction step beyond Lie (see below)
Note that most of these cases already appear as involution centralizers of Lie-type groups, which is somewhat miraculous and was often the reason to study this particular class and find only a few isolated exceptional choices. E.g. $^2G_2(3^{2n+1})\leftarrow SL_2(3^n)$ and the only other possible case $PSL_2(4)\cong PSL_2(5)\cong A_5$ led Janko in 1965 to the first new sporadic group $J_1$ in almost a century.
On the other hand there is a VERY remarkable string of induction steps to the Monster and with modifications to the other sporadic groups "involved" in it. It goes roughly as
$M\leftarrow Co_1 \leftarrow M_{24}\leftarrow SL_3(4)$
and heavily relies on the already mentioned Golay code resp. Steiner system $S(5,8,24)$ - beautiful, very sporadic and purely combinatorial objects! Along the induction steps, the combinatorial objects with these groups as automorphism groups can be extended as well, very roughly like
Griess-Algebra $\leftarrow$ Leech-Lattice $\leftarrow$ Steiner-System $\leftarrow$ Projective-Plane
One striking numerical reason for this construction, and for the very exotic behaviour occurring exactly in $24$ dimensions (also responsible for the $2^{24}$-factor mentioned above), is:
$1^2+2^2+\ldots+23^2+24^2=70^2$
This is provably impossible for larger numbers (by hard number theory) and is the striking numerical coincidence used in the side-by-side construction of the Golay code, the Steiner system and the $24$-dimensional Leech lattice, which gives the densest sphere packing in dimension $24$ and is the reason that e.g. the kissing number is known in this dimension!
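As a quick check of that identity, using the standard formula for a sum of consecutive squares: $$\sum_{k=1}^{24}k^2=\frac{24\cdot 25\cdot 49}{6}=4900=70^2.$$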
Hope that gives some intuition and "personality" for the various sporadics ;-) ;-)
-
|
http://mathoverflow.net/revisions/27337/list
|
## Return to Answer
1 [made Community Wiki]
Some high-degree polynomials appear in the Delsarte scheme for estimating kissing numbers in $\mathbb R^n$. Here is an excerpt from Florian Pfender and Günter M. Ziegler, "Kissing numbers, sphere packings, and some unexpected proofs":
Theorem 3 (Delsarte, Goethals and Seidel [11]). If $$f(t)=\sum_{k=0}^d c_k G_k^{(n)}(t)$$ is a nonnegative combination of Gegenbauer polynomials, with $c_0 > 0$ and $c_k \geq 0$ otherwise, and if $f(t) \leq 0$ holds for all $t \in [-1, 1/2]$, then the kissing number for $\mathbb R^n$ is bounded by $$\kappa(n)\leq \frac{f(1)}{c_0}$$
For example, for $n=24$ the polynomial $f_{24}(t)=(t-\frac{1}{2})(t-\frac{1}{4})^2 t^2 (t+\frac{1}{4})^2 (t+\frac{1}{2})^2(t+1)$ gives the precise kissing number in $\mathbb R^{24}$: 196560.
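For the record, a direct evaluation (my own arithmetic, not from the paper) gives $$f_{24}(1)=\tfrac{1}{2}\cdot\left(\tfrac{3}{4}\right)^2\cdot 1\cdot\left(\tfrac{5}{4}\right)^2\cdot\left(\tfrac{3}{2}\right)^2\cdot 2=\frac{2025}{1024},$$ so the bound $\kappa(24)\leq f_{24}(1)/c_0=196560$ pins down the constant Gegenbauer coefficient as $c_0=\frac{2025}{1024\cdot 196560}$; checking that the expansion of $f_{24}$ really has this $c_0$ (and $c_k\geq 0$ otherwise) is the nontrivial part.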
|
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevE.85.031102
|
# Synopsis:
Quantum Pistons
#### Validity of nonequilibrium work relations for the rapidly expanding quantum piston
H. T. Quan and Christopher Jarzynski
Published March 1, 2012
In equilibrium, the change in free energy, $\Delta F$, of a system as it transitions between two states sets a limit on the work, $W$, that can be realized in the process. Theorists have searched for similar exact relations between the work done on or by a system and its change in free energy in nonequilibrium processes, and some of these relations have been verified in experiments on small, effectively classical systems, such as macromolecules.
Showing the relations are also valid in nonequilibrium quantum systems is of fundamental importance. A case in point is the “Jarzynski equality” derived by Christopher Jarzynski at the University of Maryland, College Park, which states that, classically, the statistical average of $\exp[-W/k_B T]$ equals $\exp[-\Delta F/k_B T]$. Whether the equality applies to a quantum piston—a quantum particle in a one-dimensional box, with one of the walls moving at a fixed velocity—has remained an open question.
Writing in Physical Review E, Jarzynski and Haitao Quan, also at the University of Maryland, utilize a solution to the time-dependent Schrödinger equation for this quantum machine that shows the Jarzynski equality is in fact satisfied. Their result is not intuitively obvious, as there are important differences between the classical and quantum pistons; for example, the work performed on a classical particle is always negative in an expanding piston, but quantum fluctuations lead to the possibility of positive work in the quantum case. – Ronald Dickman
|
http://crypto.stackexchange.com/questions/tagged/finite-field?sort=active&pagesize=15
|
# Tagged Questions
A finite field is a mathematical construct based on a set of axioms which are held to be true. A number of interesting and useful properties arise from finite fields that makes them particularly suitable for use in cryptography, notably in block ciphers. Questions concerning finite fields should use ...
### Understanding Feldman's VSS with a simple example
I'm trying to understand Feldman's VSS Scheme. The basic idea of that scheme is that one uses Shamir secret sharing to share a secret and commitments of the coefficients of the polynomial to allow the ...
### Solving hard problems in $\mathbb Z_{p}^{*}$ when $p$ is close to $2^{n}$
Suppose, for some security parameter $n$, you choose a prime $p$ such that $p = 2^n+c$ for some relatively small $|c| < 2^m \ll 2^n$. I have seen such primes being called Pseudo-Mersenne Primes ...
### Security of pairing-based cryptography over binary fields regarding new attacks
In the last week, the discrete logarithm problem was broken for the binary fields $\mathbb{F}_{2^{(14 \times 127)}}$ and $\mathbb{F}_{2^{(27 \times 73)}}$. Pairing-based cryptography using binary ...
### Galois fields in cryptography
I don't really understand Galois fields, but I've noticed they're used a lot in crypto. I tried to read into them, but quickly got lost in the mess of hieroglyphs and alien terms. I understand they're ...
### Necessity for finite field arithmetic and the prime number p in Shamir's Secret Sharing Scheme
Shamir's original paper (PDF, 197kb) describing a threshold secret sharing scheme states: To make this claim more precise, we use modular arithmetic instead of real arithmetic. The set of ...
### Inverse element in the Paillier cryptosystem
As I know, in the Paillier cryptosystem, the encryption $c$ of a message $m$ is calculated as $c=g^m r^n \bmod n^2$. Now, I am wondering if I can derive $g^m \bmod n^2$ given that I know $c$, $r$, and ...
### Additive ElGamal cryptosystem using a finite field
I'm trying to implement a modified version of the ElGamal cryptosystem as specified by Cramer et al. in "A secure and optimally efficient multi-authority election scheme", which possesses additive ...
### Finding the LFSR and connection polynomial for a binary sequence
I have written a C implementation of the Berlekamp-Massey algorithm to work on finite fields of any prime size. It works on most input, except for the following binary GF(2) sequence: $0110010101101$ ...
### Factoring a polynomial over a GF [closed]
I have the following question: What polynomial, when factored over the field $GF(2^8)$ based on the irreducible polynomial that is used in Rijndael, will factor into all the polynomials in the ...
### Complexity of arithmetic in a finite field?
I am wondering what the complexities are of adding/subtracting and multiplying/dividing numbers in a finite field $\mathbb{F}_q$. I need it to understand an article I am reading. Thank you
### Best choice of finite field for AES on a 4-bit microcontroller?
As the finite field $GF(2^8)$ is isomorphic to $GF((2^4)^2)$, $GF((2^2)^4)$ and $GF(((2^2)^2)^2)$, which of the fields is best suited and most efficient for a 4-bit MCU and why? Would it be ...
### Design properties of the Rijndael finite field
So we've already had a question on replacing the Rijndael S-Box. My question is - can we use a different finite field other than the one given by $x^8 + x^4 + x^3 + x + 1$ in $GF(2^8)$. In other ...
### How robust is discrete logarithm in GF(2^n)?
"Normal" discrete logarithm based cryptosystems (DSA, Diffie-Hellman, ElGamal) work in the finite field of integers modulo a big prime p. However, there exist other finite fields out there, in ...
|
http://mathoverflow.net/revisions/64215/list
|
## Return to Answer
2 deleted 6 characters in body
Take $G=\mathbb{Z}$. Then computing $|\operatorname{Hom}(G, H)|=|H|$ is the same as computing the size of a finitely presented group, and is thus wildly undecidable. This eliminates both the general case you seem to ask about, and the case of fundamental groups of surfaces (replacing $\mathbb{Z}$ with, say $\mathbb{Z}\oplus \mathbb{Z}$ and letting your surface $S$ be a torus).
In other words, this problem seems essentially intractable as you've asked it. On the other hand, if you restrict $H$ to lie in the class of finite groups, then the complexity is bounded above by $$|H|^{|\text{# of generators of } G|}\cdot \sum_r t_H(|r|)$$ where the sum is taken over the relations of the given presentation of $G$, and where $t_H(|r|)$ is the time complexity of deciding the word problem in $H$ for a word of length $|r|$. To see this, consider the algorithm which considers all maps $$\{\text{generators of } G\}\to H,$$ of which there are $$|H|^{|\text{# of generators of } G|},$$ and for each map, checks whether the relations of $G$ are satisfied in $H$. This algorithm has the time complexity described.
So essentially your question is identical to finding the time complexity of solving the word problem in whatever class of groups $H$ belongs to, about which there is tons of literature.
|
http://quant.stackexchange.com/questions/2246/what-weights-should-be-used-when-adjusting-a-correlation-matrix-to-be-positive-d
|
# What weights should be used when adjusting a correlation matrix to be positive definite?
I have a correlation matrix $A$ for an equity market that is not positive definite. Higham (2002) proposes the Alternating Projections Method, minimising the weighted Frobenius norm $||A-X||_W$ where $X$ is the resulting positive definite matrix.
How should one choose the weight matrix $W$?
The easy alternative is to weigh them equally (W is an identity matrix), but if one has exposures to a portfolio, wouldn't it be natural to weigh the correlations according to your weights of exposure in the different assets, in order to alter their historical correlation less than for those assets you have little exposure in? Or is there a more natural choice?
-
Hi Osloguten, welcome to quant.SE and thanks for submitting this very relevant question. – Tal Fishman Oct 26 '11 at 18:41
Thanks. Well, so far I have not found any solution and am currently running unweighted approximations. I find this acceptable, but since I am approximating correlations for some stocks that are somewhat illiquid, it would be satisfying to know that these will be altered more than the main stocks in our portfolios. – AdAbsurdum Nov 8 '11 at 8:18
## 1 Answer
You may want to have a look at a later paper by Borsdorf, Higham, and Raydan (2010). I believe a variant of the same method may apply in your case. That is, you may want to account for some of the factor structure of your correlation matrix before you apply an unweighted Frobenius norm. Otherwise, using unweighted norms has often given me fine results anyhow, and this is often used only as a quick fix to slightly adjust matrices that are just barely not positive definite. A full approach should definitely be applying some factor structure (see a previous question of mine, as well as others on the site).
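For what it's worth, here is a minimal R sketch of the unweighted alternating-projections iteration of Higham (2002), with Dykstra's correction; the function name, tolerance, and iteration cap are my own choices, and a weighted variant would replace the eigenvalue projection with its $W$-weighted analogue from the paper:
```r
# Nearest correlation matrix in the (unweighted) Frobenius norm,
# by alternating projections with Dykstra's correction (sketch only).
nearest_corr <- function(A, maxit = 100, tol = 1e-8) {
  Y  <- A
  dS <- matrix(0, nrow(A), ncol(A))    # Dykstra correction term
  for (i in seq_len(maxit)) {
    R <- Y - dS
    e <- eigen(R, symmetric = TRUE)    # project onto the PSD cone:
    X <- e$vectors %*% (pmax(e$values, 0) * t(e$vectors))
    dS <- X - R
    Y_prev <- Y
    Y <- X
    diag(Y) <- 1                       # project onto the unit-diagonal set
    if (max(abs(Y - Y_prev)) < tol) break
  }
  Y                                    # unit diagonal, PSD up to the tolerance
}
```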
-
Yes, I agree, but I have received some pretty nasty results while weighting. Also, how should one evaluate correlation pairs? – AdAbsurdum Feb 7 '12 at 13:05
@AdAbsurdum can you be more specific about the correlation pairs? Perhaps post a new question, if you feel it is worthy of its own question. – Tal Fishman Feb 7 '12 at 15:26
Well. Thinking within the model, one would believe that some correlation pairs are more trustworthy than others (e.g. high liquidity implies good data implies a better correlation coefficient). Thinking outside a model, looking at how much the correlations have changed in history could also be a parameter that determines the weight of a pair (high variability in time implies lower weight). Not sure if it's worth its own question. Rather it is a part of the overall problem, although the question above consists of technical and economic considerations. – AdAbsurdum Feb 7 '12 at 16:15
@AdAbsurdum your thinking sounds pretty solid to me, both liquidity and variability seem like decent weighting functions, but I haven't seen anything objective on that point. I agree, not really a new question, but could be added to the current question. – Tal Fishman Feb 7 '12 at 16:19
|
http://mathhelpforum.com/advanced-algebra/146674-sum-roots-number.html
|
# Thread:
1. ## Sum of roots of a number
Is there a theorem related to the sum of the roots of a positive number?
For example, for any integer $n=1,2,3,\dots,N$, and positive number $\lambda$, such that $\lambda _k=\left | \lambda \right |^{1/n}e^{2\pi ik/n}$, $k=0,1,2,\dots,n-1$, are the $n$th roots of $\lambda$, what is the sum of the roots,
i.e. $\sum_{k=0}^{n-1}\lambda _k = ?$.
In other words, is there a closed form, and general solution for the sum of roots, i.e. $\left | \lambda \right |^{1/n}\sum_{k=0}^{n-1}e^{2\pi ik/n}$?
2. Yes, it's $0$, unless $n=1$!
Hint : the sum of the roots of a monic polynomial $f(z)$ of degree $n$ is (up to sign) the coefficient of $z^{n-1}$, and the roots of a complex number $\lambda$ are the roots of $f(z)=z^n-\lambda$.
So, are you saying the sum of the roots of $\lambda^{1/n}$ is 0 for n not equal 1, and, for n=1, the sum must be simply $\lambda$? Thanks... looking at the $\sum$, I guess that makes sense. I had long since forgotten, if I ever even knew it, that the sum of the roots equals (up to sign) the coefficient of $z^{n-1}$ in the polynomial.
4. Originally Posted by GeoC
So, are you saying the sum of the roots of $\lambda^{1/n}$ is 0 for n not equal 1, and, for n=1, the sum must be simply $\lambda$?
Yes!
Here's another way to prove it : if $n>1$, let $\omega = e^{2\pi i / n}$ and $S=1+\omega + \dots + \omega^{n-1}$. Then, since $\omega^n=1$, we have $\omega S = \omega + \dots + \omega^n = S$. Since $\omega \neq 1$, we must have $S=0$.
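A concrete instance (my own example, with $n=3$ and $\lambda=8$): the cube roots of $8$ are $2$, $2e^{2\pi i/3}$ and $2e^{4\pi i/3}$, and indeed $$2\left(1+e^{2\pi i/3}+e^{4\pi i/3}\right)=2\left(1+\left(-\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i\right)+\left(-\tfrac{1}{2}-\tfrac{\sqrt{3}}{2}i\right)\right)=0.$$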
5. ## Comment for Bruno J
[attached image]
6. In general if we have a polynomial of degree $n$ in $z$ it can be written as...
$p(z) = z^{n} + a_{n-1} z^{n-1} + \dots + a_{1} z + a_{0} = \prod_{k=0}^{n-1} (z-z_{k})$ (1)
... where the $z_{k}$, $k=0,1,\dots , n-1$ are the roots of the polynomial, counted with multiplicity. Expanding the product in (1) and comparing the coefficients of $z^{n-1}$, it is easy to derive that...
$\sum_{k=0}^{n-1} z_{k} = - a_{n-1}$ (2)
If $p(z) = z^{n} - \lambda$ with $\lambda>0$, then...
$\sum_{k=0}^{n-1} z_{k} = \sum_{k=0}^{n-1} \lambda^{\frac{1}{n}} e^{2 \pi i \frac{k}{n}} =0$ (3)
Kind regards
$\chi$ $\sigma$
7. Originally Posted by wonderboy1953
Thanks! Do you have an idea what it's of?
8. ## MS Escher
Originally Posted by Bruno J.
Thanks! Do you have an idea what it's of?
Taking a stab.
|
http://math.stackexchange.com/questions/177962/question-on-trace-weighted-sums-for-irrep-of-finite-group/177966
|
Question on trace-weighted sums for irrep of finite group
For a finite group $G$, is the following true, where $\rho$ is a finite-dimensional complex unitary irreducible representation? $$\sum _{g \in G} \mathrm{Tr} (\rho(g)) \rho(g) = \frac{|G|}{n} \mathrm{id}_{\mathbb{C} ^n}$$ I would like a proof or a counterexample.
-
2 Answers
I think the correct version would be
$$\sum _{g \in G} {\chi(g)}^\ast \rho(g) = \frac{|G|}{n} \mathrm{id}_{\mathbb{C} ^n}$$
where $\chi(g) = \mathrm{Tr}(\rho(g))$ is the character of $\rho$ and $z^\ast$ denotes the complex conjugate of $z$.
Then the map $\phi = \sum_{g\in G} {\chi(g)}^\ast\, \rho(g)$ is $G$-invariant, so by Schur's lemma you get $\phi = \lambda\, \mathrm{id}_{\mathbb C^n}$, where $\lambda = \mathrm{Tr}(\phi)/n$. But now
$$\mathrm{Tr}(\phi) = \sum_{g\in G} \chi(g)^\ast \chi(g) = |G|\cdot \underbrace{ (\chi|\chi)}_{=1} = |G|$$
hence $\phi = \frac{|G|}{n}\mathrm{id}_{\mathbb C^n}$.
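For completeness, here is why $\phi$ is $G$-invariant (a standard one-line argument, spelled out): for any $h \in G$, $$\rho(h)\,\phi\,\rho(h)^{-1} = \sum_{g\in G} {\chi(g)}^\ast\, \rho(hgh^{-1}) = \phi,$$ since $\chi$ is a class function and $g \mapsto hgh^{-1}$ permutes the elements of $G$.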
-
A counterexample is given by $G=\mathbb Z_3$ and $\rho(n)=\mathrm e^{2\pi\mathrm in/3}$, in which case the left-hand side vanishes. Perhaps you intended to include a complex conjugation somewhere?
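To spell out the computation in this counterexample: here $\chi = \rho$ is one-dimensional, so $$\sum_{m=0}^{2}\chi(m)\,\rho(m)=\sum_{m=0}^{2}e^{4\pi\mathrm i m/3}=0,$$ whereas $\frac{|G|}{n}\,\mathrm{id}_{\mathbb C}=3$. With the conjugate inserted, as in the corrected identity above, one gets instead $\sum_{m=0}^{2}{\chi(m)}^\ast\rho(m)=\sum_{m=0}^{2}1=3$, as expected.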
-
|
http://mathoverflow.net/questions/58870/what-should-be-taught-in-a-1st-course-on-smooth-manifolds/58996
|
## What should be taught in a 1st course on smooth manifolds?
I am teaching an introductory course on differentiable manifolds next term. The course is aimed at fourth-year US undergraduate students and first-year US graduate students who have done basic coursework in point-set topology and multivariable calculus, but may not know the definition of a differentiable manifold. I am following the textbook Differential Topology by Guillemin and Pollack, supplemented by Milnor's book.
My question is: What are good topics to cover that are not in assigned textbooks?
-
You mean to cover more topics than are covered in those books combined?! How about Morse theory, the h-cobordism theorem, and the Smale-Hirsch theory of immersions? – Mark Grant Mar 18 2011 at 20:50
@Mark: you must be kidding. The students jlk mentions are only first-year graduate students & undergraduates, and you are suggesting Smale-Hirsch theory as a topic! – John Klein Mar 18 2011 at 22:05
Personally I would be quite happy with just a course that covers the material in Guillemin and Pollack thoroughly... – Qiaochu Yuan Mar 18 2011 at 22:53
Based on Mark's punctuation, I think he is making more of a rhetorical point than an actual suggestion (though I didn't read the original question the way he seems to have had) – Yemon Choi Mar 18 2011 at 23:56
@John, Yemon: You're quite right, I forgot to hit the sarcasm button. – Mark Grant Mar 19 2011 at 10:56
## 13 Answers
I nominate Ehresmann's theorem according to which a proper submersion between manifolds is automatically a locally trivial bundle. It is incredibly useful, in deformation theory for example, but is sadly neglected in introductory courses and books on manifolds. It is completely elementary: witness these lecture notes by Peter Petersen, where it is proved in a few lines on page 9, the prerequisites being about two pages long.
Bjørn Ian Dundas and our friend Andrew Stacey also have online documents proving this theorem.
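For reference, the statement being nominated (my paraphrase of the standard formulation, not a quote from those notes): if $f : M \to N$ is a proper smooth submersion, then $f$ is a locally trivial fibration; that is, every $y \in N$ has a neighbourhood $U$ together with a diffeomorphism $f^{-1}(U) \cong U \times f^{-1}(y)$ commuting with the projections to $U$.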
-
IMO it makes a fine homework problem in an introductory manifolds course, right around when one learns the proof of the tubular neighbourhood theorem. – Ryan Budney Mar 19 2011 at 6:14
I agree; for students who are going on to other fields (e.g. algebraic geometry) where differential topology plays a role (both as technical background in some situations, and as more general motivational background), this is one of the most useful results to take away from a manifolds course. Also, although it is not particularly a differential topology course, just teaching the definition of proper map would be helpful (and greatly appreciated by students going on to algebraic geometry!). – Emerton Mar 19 2011 at 20:14
Dear jlk, I think that Ehresmann's theorem is the natural point at which properness appears. I think that the "morphisms as families" point of view is not usually emphasized in courses in differential topology the way it is in algebraic geometry, but I don't see why you couldn't discuss it. (And thus explain that it is important to have a notion --- i.e. proper submersion --- which captures the idea of a smooth family of compact manifolds without requiring that the base or total space themselves be proper.) Note also that the case of Ehresmann's theorem with equidimensional source and ... – Emerton Mar 20 2011 at 20:42
... target (i.e. zero dimensional fibres) ties in with covering space theory, which the students already know (presumably --- if not, then that might be a better topic than Ehresmann). I'm sorry that I can't give a more interesting answer. Best wishes, Matthew – Emerton Mar 20 2011 at 20:44
Dear jlk, You're welcome. Note though that my comment about differential topology courses not emphasizing "morphisms as families" may have been a bit hasty, since this is one of the focuses of Morse theory. So from an algebraic geometer's perspective (and for other reasons too), Morse theory is another excellent possibility to consider. Regards, Matthew – Emerton Mar 21 2011 at 13:39
If this is the first course in diffgeometry, you should not go further than Gauss--Bonnet for surfaces. (I would not even consider anything with dimension >2.)
You can cover more but there is no reason to force your students. At the level of Gauss--Bonnet they have enough to play with, and if they like diffgeometry, they could take another course.
-
I completely agree with this! – Deane Yang Mar 19 2011 at 2:57
I could not agree more. In particular, I would avoid all of the big bureaucracy sometimes confused with differential geometry (there is absolutely no need to inflict upon students the general definition of tensors, vector bundles and what not!) – Mariano Suárez-Alvarez Mar 19 2011 at 6:19
Agree with Mariano too. – Deane Yang Mar 19 2011 at 12:49
For most graduate students who won't be specializing in geometry/topology their first course on manifolds will also be their last. Focusing on dimension 2 is a great approach for an upper level undergraduate course, but not for the first year graduate one. – Igor Belegradek Mar 19 2011 at 13:07
The undergraduate topology/geometry requirement that we have at my school has a term on intro to algebraic topology, a second term on intro to diff.geometry which is similar to what this post describes, and a last term on diff. topology which incidentally uses the same textbooks as in the OP, and I think that's similar to what is being asked here. – Gjergji Zaimi Mar 19 2011 at 20:43
The problem will be that the students do not have a firm grasp of multivariable calculus.
You should probably start with a rigorous review of multivariable calculus, including the definition of differentiability, C^1 implies differentiable on open sets, equality of mixed partials, the inverse function theorem, the local immersion theorem, and the local submersion theorem. That will allow you to segue into the definition of a smooth manifold as a parametrized subset of R^n, as in Guillemin and Pollack.
Guillemin and Pollack is a softening of Milnor's "Topology from the Differentiable Viewpoint" and as such is about the lowest-level approach you can take to introducing the students to the "stuff" of topology. The exercises are good. I like to have the students divide up the long guided exercise sections to present at the board. I like to supplement the book by proving the Morse Lemma, having a discussion of linking number, and proving that the Hopf fibration is not homotopic to a constant map using linking numbers. I also like touching on complex variables by proving the argument principle. Finally, I like proving that two maps from a closed oriented n-manifold to the n-sphere are homotopic if and only if they have the same degree. I don't do all of these in any one year as there is not time. I generally key off of what seems to interest the particular group of students in the class that year.
Be careful in the section on integration: they leave out (or left out in an earlier edition) that you need to be using orientation-preserving parametrizations to define the integral.
After teaching such a course for about 15 years, I changed directions and started teaching the foundations of smooth manifolds in the place of the Guillemin and Pollack course, so that students could learn a more mature definition of smooth manifold, and introduce vector bundles, tensors, and Lie groups. I have used both the books by Jack Lee and by Boothby. Each has its strong points and weak points (at least in use with graduate students at Iowa). This turned out to be better for the graduate program as a whole because kids who wanted to do representation theory or PDE could get exposed to the ideas they would see in their research. It also allowed the Differential Geometry sequence to run more regularly. If you decided to go that route, it would still be wise to start with multivariable calculus, as really, very few kids going to graduate school in math have a sufficient background in the calculus.
However, the students are much less happy about taking the foundations of smooth manifolds, because it does not offer the immediate gratification of studying degree and winding number. In fact, when I teach the course as foundations of smooth manifolds, there will always be a block of 3 or 4 students who resent having taken the class. When I teach out of Guilleman and Pollack, even the students who never develop a clue, still enjoy the experience.
-
Thanks! These are very good suggestions. – jlk Mar 21 2011 at 5:01
I've heard rumours of a 2nd edition to Guillemin and Pollack being near completion. – Ryan Budney Mar 21 2011 at 17:13
Great answer, especially since it is based on real experience. – Deane Yang Mar 21 2011 at 18:01
I want to second the motion that Jack Lee's book *Introduction to Smooth Manifolds* is beautifully written. Our UW QSE Group has translated substantial portions of Nielsen and Chuang's *Quantum Computation and Quantum Information* into the geometric language of Lee's *Introduction to Smooth Manifolds* ... the high quality of both texts made this quite pleasurable. As a reading experience, Lee's text is like rafting the Mississippi river ... a river that is large and somewhat slow ... and yet, with patience, its flow carries the reader smoothly for an immense distance. – John Sidles Mar 21 2011 at 21:06
I think there are two ways to approach a first course on manifolds: one can focus on either their geometry or their topology.
If you want to focus on geometry, then I think Anton Petrunin's suggestion is the end of the story. I'm a fourth year graduate student, and practically every time I find myself confused about something in differential geometry I realize that the root cause of my confusion is that I never properly learned surfaces. And I've taken lots of geometry courses.
If you want to focus on topology, I really think it makes a lot of sense to teach some Morse theory. It's rather elementary, it's extremely powerful and virtually ubiquitous in differential topology, and most of all it really feels like topology in a way that differential forms don't.
Finally, from looking at only the two books you mentioned in your question, I would be a little worried that your students won't have a lot of examples to work with. What about introducing Lie groups?
-
The suggestion of Morse theory is a good one. I wrote in a comment on another answer that in differential topology courses the idea of "morphisms as families" is often not emphasized. However, that comment was probably a bit hasty: this is the focus of Morse theory, and thinking about how the topology of the level sets change as the parameter varies is not only very interesting in itself, but provides good preparation for later arguments in lots of different contexts. – Emerton Mar 21 2011 at 13:38
That's a really good point - I have never made that connection until now. Yet another good reason to introduce Morse theory! – Paul Siegel Mar 21 2011 at 15:13
I'm not sure if the original question is about a one semester or year course.
If this is the first course the students have ever had in differential geometry, then I still agree with Anton that at least the first semester should be about only 2-dimensional manifolds embedded in $R^3$ and Gauss-Bonnet. The point here is that everything can be understood visually, but you learn how to deploy linear algebra and calculus to prove what seems obvious visually. The full power of differential geometry is displayed very nicely. Guillemin and Pollack provides a nice textbook to base the course on. I also like O'Neill's elementary differential geometry textbook.
I would not introduce the more abstract machinery until the second semester, and even then try to be selective about what is discussed because there is just too much. It seems best to focus on basic Riemannian geometry and what, say, sectional curvature means (this builds nicely on what was done in the first semester). It is of course important to introduce many different examples. Although the basic abstract definitions and properties of Lie groups and algebras could be introduced, I believe the focus should be on how to build interesting geometric spaces from standard matrix groups ($GL(n)$, $SL(n)$, $SO(n)$, $SU(n)$).
-
I am with you. A first course in Differential Geometry should be very concrete. Generally, what I am up against is that the students have very little experience with multivariable calculus, and they need to be forced to work lots of concrete examples, so that they can get that into their heads. – Charlie Frohman Mar 20 2011 at 9:59
Thierry Aubin's book "A course in differential geometry" is really good for an introductory course. It covers the basic definitions of manifolds and vector bundles, orientability and integration (Stokes formula) and then focuses on Riemannian geometry defining the Levi-Civita connection, curvature tensor etc...
The only important missing topics are Lie groups and de Rham cohomology. Many courses in differential geometry don't talk about these subjects leaving them to specialised courses in Lie theory or Algebraic topology but I think it's a mistake.
-
This is in agreement with Igor's comment on Anton's answer, but became too long.
I'd say whatever approach you ultimately take, for a first-year grad course it surely has to be done 'properly', i.e. starting from intrinsic definition of a smooth manifold and using the 'modern' language and general definitions of tensor bundles, connections etc.
Absolutely crucially (and here's what inspired this comment), the course simply has to teach people that there is more to manifolds than 2D surfaces because that's why the theory is quite so useful and so prominent in modern mathematics. The whole point is surely the sheer diversity of objects amenable to geometric thought (whatever that means). The job of the teacher would then be to maintain the intuition of "surfaces in R^3" while using general definitions. I believe this can be done. If it cannot, then what on Earth are we all doing?
By the look of the books mentioned in the question, it certainly looks like a course on what I would call "Differential Topology". Sure, there is nothing wrong with a good course on Differential Topology! However, it doesn't seem to me to be synonymous with "A First Course on Smooth Manifolds". My go to book for the latter is John Lee's Introduction to Smooth Manifolds.
-
I taught a course like this last year, and I used Lee's book. It is a great book, because of the fact that it is so wordy. (Others may disagree, but I am a firm believer that wordy is better than terse for textbooks that students are expected to read and learn from.) – Spiro Karigiannis Mar 21 2011 at 14:03
In a first course aiming to introduce differentiable manifolds as the spaces on which to do calculus, you could give the students the notion of a connection, at least on vector bundles.
In order to reflect on the reason for this choice, I report the words of S.S.Chern closing the introduction of Global Differential Geometry, MAA Studies in Math.27, 1989:
The Editor is convinced that the notion of a connection in a vector bundle will soon find its way into a class on advanced calculus, as it is a fundamental notion and its applications are wide-spread. His chapter, "Vector Bundles with a Connection," hopefully will show that it is basically an elementary concept.
-
I do have one addition to make to the above. At our university we usually use a combination of Guillemin and Pollack and Milnor. There is another approach at a first course which some have found useful: Bott and Tu's book,
```` Differential forms in algebraic topology
````
This text covers an alternative set of topics that overlap both manifold theory and algebraic topology.
Disclaimers: (1) I have never used the text myself, but several colleagues have said in the past that it is a good book to use---and I am personally a big fan of Bott's approach to mathematical writing.
(2) If one uses Bott and Tu, then one has to sacrifice *transversality*. Andrew Ranicki once told me that transversality counts as one of the most important gems of 20th-century mathematics.
-
I'm not sure Bott-Tu is right for the first course, but it has my vote as a great introduction to many important and fundamental topics in differential topology. – Deane Yang Mar 21 2011 at 20:35
I don't believe either of those books covers distributions and the theorem of Frobenius. Connections to partial differential equations in general I think are good topics.
Guillemin and Pollack is a book I like a lot, but chapters 2 & 3 (transversality and intersection) always seemed a bit specialized for a first course. Although, the title is, after all, "Differential Topology". My experience is that people tend to cover just chapters 1 & 4.
The definition of a manifold in G&P is as a subset of $\mathbb{R}^n$ (as in Milnor). As I recall, the definition of diffeomorphism is such that a cube and a sphere are considered not to be diffeomorphic. This is because G&P define a map at a point of a manifold to be smooth if it can be extended to a map on an open set of the ambient space that is smooth in the sense that it is a map from an open set in $\mathbb{R}^n$ to $\mathbb{R}^m$. I never understood, or saw, how this approach can be used to think about different differentiable structures on manifolds. Since there is only one differentiable structure on $S^2$, the definition I mention above of diffeomorphism seems to be at odds with the general one, given for example in Spivak volume 1. (If anyone could explain this to me I'd be grateful. As a student I found this confusing and still do.)
What I am getting at in the above paragraph is that an additional topic might be the general definition of a differentiable manifold. It's nice to have projective spaces and Grassmannians at least in one's collection of examples.
-
If you take the point of view, as in these books, that smooth manifolds are certain subsets of $\mathbb R^n$ and inherit their smooth structure from $\mathbb R^n$, then there is no such thing as two smooth structures on the same set. But there is such a thing, obviously, as two smooth manifolds related by a homeomorphism that is not a diffeomorphism. And there is also such a thing, not obviously, as two smooth manifolds related by a homeomorphism but not by any diffeomorphism. – Tom Goodwillie Mar 19 2011 at 12:24
Thanks for that clarification. So it really is a different definition of diffeomorphism, and so perhaps is deserving of a different name, like 'ambient diffeomorphism'. – R. Andrew Hicks Mar 19 2011 at 16:16
No, I don't believe it is a different notion. Perhaps the confusion is that a cube in the sense of G&P is not a smooth manifold--thinking of a sphere as homeomorphically embedded in R^n as a cube does not put a smooth manifold structure on the sphere. – Jack Huizenga Mar 19 2011 at 20:24
Another way of putting it is this: every smooth manifold has an embedding in some R^n, in such a way that the smooth functions on the manifold are (pullbacks of) those functions on the image which can be locally extended to smooth functions on open neighborhoods. – Jack Huizenga Mar 19 2011 at 22:21
Every smooth manifold in $\mathbb R^n$ in the sense of the Milnor book ("concrete manifold") is canonically a smooth manifold in the abstract sense. If $M$ and $N$ are concrete manifolds, then the smooth maps between them in the concrete sense are precisely the smooth maps between them in the abstract sense. That is, we have a full and faithful functor from the one category to the other. In fact, it is an equivalence of categories -- that is, every abstract manifold is diffeomorphic to some concrete manifold -- that is, every abstract manifold can be smoothly embedded in some $\mathbb R^n$. – Tom Goodwillie Mar 20 2011 at 3:54
I think fibre bundles should be introduced to give a modern viewpoint of tensor analysis.
-
Differential forms.
Books by Darling (Differential forms and connections) and Madsen-Tornehave (From calculus to cohomology: de Rham cohomology and characteristic classes) may help.
-
Update: it may be that Spivak's new book Physics for Mathematicians: Mechanics I covers most of the material that this answer had in mind. I've just ordered a copy, and will report on it when it arrives.
Neither Milnor's book nor Guillemin and Pollack's book contains the word "symplectic" ... which is a great pity!
Since the manifolds under study are smooth, they have a cotangent bundle; this bundle is associated to a tautological one-form whose exterior derivative is a (canonical) symplectic form.
If in addition the base manifold has a metric, then a canonical (quadratic) Hamiltonian function too is defined on the tangent bundle.
Hmmm ... what might be the integral curves of this Hamiltonian function? It is instructive for students to discover for themselves that the curves are simply the geodesics of the base manifold.
In this way, students gain an appreciation that all of dynamics (both classical and quantum) is intimately linked to the geometry and topology of smooth manifolds ... this appreciation is good preparation for many careers in math, science, and engineering.
-
I don't have a problem with a course introducing symplectic geometry via physics. It seems to me that a first course should choose one focused topic and goal and do just enough to achieve the goal. Guillemin and Pollack chose the Gauss-Bonnet theorem and do a beautiful job of staying focused on that. Another possibility is Hamiltonian mechanics. But what would be the goal (analogous to Gauss-Bonnet)? – Deane Yang Mar 21 2011 at 16:55
What would be the goal? Hmmmm ... that would depend upon the class. For a class of engineers and/or scientists, I would suggest ... hmmm ... thermostatic flows (Liouville's Theorem), with applications in synthetic biology (because that's where the jobs are). For mathematicians, maybe ... hmmm ... de Rham cohomology? Definitely, the pedagogic challenge here is not too few good options for continued study, but rather, far too many of them. – John Sidles Mar 21 2011 at 17:06
On further consideration of Deane Yang's (excellent) question "What would be the goal", a very useful final two weeks of lectures might survey the topic "Some origins of metric and symplectic structures in mathematics, science, and engineering." And yet, an entire course surely could be devoted to this topic alone. – John Sidles Mar 21 2011 at 17:23
http://unapologetic.wordpress.com/2012/01/30/amperes-law/
# The Unapologetic Mathematician
## Ampère’s Law
Let’s go back to the way we derived the magnetic version of Gauss’ law. We wrote
$\displaystyle B(r)=\nabla\times\left(\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)$
Back then, we used this expression to show that the divergence of $B$ vanished automatically, but now let’s see what we can tell about its curl.
$\displaystyle\begin{aligned}\nabla\times B&=\frac{\mu_0}{4\pi}\nabla\times\nabla\times\left(\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)\\&=\frac{\mu_0}{4\pi}\left(\nabla\left(\nabla\cdot\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)-\nabla^2\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)\end{aligned}$
Let’s handle the first term first:
$\displaystyle\begin{aligned}\nabla_r\left(\nabla_r\cdot\int\limits_{\mathbb{R}^3}\frac{J(s)}{\lvert r-s\rvert}\,d^3s\right)&=\nabla_r\int\limits_{\mathbb{R}^3}J(s)\cdot\nabla_r\frac{1}{\lvert r-s\rvert}\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}J(s)\cdot\nabla_s\frac{1}{\lvert r-s\rvert}\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}\nabla_s\cdot\frac{J(s)}{\lvert r-s\rvert}-\frac{1}{\lvert r-s\rvert}\nabla_s\cdot J(s)\,d^3s\\&=-\nabla_r\int\limits_{\mathbb{R}^3}\nabla_s\cdot\frac{J(s)}{\lvert r-s\rvert}\,d^3s+\nabla_r\int\limits_{\mathbb{R}^3}\frac{\nabla_s\cdot J(s)}{\lvert r-s\rvert}\,d^3s\end{aligned}$
Now the divergence theorem tells us that the first term is
$\displaystyle-\nabla_r\int\limits_S\frac{J(s)}{\lvert r-s\rvert}\cdot dS$
where $S=\partial V$ is some closed surface whose interior $V$ contains the support of the whole current distribution $J(s)$. But then the integrand is constantly zero on this surface, so the term is zero.
For the other term (and for the moment, no pun intended) we’ll assume that the whole system is in a steady state, so nothing changes with time. The divergence of the current distribution at a point — the amount of charge “moving away from” the point — is the rate at which the charge at that point is decreasing. That is,
$\displaystyle\nabla\cdot J=-\frac{\partial\rho}{\partial t}$
But our steady-state assumption says that charge shouldn’t be changing, and thus this term will be taken as zero.
So we’re left with:
$\displaystyle\nabla\times B(r)=-\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}J(s)\nabla^2\frac{1}{\lvert r-s\rvert}\,d^3s$
But this is great. We know that the gradient of $\frac{1}{\lvert r\rvert}$ is $-\frac{r}{\lvert r\rvert^3}$, and we also know that the divergence of $\frac{r}{\lvert r\rvert^3}$ is (basically) $4\pi$ times the “Dirac delta function”. That is:
$\displaystyle\nabla^2\frac{1}{\lvert r\vert}=-4\pi\delta(r)$
So in our case we have
$\displaystyle\nabla\times B(r)=\frac{\mu_0}{4\pi}\int\limits_{\mathbb{R}^3}J(s)4\pi\delta(r-s)\,d^3s=\mu_0J(r)$
This is Ampère’s law, at least in the case of magnetostatics, where nothing changes in time.
## Comments »
1. How can this be derived without the use of the delta distribution? Just curious
Comment by Matthew Kvalheim | August 28, 2012 | Reply
http://crypto.stackexchange.com/questions/6238/what-is-rsa-key-normalization/6242
# What is RSA key normalization?
How should I understand the term "RSA key normalization"? What exactly is done in the process? Please explain.
-
## 2 Answers
RSA key normalization is merely the convention that, when comparing public keys, we view them as two numbers, where the first number is the public exponent and the second is the public modulus. Now, in the case of RSA, if you're given the public key as two numbers, then the big composite one is the modulus and the smaller one, which is often 65537, is the exponent, so it's pretty obvious which is which.
The need for normalization is more obvious with DSA keys, which involve four numbers, of which two (the generator and the public value) would be impossible to distinguish if $h$ was chosen at random when the generator $g=h^{(p-1)/q}$ was calculated. In all cases with DSA, if you're just given the four parameters in any order, you have to do a bit of extra calculation to make sense of them, and normalization takes care of the whole issue.
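To illustrate the ordering convention from the first paragraph, here is a minimal sketch (my own illustration; the function name and sanity checks are hypothetical, and the size heuristic obviously fails for contrived keys whose exponent is comparable in size to the modulus):

```python
def normalize_rsa_public_key(a, b):
    """Return (e, n): exponent first, modulus second, by the usual convention.

    Heuristic: the modulus is a huge composite while the public
    exponent is small (often 65537), so order the pair by size.
    """
    e, n = sorted((a, b))
    if not (2 < e < n):
        raise ValueError("pair does not look like an RSA public key")
    return e, n

# The order in which the two numbers arrive no longer matters:
n = 2**2048 - 1  # placeholder for a real 2048-bit modulus
assert normalize_rsa_public_key(65537, n) == normalize_rsa_public_key(n, 65537) == (65537, n)
```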
-
The term "RSA key normalization" seems unusual. I only find it in RFC 2792.
In that context, it is about putting an RSA public key into a form suitable for interchange and comparison. To use an image: 7; "7"; 007; 07h; 0x07; 111; 11100000; 00000111; seven; sept; or the base-64 encoding of a zip file containing any of the above; all represent the integer formerly known as VII. But for the purpose of computer interchange, we should put that in some normalized form.
In the context of RFC 2792, that normalized form is `ASN.1`. I prefer not to try to explain what that is exactly, except by an adjective: nightmarish.
-
http://physics.stackexchange.com/questions/39689/electromagnetic-4-potential-and-basic-index-contraction
Electromagnetic 4-potential and basic index contraction
I'm trying to learn about relativistic electrodynamics on my own, and I am struggling with derivatives of the 4-potential and index (Einstein) notation.
I think I understand expressions such as $\partial_\mu A^\mu$. The index is repeated and is once up and once down, so I would expand the sum as: $\partial_0 A^0 + \partial_1 A^1 + \partial_2 A^2 + \partial_3 A^3$, which gives me a scalar.
1. How am I to interpret something like this: $(\partial_\mu A_\nu)(\partial^\mu A^\nu)$ ? We are summing over the two indices this time, which is fine. What confuses me is that we are taking a covariant derivative of a covariant vector. Does one need to "convert" $A_\nu$ in the first term to contravariant, like so: $(\partial_\mu A^\rho \eta_{\nu\rho})(\partial^\mu A_\sigma\eta^{\nu\sigma})$?
I guess my doubts arise from the fact that I see a covariant vector as being an entirely different object from a contravariant one. The covariant derivative $\partial_\mu = \frac{\partial}{\partial x^\mu}$ differentiates with respect to the components of the contravariant vector $x$. So I don't understand how such an operation can be applied to a vector that isn't also contravariant.
2. How should I interpret terms such as $(\partial_\mu A^\mu)^2$ ? Is it just $\left(\partial_0 A^0 + \partial_1 A^1 + \partial_2 A^2 + \partial_3 A^3\right)^2$ or is there something else going on?
3. According to some textbook, $(\partial_\mu \phi)^2 = \eta^{\mu\nu}\partial_\mu \phi\partial_\nu\phi$, but I don't understand why. For me $\partial_\mu \phi$ is just the derivative of a scalar $\phi$ with respect to some (unspecified) component $\mu$ of a contravariant 4-vector $x$. Instead, judging from the right-hand side, it is to be interpreted as a vector $(\frac{\partial}{\partial x^0},\boldsymbol{\nabla})\phi$ which is then squared. Is it just sloppy notation, or am I being stupid?
Thanks.
EDIT:
1. Are the following then true? $$\frac{\partial}{\partial(\partial_\mu A_\nu)} \left(\partial_\mu A_\nu\right) = 1$$ $$\frac{\partial}{\partial(\partial_\mu A_\nu)} \left(\partial^\mu A^\nu\right) = 0$$
2. Can I also raise and lower the indices of a partial derivative?
-
yes, you can definitely raise and lower the indices of a partial derivative, but $\partial_{\mu}$ is the one that you know and love. – Jerry Schirmer Oct 12 '12 at 22:08
1 Answer
1) You have the right idea about $\partial_{\mu}A^{\mu}$ and 2) $(\partial_{\mu}A^{\mu})^2$.
For the rest of it, you're going to have to be careful about covariant and contravariant indices of $\partial_\mu$ and $A^{\mu}$. Since you are already going with $A$ having vector indices, I think you should stick to that as your "base" version of $A$ for now. Now, you correctly seem to note that lowering and raising is done with $\eta_{\mu\nu}$ and its inverse $\eta^{\mu\nu}$ (which are the same matrix for Minkowskian coordinates). Thus you will find, for example, that $A_{\mu}A^{\mu} = \eta_{\mu\nu}A^{\mu}A^{\nu} = -(A^{0})^{2} + (A^{1})^{2} + (A^{2})^{2} + (A^{3})^{2}$, and you can extend this to most of the other examples you cite.
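To make the index gymnastics concrete, here is a small numerical sketch (my addition, not part of the answer; the component values are arbitrary) using NumPy's `einsum` with the $(-,+,+,+)$ Minkowski metric:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric eta_{mu nu}, signature (-,+,+,+)
A = np.array([2.0, 1.0, 0.0, 3.0])    # components of a contravariant vector A^mu

# Lower the index: A_mu = eta_{mu nu} A^nu
A_lower = np.einsum('mn,n->m', eta, A)

# Contract: A_mu A^mu = -(A^0)^2 + (A^1)^2 + (A^2)^2 + (A^3)^2
contraction = np.einsum('m,m->', A_lower, A)
assert np.isclose(contraction, -A[0]**2 + A[1]**2 + A[2]**2 + A[3]**2)
print(contraction)  # 6.0
```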
3) Applying $\partial_\mu$ to a scalar $\phi$ gives you a result $\partial_{\mu}\phi$ that is best interpreted as a one-form, not as a scalar: you have the free index $\mu$ floating around, and if you change coordinates, you will have to apply the chain rule to get the right answer for your new $\partial_{\mu}\phi$. This is why you have to apply the inverse metric to "square" $\partial_{\mu}\phi$.
4) If you are taking variations of vector valued quantities, you should leave a dummy index for your variational term:
$\begin{equation} \frac{\delta}{\delta A^{\mu}}A^{\nu} = \delta^{\nu}{}_{\mu} \end{equation}$
This will give you terms like
$\begin{equation} \frac{\delta}{\delta A^{\alpha}} A^{\mu}A_{\mu} = \frac{\delta}{\delta A^{\alpha}}(\eta_{\mu\nu}A^\mu A^{\nu}) = \eta_{\mu\nu}\delta^{\mu}_{\alpha}A^{\nu} +\eta_{\mu\nu}A^{\mu}\delta^{\nu}_{\alpha} = 2 A_{\alpha} \end{equation}$
which should seem pretty reasonable from your calculus I based intuition. Similarly, you should probably think of $\frac{\delta}{\delta \partial_{\alpha}A_{\beta}}\partial_{\mu}A_{\nu} = \delta^{\alpha}_{\mu}\delta^{\beta}_{\nu}$. If you're taking a variation of the up version of a quantity with respect to the down version of the quantity, just use the metric to raise/lower the target before taking the variation, and fix everything after the fact using raising and lowering conventions and the Kronecker deltas.
-
Thank you so much, that was most helpful and I am finally starting to makes sense of it. I have added a #4 to my questions, I hope I am not abusing your kindness. – kdfc Oct 12 '12 at 21:48
@kdfc: there's a lazy attempt at an answer. – Jerry Schirmer Oct 12 '12 at 22:08
Again, thank you very much! – kdfc Oct 12 '12 at 22:30
http://physics.stackexchange.com/questions/45404/quantization-of-nambugoto-action-in-multiples-of-plancks-constant
# Quantization of Nambu–Goto action in multiples of Planck's constant?
Isn't it possible to quantize the Nambu–Goto action in multiples of Planck's constant, $$\mathcal{S} ~=~ -\frac{1}{2\pi\alpha'} \int \mathrm{d}^2 \Sigma \sqrt{\dot{X}^2 - X'^2}~=~nh,\qquad n \in\mathbb{Z}~?$$
-
## 1 Answer
First, quite generally, it is not true that the action $S$ is ever required to be a multiple of Planck's constant $h$. What quantum mechanics implies is pretty much exactly the opposite thing. The formulation of quantum mechanics (any quantum mechanical theory) that uses the action is the Feynman path integral where the action enters via the exponent $$\exp(iS/\hbar)$$ Note that the exponential doesn't change if we additively change $$S \to S+2\pi \hbar = S+h$$ For this reason, the action in quantum mechanics is defined "modulo $h$": it's the fractional part of $S/h$ that may be nonzero and that is important, while the integer part is completely unphysical! You got it upside down. $S$ may be $3.76h$ or $3.24h$ and the difference matters; however, the difference between $S=6.64h$ and $S=8.64h$ doesn't affect (quantum) physics, a fact that's very important e.g. in Chern-Simons theories.
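A throwaway numerical check of this periodicity (my own illustration, in units where $\hbar=1$) makes the point concrete: shifting the action by $h=2\pi\hbar$ leaves the path-integral weight untouched, while a fractional shift does not.

```python
import cmath
import math

hbar = 1.0
h = 2 * math.pi * hbar

def weight(action):
    """Path-integral weight exp(i S / hbar) of a classical history."""
    return cmath.exp(1j * action / hbar)

S = 3.76 * h
assert cmath.isclose(weight(S), weight(S + h))        # S and S + h are physically identical
assert not cmath.isclose(weight(S), weight(S + h/2))  # a fractional shift changes the phase
```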
Second, yes, when one is doing it right, the (theory defined by the) Nambu-Goto action may be quantized and what one gets is known as string theory (well, the first insights of string theory about a single free string, and so on). But it would be extremely awkward and ambiguous to quantize the action with the square roots etc. "directly". The clever way is to first introduce an auxiliary world sheet metric $h_{\alpha\beta}$ whose equations of motion say that it is the induced metric from the spacetime $$h_{\alpha\beta} = K\cdot \partial_\alpha X^\mu\cdot \partial_\beta X_\mu$$ where the overall normalization $K$ is deliberately left ambiguous and using $h$, the Nambu-Goto action may be rewritten as $$S= -\frac{1}{2\pi\alpha'} \int d^2 \sigma \sqrt{-\det h}\, h^{\alpha\beta}\partial_\alpha X_\mu\partial_\beta X^\mu$$ You may check that the previous equation for $h_{\alpha\beta}$ with a certain $K$ automatically follows from this action by varying $h$. And if you substitute this value of $h$ back to the action, you get the Nambu-Goto action. So by integrating out $h_{\alpha\beta}$, you get the Nambu-Goto action back. So they're physically equivalent!
The advantage of my form, the Polyakov action, is that one may always choose world sheet coordinates so that locally $$h_{\alpha\beta}=K'\delta_{\alpha\beta}$$ and with such a simple form of the metric, the Polyakov action is just a nice free set of Klein-Gordon fields that are easy to quantize! The dependence on the scaling factor $K'$ drops between the square root of the determinant and the inverse metric in the action, a fact ("Weyl symmetry") that only holds in 2 dimensions. And the dynamics of the new Polyakov action – lots of harmonic oscillators – is still exactly equivalent physically to the original Nambu-Goto action. It seems waterproof that the nice Hilbert space one gets in this way is the only right way to "quantize the Nambu-Goto action/theory".
Let me emphasize that the trick with the auxiliary metric works for the classical equations, too. The original Nambu-Goto action leads to seemingly complicated, non-polynomial differential equations of motion. But when one looks at the situation a bit carefully and uses clever methods, he finds out that the system is solvable – and it's intrinsically a set of ordinary wave equations.
-
isn't in general $\mathcal{S}\ge h$? – Neo Nov 29 '12 at 11:56
Not necessarily. The action is something defined for classical histories and of course that you may have histories with an arbitrarily small $S$. Think about mechanics, $S = \int dt\, m(\dot x)^2/2 - V(x)$. If the speed is small and almost zero except for a short period of time, the action will be smaller than $h$. But of course, if $S\lt h$ or $S\sim h$, the quantum phenomena and the mixing with other histories with other $S$ will be very important and classical physics inapplicable. – Luboš Motl Nov 29 '12 at 12:17
The only right statement (inequality) of the kind you suggest is that $S\gg h$ is needed for the classical approximation to become valid. – Luboš Motl Nov 29 '12 at 12:19
Wow, this is a very nice derivation of and explanation what the Polyakov action means :-)! – Dilaton Feb 28 at 12:51
http://unapologetic.wordpress.com/2010/10/20/the-character-table-of-a-group/
# The Unapologetic Mathematician
## The Character Table of a Group
Given a group $G$, Maschke’s theorem tells us that every $G$-module is completely reducible. That is, we can write any such module $V$ as the direct sum of irreducible representations:
$\displaystyle V=\bigoplus\limits_{i=1}^km_iV^{(i)}$
Thus the irreducible representations are the most important ones to understand. And so we’re particularly interested in their characters, which we call “irreducible characters”.
Of course an irreducible character — like all characters — is a class function. We can describe it by giving its values on each conjugacy class. And so we lay out the “character table”. This is an array whose rows are indexed by inequivalent irreducible representations, and whose columns are indexed by conjugacy classes $K\subseteq G$. The row indexed by $V^{(i)}$ describes the corresponding irreducible character $\chi^{(i)}$. If $k\in K$ is a representative of the conjugacy class, then the entry in the column indexed by $K$ is $\chi^{(i)}_K=\chi^{(i)}(k)$. That is, the character table looks like
$\displaystyle\begin{array}{c|ccc}&\cdots&K&\cdots\\\hline\vdots&&\vdots&\\V^{(i)}&\cdots&\chi^{(i)}_K&\cdots\\\vdots&&\vdots&\end{array}$
By convention, the first row corresponds to the trivial representation, and the first column corresponds to the conjugacy class $\{e\}$ of the identity element. We know that the trivial representation sends every group element to the $1\times 1$ identity matrix, whose trace is $1$. We also know that every character’s value on the identity element is the degree of the corresponding representation. We can slightly refine our first picture to sketch the character table like so:
$\displaystyle\begin{array}{c|cccc}&\{e\}&\cdots&K&\cdots\\\hline V^\mathrm{triv}&1&\cdots&1&\cdots\\\vdots&\vdots&&\vdots&\\V^{(i)}&\deg\left(V^{(i)}\right)&\cdots&\chi^{(i)}_K&\cdots\\\vdots&\vdots&&\vdots&\end{array}$
We have no reason to believe (yet) that the table is finite. Since $G$ is a finite group there can be only finitely many conjugacy classes, and thus only finitely many columns, but as far as we can tell there may be infinitely many inequivalent irreps, and thus infinitely many rows. Further, we have no reason to believe that the rows are all distinct. Indeed, we know that equivalent representations have equal characters — they’re related through conjugation by an invertible intertwinor — but we don’t know for sure that inequivalent representations must have distinct characters.
As an example, we can start writing down the character table of $S_3$. We know that conjugacy classes in symmetric groups correspond to cycle types, and so we can write down all three conjugacy classes easily:
$\displaystyle\begin{aligned}K_1&=\left\{e\right\}\\K_2&=\left\{(1\,2),(1\,3),(2\,3)\right\}\\K_3&=\left\{(1\,2\,3),(1\,3\,2)\right\}\end{aligned}$
We know of two irreps offhand — the trivial representation and the signum representation — and so we’ll start with those and leave the table incomplete below that:
$\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline V^\mathrm{triv}&1&1&1\\V^\mathrm{sgn}&1&-1&1\\\vdots&\vdots&\vdots&\vdots\end{array}$
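As a quick cross-check of the rows so far (my addition; plain Python, no group-theory library), one can enumerate $S_3$ directly, sort its elements into conjugacy classes by cycle type, and print the class sizes together with the trivial and signum character values:

```python
from itertools import permutations

group = list(permutations((1, 2, 3)))  # S_3 as images of (1, 2, 3)

def cycle_type(p):
    """Sorted tuple of cycle lengths of the permutation p."""
    seen, lengths = set(), []
    for start in (1, 2, 3):
        if start in seen:
            continue
        length, current = 0, start
        while current not in seen:
            seen.add(current)
            current = p[current - 1]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

def sgn(p):
    """Signum character: (-1)^(n - number of cycles)."""
    return (-1) ** sum(l - 1 for l in cycle_type(p))

classes = {}
for p in group:
    classes.setdefault(cycle_type(p), []).append(p)

for ctype, elems in sorted(classes.items()):
    print(ctype, 'size', len(elems), 'trivial', 1, 'sgn', sgn(elems[0]))
# (1, 1, 1) size 1 trivial 1 sgn 1
# (1, 2) size 3 trivial 1 sgn -1
# (3,) size 2 trivial 1 sgn 1
```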
http://math.stackexchange.com/questions/271010/mnemonic-for-the-fact-that-a-rightleft-adjoint-functor-preserves-limitscolimi
Mnemonic for the fact that a right(left) adjoint functor preserves limits(colimits)
A right adjoint functor preserves limits. Dually a left adjoint functor preserves colimits. I often forget which is which. Of course, you can look up a book on category theory or use internet. But it's nice if there is a good mnemonic method to remember these facts.
-
I've seen people refer to this fact as RAPL (Right Adjoints Preserve Limits), which is silly enough to be memorable, at least to me. – Miha Habič Jan 5 at 16:09
3 Answers
Just remember one particular instance of left and right adjoints, for example left adjoint $F$ to the forgetful functor $U$ from groups to sets. $F(X)$ is the free group on the set $X$.
The forgetful functor $U$ obviously preserves products but not coproducts, whereas $F$ obviously preserves coproducts but not products.
-
More simply, $- \times X : \mathsf{Set} \to \mathsf{Set}$ is left adjoint to $\mathrm{Hom}(X,-)$ and preserves colimits (for example $(A \cup B) \times X = A \times X \cup B \times X$), but not limits (for example $\star \times X \neq \star$). – Martin Brandenburg Jan 5 at 18:25
I use the additional mnemonic that "LEFT" and "FREE" are both 4-letter words ("RIGHT" and "FORGETFUL" aren't). – Romuald Jan 7 at 10:51
I remember this (and related facts) as follows: A left adjoint $F$ is characterized by morphisms on $F(x)$, and a colimit is characterized by morphisms on it. Dually, a right adjoint $G$ is characterized by morphisms into $G$, and a limit is characterized by morphisms into it. So basically I just reprove it all the time, after all it is only one line:
$(\mathrm{colim}_i F(x_i),-) = \mathrm{lim}_i (F(x_i),-) = \mathrm{lim}_i (x_i,G(-))=(\mathrm{colim}_i x_i,G(-))=(F(\mathrm{colim}_i x_i),-)$
-
The easiest way to remember which is which is to work through the proof that, say, left adjoints preserve colimits. Here's a quick sketch, with $F \dashv U$:
\begin{align} \textrm{Hom}(F \varinjlim A_\bullet, B) \cong \textrm{Hom}(\varinjlim A_\bullet, U B) & \cong \varprojlim \textrm{Hom}(A_\bullet, U B) \\ & \cong \varprojlim \textrm{Hom}(F A_\bullet, B) \cong \textrm{Hom}(\varinjlim F A_\bullet, B) \end{align}
Unfortunately there is no really good mnemonic in general because the use of left/right is inconsistent. For example:
• Right adjoints preserve limits, so they are left exact.
• Right derived functors are left Kan extensions (when working with derived categories).
• Monomorphisms constitute the right class of an orthogonal factorisation system in regular categories, but they are preserved by left exact functors (and so by right adjoints).
In the end the only way to be sure about which is which is to remember whether the thing in question appears on the left or on the right in the diagram invoked in the definition. So, for example:
• Left adjoints are called ‘left’ because they appear on the left of the $\to$ in the bijective correspondence $$\frac{F A \to B}{A \to U B}$$
• Left exact functors are ‘left’ because they preserve left exact sequences, which are ‘left’ because they are the left part of a short exact sequence: $$0 \longrightarrow A' \longrightarrow A \longrightarrow A''$$
• Left derived functors are ‘left’ because they extend an exact sequence to the left: $$\cdots \longrightarrow L^1 F A'' \longrightarrow F A' \longrightarrow F A \longrightarrow F A'' \longrightarrow 0$$
• Left Kan extensions are ‘left’ because the functor that takes a functor to its left Kan extension is the left adjoint of the precomposition functor.
-
http://math.stackexchange.com/users/2479/hagen?tab=activity
# Hagen
3,347 reputation · 49 badges · member for 2 years, 7 months · seen yesterday · profile views 247
# 221 Actions
| Date | Action | Detail |
|--------|----------|--------|
| May 15 | comment | Non-isomorphic simple extensions of the same degree of a field of positive characteristic: "Your approach will not work, because over the finite field $F_p$ (I assume you mean the field with $p$ elements) all irreducible polynomials are separable. However if you replace $F_p$ with a non-perfect field, then this works. It then remains to treat the case of a separably closed field $K$, that is a field that possesses only purely inseparable extensions ..." |
| May 8 | answered | Integral closure $\tilde{A}$ is flat over $A$, then $A$ is integrally closed |
| May 2 | answered | Linear Transformations: Scaling along the line $y=x$ |
| Apr 30 | revised | every field of characteristic 0 has a discrete valuation ring? (added 1 character in body) |
| Apr 30 | comment | every field of characteristic 0 has a discrete valuation ring?: "The answer to your question is 'No'. The reals do not carry discrete valuations for almost the same reason as for the complex numbers: one can take $n$-th roots of positive elements for every $n\in\mathbb{N}$." |
| Apr 29 | revised | every field of characteristic 0 has a discrete valuation ring? (added 552 characters in body) |
| Apr 29 | comment | every field of characteristic 0 has a discrete valuation ring?: "I do not agree with your statement: every field $K$ has a proper subdomain $R$ such that $K$ is the fraction field of $R$. Take a transcendence basis $T$ of $K$ over the prime field $P$ and consider the integral closure $R$ of the polynomial ring $P[T]$ in $K$." |
| Apr 29 | comment | $\mathbb A^n(k)$ and $\mathbb A^n(k)\setminus \{0\}$ are not homeomorphic |
| Apr 28 | revised | every field of characteristic 0 has a discrete valuation ring? (added 832 characters in body) |
| Apr 28 | comment | every field of characteristic 0 has a discrete valuation ring?: "In my answer I was assuming that the DVR has fraction field equal to $\mathbb{C}$. Otherwise the statement has a trivial proof because every field of characteristic $0$ contains the rationals." |
| Apr 27 | answered | every field of characteristic 0 has a discrete valuation ring? |
| Apr 18 | comment | Does every algebraically closed field contain the field of complex numbers?: "The cardinality of a transcendence basis of $\mathbb{C}/\mathbb{Q}$ equals the cardinality of $\mathbb{R}$." |
| Apr 18 | answered | Does every algebraically closed field contain the field of complex numbers? |
| Apr 16 | answered | Valuation but not Noetherian Rings |
| Apr 16 | answered | Value range of normalization methods? min-max, z-score, decimal scaling |
| Apr 15 | comment | The field of Laurent series over $\mathbb{C}$ is quasi-finite: "I see how one can avoid general theory at various points. In particular one can specialize the proof for the fact that the Galois group is cyclic to the present particular case. However in this way one will arrive at a rather lengthy verification. And at the moment I don't see how to avoid using something like Hensel's lemma at the beginning of the whole argument ..." |
| Apr 15 | answered | The field of Laurent series over $\mathbb{C}$ is quasi-finite |
| Apr 12 | answered | Non-trivial valuation of $\mathbb R$ |
| Apr 10 | awarded | Custodian |
| Apr 10 | reviewed | Reject suggested edit on Isomorphism or non-isomorphism of two specific local rings |
http://mathoverflow.net/questions/81455/models-of-ad-different-from-l-mathbbr
## Models of $AD$ different from $L(\mathbb{R})$
Today it is known that $AD$ (the axiom of determinacy of games played with integers) is true in $L(\mathbb{R})$. Has it been proven that this is the only model in which $AD$ is true? Have other models been identified in which $AD$ is true? Of course, I am asking about genuine models, since we can force over $L(\mathbb{R})$ and still keep enough $AD$. A related question is the following: how different from $L(\mathbb{R})$ is the universe $V$? Thx.
-
There are many models where AD holds, but of course this depends on what background assumptions you allow. You can get models of the form $L(\Gamma,{\mathbb R})$ where $\Gamma$ is a collection of sets of reals, for example, and the larger $\Gamma$ is, the more interesting the model you obtain. – Andres Caicedo Nov 20 2011 at 22:02
I have to say that I am a bit puzzled by the statement "today it is known that AD is true in $L(\mathbb R)$". I understand what you mean, under large cardinal assumptions AD is true in $L(\mathbb R)$, but most people, no matter how close to the Berkeley school of set theory, usually feel the need to mention that large cardinal assumptions are necessary here. – Stefan Geschke Nov 21 2011 at 13:41
## 2 Answers
I'm not sure what you mean by "genuine models", but let me comment on how different $L(\mathbb R)$ is from $V$. They look very different to me. Partly this is because the axiom of choice holds in $V$ and fails rather spectacularly in $L(\mathbb R)$. For example, AD implies that $\aleph_n$ is singular whenever $3\leq n\leq\omega$, so the cardinal structure of $L(\mathbb R)$ looks very different from that of $V$. Even where they agree, for example at $\aleph_1$ (which is the same in $L(\mathbb R)$ as in $V$), there's a big difference as to what subsets are present. AD implies that the club filter on $\aleph_1$ is an ultrafilter, so all of $V$'s stationary co-stationary subsets of $\aleph_1$ are missing from $L(\mathbb R)$.
A more philosophical (by which I mean imprecise and not mathematical) reason to think $L(\mathbb R)$ differs greatly from $V$ is that it seems entirely implausible to me that the whole universe should be constructible from any single set. I expect to see more and more complexity the higher up I go in the cumulative hierarchy --- and not just complexity of ordinals.
-
Thx for your answer. I should have made my second question more precise, sorry about that. What I wanted to mean is what difference is there in terms of truth. For instance do they agree for $\Sigma_3$ or $\Pi_2$ statements? But still I am more interested in knowing if there is some other model in which $AD$ is true. By genuine I mean, not a model constructed by forcing over $L(\mathbb{R})$ while keeping $AD$. – alephomega Nov 20 2011 at 21:49
To answer the first question, like Andres mentioned, larger models $L(\Gamma,\mathbb{R})$ of AD can behave quite differently from $L(\mathbb{R})$. For example they can satisfy AD$_{\mathbb{R}}$, the axiom of determinacy for Gale-Stewart games played on $\mathbb{R}$, which fails in $L(\mathbb{R})$. (This is because it implies the Axiom of Uniformization, i.e., that every binary relation on $\mathbb{R}$ contains a function with the same domain, whereas in a model such as $L(\mathbb{R})$ where every set is ordinal-definable from a real, the set of pairs $(x,y)$ such that $y$ is not ordinal-definable from $x$ cannot be uniformized.)
To add to Andreas's answer to the second question, there is a $\Sigma_1$ statement in the parameter $\mathbb{R}$ that is true under ZFC but false under AD, namely the existence of an injection $\omega_1 \to \mathbb{R}$. (This is easily seen to be inconsistent with a countably complete nonprincipal ultrafilter on $\omega_1$.)
-
http://physics.stackexchange.com/questions/48672/question-on-inflation/49830
# Question on inflation
I have two particular questions regarding the inflationary scenario. They are:
1.) What is the physical origin of the inflaton field? 2.) Why does the potential of the inflaton field have its particular form?
-
## 3 Answers
As @Rennie states, no-one knows what the inflaton is. The current state of affairs is that it is generally accepted that a period of exponential expansion took place during the early universe. This explains many of the features of the observable universe that are otherwise extremely hard to explain. One of the big industries in cosmology is to try to build a sensible and well motivated model of inflation that agrees with the experimental data. To date there are hundreds of such models, but many of them share a common feature which is that the inflation is produced by the potential of a scalar field. When this is the case, the particular scalar field in the model is known as the inflaton. Examples of models include:
1. Higgs inflation, where the Standard Model Higgs boson plays the role of the inflaton. Since the Higgs is the only fundamental scalar to have been observed so far, it is an important question whether it could be the inflaton.
2. GUT inflatons. In grand unified theories, there are a lot of extra scalar fields which must be present to break the GUT symmetries, these could play the role of the inflaton.
3. SUSY inflation. In Supersymmetric models, scalar fields abound and many extensions of the MSSM require additional scalar fields, any of these could play the role of the inflaton.
There are many many more models, all have various pros and cons. The important point is whether the scalar field in the model has a potential that could produce inflation and how the predictions from the specific model agree with experiment. As CMB data gets more refined, some models will be ruled out but unambiguously identifying which scalar field in nature is the inflaton is a long long way off.
The potential of the inflaton field has to have a particular form to produce inflation. Inflation requires a negative-pressure vacuum energy density. This is generically produced by a scalar field as long as $\dot\phi^2 < V(\phi)$. So the requirement boils down to having the scalar field sit at a point where the potential takes a large value and is not too steep. This can either be a local minimum (false vacuum), from which the field eventually tunnels out, thus ending inflation, or a flat region where the field slowly rolls down the potential. Slow-roll inflation is now generally preferred, since false-vacuum inflation has problems with reheating.
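To make the flatness requirement quantitative, here is a small symbolic sketch (my addition; the quadratic potential and the standard slow-roll parameter definitions $\epsilon=\frac{M_p^2}{2}(V'/V)^2$ and $\eta=M_p^2 V''/V$ are assumptions, not taken from this answer):

```python
import sympy as sp

phi, m, Mp = sp.symbols('phi m M_p', positive=True)
V = sp.Rational(1, 2) * m**2 * phi**2  # hypothetical quadratic inflaton potential

# Slow-roll parameters: inflation requires eps << 1 and |eta| << 1.
eps = Mp**2 / 2 * (sp.diff(V, phi) / V)**2
eta = Mp**2 * sp.diff(V, phi, 2) / V

print(sp.simplify(eps))  # 2*M_p**2/phi**2
print(sp.simplify(eta))  # 2*M_p**2/phi**2

# Both are small only when phi >> M_p: the field must sit far up a
# sufficiently flat potential, and inflation ends once phi ~ M_p.
```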
Remember, at the moment, producing models of inflation is very much in the realm of model building and there are still models that do not even invoke an inflaton field.
-
Nice overview, I like this +1 – Dilaton Jan 10 at 11:43
The source of inflation is the inflaton, but no-one knows what the inflaton is!
The inflaton potential is calculated by looking at the universe and fiddling with the potential to get something that fits observations.
In other words, there are no fundamental theories that predict the inflaton properties. At the moment the theory is purely phenomenological. However this does not mean it lacks predictive power, as we get more information out of the theory than we have to put in as parameters. Hopefully the Planck satellite will give us more information about the theory.
-
Data-fitting indeed, and a 'free lunch' (get something out of nothing); both concepts are contrary to the physics way of modeling. – Helder Velez Jan 9 at 10:31
It is not true that there exist no theories which predict inflatons by different mechanisms. I guess the question is asking exactly about this theoretical approach too, together which complements experimental attempts to reconstruct the potential of the inflaton from data. – Dilaton Jan 9 at 12:29
I attended a lecture by Mukhanov and he admitted, that he does not know what the nature of the inflaton field is really about. – Hamurabi Jan 9 at 17:12
Hi John. Err... I approved an invalid edit. It corrected all the `inflaton` to `inflation`. Sorry for that. If I see it approved by others, I'd do a rollback :-) – Ϛѓăʑɏ βµԂԃϔ Jan 10 at 6:23
There is no need for an 'inflaton/inflationary scenario' at all.
The inflation era is needed in the BBT framework, where space expands, and is not needed in the opposite viewpoint: particles shrink through time.
Assume that particles were created a long time ago, evenly all over the universe, at some point in the past, due to a sudden and general change of state of the vacuum.
In the beginning the atoms were much larger than the ones we see around us (the ones that we use to measure distance/mass/time and anything else but simple counting).
If they are larger, then the redshift of light is a natural effect, provided c is constant, and the distant galaxies are not moving away at all (except for local/peculiar motions).
As time goes by the particles give back their energy to the vacuum (it is established that electrostatic/gravitic energy spreads away from the particles) and, like any other physical process where the effect is proportional to the source, such as radioactivity, this obeys an exponentially decreasing law.
Thus we have a physically motivated exponential decrease of something, as opposed to an 'exponential increase of the amount of space'.
The theory is fully derived in this paper at viXra: A self-similar model of the Universe unveils the nature of dark energy, and it matches all the fundamental measures of the universe. Dark energy, inflation, the cosmological constant, etc. are necessary artifacts of a bad model (BBT).
Nothing in physics says that the atom is an invariant. We work in a loop: define units of measure with an atom (and c) and then calculate the size of anything. In conclusion: the size/mass of any object is some fraction/multiple of an atom, and we are blind to any variation of the unit of measure. (See Poincaré's sphere-world.)
An example:
Suppose we want to measure the change in length of a copper bar in function of temperature change. If we put a graduated copper bar inside the oven we are deceived and we will conclude: here's proof that the heat has no effect on the length of the bodies.
edit add:
Second law of thermodynamics, gravity and a homogeneous universe: start at a temperature of 0 K, move one atom, and the temperature will rise. Video: see at 8min20 in Stephen_Hawking__The_Story_of_Everything (start at 7min); the equations are posted here.
-
I suspect this is wrong but to my regret I don't know enough physics to put my finger on why. Therefore although I want to downvote it I can't ... :( Would be nice to hear an opposing argument! – Eugene Seidel Jan 11 at 0:42
Specifically, while Velez links to the WP article on the Poincaré sphere-world, that article says: `How will this world look to inhabitants of this sphere? ... Supposing the inhabitants were to view rods believed to be rigid, or measure distance with light rays. They would find that a geodesic is not a straight line, and that the ratio of a circle’s circumference to its radius is greater than 2π.` Can a similarly devastating refutation be found for the shrinking-particle idea? – Eugene Seidel Jan 11 at 1:47
O.K., I see a couple of possible attacks: (1) particles shrink by "giving back energy to the vacuum": how? So each particle loses some of its mass to "electrostatic/gravitic energy"? Is this a continuous process? But according to QM, energy levels are discrete. (2) Shrinkage "obeys an exponential decreasing law": so the rate of shrinkage is decreasing? But we observe the universe to be expanding at an increasing, not decreasing, rate. (3) If particles shrink and the distance between each of them grows then how does matter hold together? Because the extra "electrostatic/gravitic energy" ... – Eugene Seidel Jan 11 at 7:23
... released due to "shrinkage" exactly compensates for the greater distances to be bridged? Would that not violate the Second Law of Thermodynamics? O.K., time for a pro to step in and do better than my feeble efforts... – Eugene Seidel Jan 11 at 7:24
Everyone suspects I'm wrong and downvotes, as you did, but no one has offered a single counterargument. – Helder Velez Jan 12 at 18:20
http://math.stackexchange.com/questions/4537/algebraic-topology-need
# Algebraic Topology Need!
In India, generally during the graduate years, we follow a course work pattern, unlike many places in the U.S where students are exposed to research during their undergraduate years itself.
As graduate students, we have basic courses like Algebra, Analysis, Rings and Modules, Measure Theory, Topology, Functional Analysis, etc. Generally topology is one subject in which I don't find that much interest. But in some universities students are forced to take Topology and Algebraic Topology during their graduate years, and I have seen many students facing trouble, as they have to study a subject which is not among their interests. My question would be: for a student whose research is in Analytic and Algebraic Number Theory, does he need to know Algebraic Topology?
-
To study number theory, one needs to know as much mathematics as possible. These days there's a lot of cohomology around, and one first meets cohomology in algebraic topology. – Robin Chapman Sep 13 '10 at 14:43
@Robin Chapman: There are no easy measures, right! OK, I don't see algebraic topology coming into effect, at least in analytic number theory! – anonymous Sep 13 '10 at 14:46
@Chandru1, what Robin has in mind, probably, is the fact that "cohomological ideas", which one usually first encounters in the context of algebraic topology (and this is a good thing, really, because there they are most palpable), are surely relevant to all forms of number theory nowadays. – Mariano Suárez-Alvarez♦ Sep 13 '10 at 14:53
Chandru, I didn't know you were a professor...congratulations. If you stick to the most "hard-line" analytic number theory, you won't see much cohomology, but if to you analytic number theory embraces modular forms, or asymptotically counting solutions of Diophantine equations, you won't be able to avoid it. – Robin Chapman Sep 13 '10 at 15:07
@Robin: Yes, equations of that form are related to the study of elliptic curves, where Galois cohomology plays an important part! Is this what you want to say? – anonymous Sep 13 '10 at 15:12
## 1 Answer
Well, I'm not an expert in Number Theory and I don't know if your interests may include in the future things like Weil conjectures and $\zeta$-functions for algebraic varieties, or on the contrary, you'll try to avoid any contact with Algebraic Geometry and Homological Algebra.
But, just in case your interests lead you towards the first set of issues, you'll have a nasty encounter with something called $\ell$-adic and étale cohomologies. I wouldn't like to be in your shoes at that moment, without having seen before any other simpler cohomology (such as the singular cohomology you're going to learn in Algebraic Topology) and the classical Lefschetz fixed-point theorem.
More generally, unless you're not going to use Algebraic Geometry or Homological Algebra at all in your research, or your use of the former is limited to the most classical aspects, you'll have to be familiar with sheaf cohomology and derived functors such as $\mathrm{Ext}$ and $\mathrm{Tor}$. It isn't impossible to learn sheaf cohomology without knowing a word of singular cohomology, and the same applies to derived functors, but you'll clearly have a tremendous gap at that point.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960659921169281, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/73557/blowing-up-a-subvariety-what-can-happen-to-the-singular-locus/73569
|
## Blowing up a subvariety - what can happen to the singular locus?
Let $X$ be a variety defined over a number field $k$. If I blow-up along some arbitrary subvariety of $X$, what are the possible outcomes for the dimension of the singular locus of the variety? If the subvariety lies outside the singular locus of $X$, then it stays the same, if it is carefully chosen, it might go down. Can it go up?
To be more specific, my variety is a high dimensional hypersurface, and the subvariety I am blowing up is a linear space of much smaller dimension than the singular locus. I don't know if this changes the situation.
I have a feeling this question might be more suited to stackexchange, but it didn't spark much interest over there http://math.stackexchange.com/questions/53676/blowing-up-a-subvariety-what-can-happen-to-the-singular-locus. Apologies for wasting time if so.
-
## 1 Answer
Any birational map $\pi:X'\to X$ is the blow-up of some ideal sheaf on $X$, so in general one must expect singularities on $X'$, even if the ideal is reduced (as you assume).
As a concrete example, let $X=\mathbb{A}^n$ and blow-up the complete intersection subvariety given by the ideal $I=(f,g)\subset k[x_1,\ldots,x_n]$. Then the blow-up of $X$ is the Proj of the Rees algebra $R[It]$, which is given by $k[x_1,\ldots,x_n,S,T]/(fS-gT)$. By choosing $f$ and $g$ appropriately one can produce varieties with singular locus of high dimension.
For your specific example, when $Y$ is a linear space of small dimension, I don't know if the above can happen, but there are certainly cases where the dimension of the singular locus will be unchanged after the blow-up, (e.g when $Y$ a point on a singular surface).
-
Thanks, that's exactly what I was looking for. – samian86 Aug 25 2011 at 13:01
No problem. Actually, I think it might be possible to say more about your specific case. In that case the Rees algebra is given by $k[x_0,\ldots,x_n,y_1,\ldots,y_k]/(f,x_iy_j-x_jy_i,\ldots)$ and it seems doable to investigate the singularities in each chart $y_i=1$ by hand. Perhaps you can show that the dimension of the singular locus does indeed stay the same for some choices of $f$. – J.C. Ottem Aug 25 2011 at 15:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232629537582397, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/93656?sort=votes
|
## Minimal graphs with a prescribed number of spanning trees
Since it is long ago that Erdős died and MathOverflow is the second-best alternative to him (for discussing personal problems), I'd like to start a fruitful discussion about the following problem, which I find very interesting.
Let $n \geq 3$ be an integer and let $\alpha(n)$ denote the least integer $k$ such that there exists a simple graph on $k$ vertices having precisely $n$ spanning trees. What is the asymptotic behaviour of $\alpha$ ?
Motivation. I was introduced to the question through this post on Dick Lipton's blog. As it turns out, the question was posed already in 1970 by the Czech graph theorist J. Sedlacek (On the minimal graph with a given number of spanning trees, Canad. Math. Bull. 13 (1970) 515–517)
What is known?
Sedlacek was able to show that for every (not so) large $n$
$\alpha(n) \leq \frac{n+6}{3}$ if $n \equiv 0 \pmod{3}$ and $\alpha(n) \leq \frac{n+4}{3}$ if $n \equiv 2 \pmod{3}.$
Following is a summary of what I was able to find out.
Since the equation $n = ab+ac+bc$ is solvable for integers $1 \leq a < b < c$ for all but a finite number of integers $n$ (see this post) it can be deduced (by considering the graph $\theta_{a,b,c}$ which has $ab+ac+bc$ spanning trees) that for large enough $n \not \equiv 2 \pmod{3}$
$$\alpha(n) \leq \frac{n+9}{4}.$$
Moreover, the only fixed points of $\alpha$ are 3, 4, 5, 6, 7, 10, 13 and 22.
By generalizing the approach and considering the graphs $\theta_{x_1,\ldots,x_k}$ one could try to lower the constant in the fraction of the inequality by an arbitrary amount. As it turns out, it is not known whether every large $n$ is then expressible as $n = x_1\cdots x_k(\frac{1}{x_1} + \cdots + \frac{1}{x_k})$ for suitable integers $1 \leq x_1 < \cdots < x_k.$
Even if that method would work out, the bound would most probably still be suboptimal. According to the graph (created by randomly generating graphs and calculating the number of their spanning trees) it seems reasonable to conjecture that
Conjecture.
$$\alpha(n) = o(\log{n})$$
The conjecture is clearly justifiable for highly composite numbers $n$ (consider the graph obtained after identifying a common vertex of the cycles $C_{x_1},\ldots,C_{x_k}$ for suitable odd factors $x_1, \ldots,x_k$ of $n$), but that justification fails for $n$'s that are prime.
It is evident to me that I lack the tools necessary for attacking this conjecture so any kind of suggestions (where to look for a possible answer, what kind of tools should I learn..) related to it are very welcome!
Edit. If anyone is willing to work on this problem, I'd be glad to collaborate since I'd benefit much from it!
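For concreteness, here is a minimal Python sketch of the kind of computation behind the plot mentioned above: counting spanning trees via Kirchhoff's matrix-tree theorem. The graph encoding and the `spanning_tree_count` helper are my own illustrative choices, not the code actually used for the plot.

```python
import numpy as np

def spanning_tree_count(edges, n):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian L = D - A."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] += 1
        A[j, i] += 1
    L = np.diag(A.sum(axis=1)) - A
    return int(round(np.linalg.det(L[:-1, :-1])))  # delete last row/column

# theta_{1,2,3}: vertices 0 and 1 joined by paths of lengths 1, 2 and 3;
# the formula ab + ac + bc predicts 1*2 + 1*3 + 2*3 = 11 spanning trees
edges = [(0, 1), (0, 2), (2, 1), (0, 3), (3, 4), (4, 1)]
print(spanning_tree_count(edges, 5))  # 11
```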
-
Suppose you tackle it from the other end: take complete graphs on alpha(n) vertices which have n spanning trees, remove edges, and see what coverage (values of different n) you get. I don't know about asymptotics, but you may be able to show your conjecture holds for all n off of a set of density 0. Gerhard "Ask Me About System Design" Paseman, 2012.04.10 – Gerhard Paseman Apr 10 2012 at 18:45
Oh, if you are looking for alpha(p) where p is prime, it leads me to think that such graphs will have trivial automorphism group. If true, that would also be a nice result. Gerhard "Ask Me About System Design" Paseman, 2012.04.10 – Gerhard Paseman Apr 10 2012 at 18:48
A cycle of length $p$ has $p$ spanning trees and an automorphism group of size $2p$. That's not the only example. – Brendan McKay Apr 11 2012 at 1:28
Indeed, but I wonder about the case of large prime p and alpha(p) small. Is alpha(7) equal to 7? Gerhard "Ask Me About System Design" Paseman, 2012.04.10 – Gerhard Paseman Apr 11 2012 at 4:14
$\alpha(7)$ is indeed 7. As you may see in the post, 3, 4, 5, 6, 7, 10, 13, 22 are the (only) fixed points of $\alpha$! – Jernej Apr 11 2012 at 8:45
## 2 Answers
No answer, but a related question: The number $n$ of spanning trees in a graph with $k+1$ vertices is the determinant of a $k\times k$ matrix with integer entries between $-1$ and $k$.
For given $n$, what is the smallest $k=\beta(n)$ such that $n$ is the determinant of such a matrix?
Of course, $\alpha(n)\ge \beta(n)+1$. Variations of this problem might restrict to symmetric, or diagonally dominant matrices, or on the other hand allow entries between $-k$ and $k$.
Additions (incorporating the remark by Will Sawin): For example, $$\left| \begin {matrix} 4&7&1&3\cr -1&10&0&0 \cr 0&-1&10&0\cr 0&0&-1&10 \end {matrix} \right| = 4713.$$ In this way, with $k$ as the base instead of 10, one gets all numbers up to $k^k$ (and a little more). The upper bound on the determinant from the Hadamard inequality is $k^{3k/2}$. With the lower bound $-1$ on the entries, this bound can probably be improved, since the row vectors of the matrix cannot be simultaneously "long" and close to orthogonal.
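As a quick numerical sanity check of the displayed determinant (my own verification, using numpy):

```python
import numpy as np

# first row holds the digits 4, 7, 1, 3; the remaining diagonal entries
# are 10, with -1's just below the diagonal
M = np.array([[ 4,  7,  1,  3],
              [-1, 10,  0,  0],
              [ 0, -1, 10,  0],
              [ 0,  0, -1, 10]])
print(round(np.linalg.det(M)))  # 4713 = ((4*10 + 7)*10 + 1)*10 + 3
```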
One can work this determinant into the number of directed spanning trees of a multigraph: $$\left| \begin {matrix} 4&-7&-1&-3\cr -1&10&0&0 \cr 0&-1&10&0\cr 0&0&-1&10 \end {matrix} \right| = 4000-713=3287.$$ Let us add a fifth column to make column sums zero: $$\begin {pmatrix} 4&-7&-1&-3\cr -1&10&0&0 \cr 0&-1&10&0\cr 0&0&-1&10\cr -3& -2&-8&-7 \end {pmatrix}$$ The digits of the determinant are now in the last row. This number 3287 is equal to the number of oriented spanning trees (arborescences) on a directed multigraph $G$ on 5 vertices which are oriented away from the root node 5. The graph $G$ is obtained by taking the negative off-diagonal entries as edge multiplicities. (The arcs going into node 5, which would be the fifth column, are obviously irrelevant.) One can also figure out directly that this is the number of arborescences, by classifying them into those 3000 that use the arc $(5,1)$ and the remaining 287 that don't.
For directed graphs, one can get rid of multiple edges by subdividing them. The new intermediate vertex on an edge must have exactly one incoming arc in every tree, and since the indegree is 1 this arc is fixed, so the number of spanning arborescences is as in the original graph. Moreover, all multiple edges go out either from vertex 1 or from vertex $k+1=5$. Multiple edges emanating from one vertex and going to different vertices can share the intermediate subdivision vertex. Thus, we need in total only $2(k-2)$ extra vertices to eliminate multiple arcs, $k-2$ from vertex 1 and $k-2$ from vertex $k+1$, for a total of $3k-3$ vertices. (I did not work out how this argument looks when translated into matrix terms.)
Every integer up to $k^k$ can be realized as the number of spanning arborescences with a fixed root in a digraph on $3k-3$ vertices without multiple arcs.
In other words, $\alpha(n)$ for digraphs is bounded by $O(\log n/\log\log n)$. Much better than what is known for undirected graphs, settling the conjecture at least for directed graphs.
The next remaining open challenge is to investigate $\beta(n)$ for symmetric matrices.
-
You can get every number up to $k^k-1$ by putting the number in base $k$ notation, viewing that as a polynomial, using the standard constructing to make a matrix with that characteristic polynomial, and adding $k$ times the identity matrix and making sure the signs work out. Obviously $(k+1)^{k^2}$ is a bound on the other side. By the way, it seems like the inverse functions to $\alpha$ and $\beta$ are the simpler ones to consider and describe. – Will Sawin Feb 17 at 21:38
Very nice! your upper bound is far too generous, if that is what you meant. The Hadamard bound gives $k^{3k/2}$. – Günter Rote Feb 17 at 23:47
While the theta construction (paths of lengths $a$, $b$ and $c$ with common endpoints) is not optimal, how bad can it be? Well, the best it could be with real numbers instead of integers is $a+b+c=\sqrt{3}\sqrt{n}.$ I'd guess, based on limited evidence, that using the theta construction one can always obtain $\alpha(p) \lt 2\sqrt{p}$ (say for $p \gt 1000$); in other words, $\frac{(a+b+c)^2}{p} \lt 4.$
I looked at sets $\{a,b,c\}$ with members less than $101$ and pairwise co-prime (since I was aiming for primes). The only primes under $12500$ which I failed to get were $13,37,9463.$ For $n=9463$ there is $a,b,c=35,41,108$ with $\frac{(a+b+c)^2}{p} \approx 3.51.$ There are $1324$ primes $(1000 \lt p \lt 12500.)$ There are thirteen prime in this range with $\frac{(a+b+c)^2}{p} \gt 3.5.$ The other twelve are
$[1657, \{3, 31, 46\}, 3.862]$, $[2293, \{11, 23, 60\}, 3.853]$, $[4093, \{11, 38, 75\}, 3.757]$, $[1093, \{5, 21, 38\}, 3.747]$, $[4513, \{17, 32, 81\}, 3.745]$, $[1777, \{6, 31, 43\}, 3.602]$, $[1297, \{6, 25, 37\}, 3.565]$, $[1153, \{11, 15, 38\}, 3.552]$, $[1549, \{7, 27, 40\}, 3.535]$, $[4657, \{22, 31, 75\}, 3.518]$, $[7129, \{24, 43, 91\}, 3.502]$, $[3457, \{19, 27, 64\}, 3.500]$.
There are another twelve with the ratio in $(3.4,3.5)$. And $894$ of these primes (slightly more than $2/3$) have the ratio under $3.1$.
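For readers who want to reproduce this kind of search, here is a small sketch (my own code, not the search actually used above): given $n$, it scans for triples $1 \le a < b < c$ with $ab+ac+bc=n$ of minimal sum.

```python
def best_theta(n):
    """Find 1 <= a < b < c with ab + ac + bc = n minimising a + b + c,
    i.e. the smallest theta graph with exactly n spanning trees."""
    best = None
    a = 1
    while 3 * a * a < n:                     # a < b < c forces n > 3a^2
        b = a + 1
        while a * b + (a + b) * b < n:       # c > b forces n > ab + (a+b)b
            c, r = divmod(n - a * b, a + b)  # solve ab + (a+b)c = n for c
            if r == 0 and c > b:
                s = a + b + c
                if best is None or s < best[0]:
                    best = (s, (a, b, c))
            b += 1
        a += 1
    return best

s, (a, b, c) = best_theta(1657)
print((a, b, c), s * s / 1657)  # the quoted triple (3, 31, 46) has ratio ~3.862
```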
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484214782714844, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/47582/why-is-it-that-complex-numbers-are-algebraically-closed?answertab=votes
|
# Why is it that Complex Numbers are algebraically closed?
I find it curious that Complex Numbers give enough flexibility to be algebraically closed, where the reals and rational numbers do not. For the reals it is easy to see that they cannot be used to solve equations like $x^2 + 1 =0$. Geometrically, one can look at the number line and see that any $x$ squared yields a positive number which, when added to one, cannot get you back to zero. In the complex case, however, we are working with the plane. In this case exponents stretch and rotate any given $x$. It is easy to see in the particular circumstance that if $x=i$ then $x^2$ rotates it to $-1$, which when added to one yields the desired result (i.e. $0$). So because the Complex Numbers are algebraically closed, I conclude that any polynomial equation with complex coefficients may be solved by choosing one or more $x$'s in the plane and rotating and stretching them such that they will combine, using the given coefficients, to produce the RHS.
Question: Why is it that we do not need a larger space than the plane to solve Complex polynomial equations?
I have tried to find a sufficient answer through Google, but was not able to. I also searched M.SE and could not find a sufficient answer. I am not a mathematician, so I am looking for an intuitive answer if possible.
Thank you.
-
it's called "the Fundamental Theorem of Algebra", and you can find a lot of information about it on the web – Zarrax Jun 25 '11 at 14:10
I doubt that there is an easy explanation for a layman here. The easiest proof I know uses complex analysis. Since the complex numbers are defined as an analytic object, all existing proofs use some analysis (or topology). – late_learner Jun 25 '11 at 14:11
That they form an algebraically closed field is but one of the wonders of the complex numbers. – lhf Jun 25 '11 at 15:46
My favorite proof is topological. The topological proof requires a lot of heavy theorems to make it rigorous, but it is "easy" to understand - it's sort of a 2-dimensional intermediate value theorem. – Thomas Andrews Jun 25 '11 at 17:17
## 4 Answers
I think your question can be boiled down to understanding why the fundamental theorem of algebra is true. As has already been pointed out above, there are many different ways to prove this and you should try to understand a few of them. However, your question is not really about the mechanics of those different proofs; rather, you are asking 'Why are the complex numbers enough...?'
It's a good question. The truth is that each of those different proofs is giving an argument about why they are enough -- from a slightly different perspective. Depending on your background you may find one more intuitive than another. I have two suggestions on how to get a better feel for the FTOA:
• Perhaps what you are looking for in an intuitive answer is something visual. The nice thing here is that you can make awesome colorized pics which reveal the structure of complex valued functions. Check out this unpublished paper by Daniel J. Velleman at Amherst: http://www.cs.amherst.edu/~djv/FTAp.pdf It's a great visual walk through of a few approaches to the FTOA. It's really a very nice read and the plots bring it all together in a way that many people feel is intuitive. If you're good with coding then you can leverage something like SAGE or mathematica to make some plots of your own and understand the reasoning of the FTOA with your own examples too!
• If that doesn't lock it in then take a stab at reading through Fine and Rosenberger's book: http://www.amazon.com/Fundamental-Theorem-Algebra-Undergraduate-Mathematics/dp/0387946578 You can find it in most university libraries with a strong math department. They'll walk you through the FTOA from three different perspectives; algebra, complex analysis, and topology. It's a longer approach perhaps but I suspect it will bring a lot of mathematical loose ends together for you.
Best of luck and success in your studies! You've asked a great question and that's where it all starts.
-
Of course there are many proofs, and perhaps some others will post the most attractive proofs, but I think you are looking for an intuitive explanation that would somehow make the result seem less surprising.
One such explanation, I think, is the simple observation that reals already go a long way towards being algebraically closed---they are a real closed field---since every odd-degree polynomial over $\mathbb{R}$ has a root in $\mathbb{R}$. This follows immediately from the intermediate value theorem, since in the large scale every odd degree polynomial moves from $-\infty$ to $\infty$ or conversely and hence must cross the axis.
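A throwaway numerical illustration of this (mine, not part of the answer): every random odd-degree real polynomial turns out to have at least one real root, up to floating-point tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    coeffs = rng.normal(size=6)      # six coefficients: a degree-5 polynomial
    if coeffs[0] == 0.0:
        coeffs[0] = 1.0              # keep the leading term nonzero
    roots = np.roots(coeffs)
    # an odd-degree real polynomial always has a (numerically) real root
    assert any(abs(r.imag) < 1e-6 for r in roots)
```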
-
And adjoining a square root of -1 to any Real Closed Field gives you an Algebraically Closed Field - so the algebraic closure of a Real Closed Field is always an extension of degree 2. [Ch 7 of 1st Edition PM Cohn Algebra vol 2, Theorem 4]. An intuitive reason is that you only need to split quadratic polynomials, and the first thing you try splits them all. – Mark Bennet Jun 25 '11 at 16:01
Yes, but the issue with complex roots arises with even degree polynomials (e.g. $x^2 +1$) which may not cross the real axis at all. I can see that by starting with an odd polynomial, we are always guaranteed one real root, which can then be factored out to yield an even polynomial. So why are complex numbers sufficient to solve a 4th degree polynomial? – Tpofofn Jun 26 '11 at 11:49
@Tpofofn, yes, of course; the point of my answer was merely the easy observation that being real-closed, which is very easy to see for $\mathbb{R}$, is already a huge step towards being algebraically closed. – JDH Jun 26 '11 at 13:47
yes, I see your point. In the case of $\mathbb C$ I guess the question is do they get you closer (necessary) or do they get you all the way there (sufficient). We know from FTOA that they are sufficient. I just do not understand why that is. – Tpofofn Jun 29 '11 at 3:02
Here are 3 facts that I think provide some kind of intuition :
• To show that $\mathbb{C}$ is algebraically closed, you only need to show that real polynomials have a root in $\mathbb{C}$ : because of Taylor's formula, a complex polynomial taking real numbers to real numbers is actually real. Now if $P \in \mathbb{C}[X]$ is a complex polynomial, then $P\overline{P}$ is a real polynomial and has the same roots as $P$.
• All odd degree real polynomials have a root in $\mathbb{R}$ (because they are continuous and the limits at $-\infty$ and $\infty$ have different signs).
• Because of the quadratic formula, solving degree 2 equations only requires taking square roots, which is always possible in $\mathbb{C}$ because of the geometric interpretation and because square roots of positive numbers exist in $\mathbb{R}$ (if you write $z = \rho e^{i \theta}$, then a square root of $z$ is given by $\sqrt{\rho} \ e^{i \theta /2}$).
From these 3 facts (notice that they use some analysis) and some clever algebraic manipulations (which you can find on Wikipedia in the section "Algebraic proofs"), you can deduce that $\mathbb{C}$ is algebraically closed. This is in my opinion the closest you can get to an intuition.
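The third fact is easy to check numerically; a tiny sketch (illustrative only) with Python's cmath:

```python
import cmath

z = complex(-3, 4)                   # any nonzero complex number
rho, theta = abs(z), cmath.phase(z)  # z = rho * e^{i*theta}
w = cmath.sqrt(rho) * cmath.exp(1j * theta / 2)
print(w * w)            # ~(-3+4j): w is indeed a square root of z
print(cmath.sqrt(z))    # the library value (1+2j) agrees up to rounding
```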
-
Fact 1: I think your point is that we can think about only real polynomials WLOG. Fact 2: Got it. We can factor out an odd degree to get to an even degree. Fact 3: Assumes that we can arbitrarily factor to a set of quadratics. – Tpofofn Jun 26 '11 at 12:02
One definition of the complex numbers is that they are the algebraic closure of the reals.
In other words, start with the reals and write down some degree-$d$ polynomial that has fewer than $d$ roots (including multiplicities). Adjoin a new root $x$ of that polynomial to the reals. After you continue this (infinite) process, you have $\mathbb{C}$.
If you're willing to buy this as the definition of $\mathbb{C}$, then it's trivial to see that it's algebraically closed.
-
If you take that as the definition of $\mathbb{C}$, then I guess the next question is : why is it a degree $2$ extension of $\mathbb{R}$ ? – Joel Cohen Jun 25 '11 at 15:21
This doesn't seem like a very useful definition of the complex numbers: I don't see how to do much with it without proving that $\mathbb{R}(\sqrt{-1})$ is algebraically closed, with the "corollary" that $\mathbb{R}(\sqrt{-1}) \cong \mathbb{C}$. (One sign that this definition is not so good: it only defines $\mathbb{C}$ up to isomorphism as an $\mathbb{R}$-algebra.) Can you give references to sources where this definition is given and successfully used? – Pete L. Clark Jun 25 '11 at 19:36
I can see the need to extend to $\mathbb{C}$, however why is it enough? Why is it that we do not have to extend $\mathbb{C}$ to something more complex like quaternions? – Tpofofn Jun 26 '11 at 12:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9580709338188171, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/1251/linear-combination-of-gaussian-random-variables
|
# Linear combination of gaussian random variables
I know what random variables are, but I don't understand what a linear combination of Gaussian random variables is. Can anyone please give me an explanation or clues? Thanks in advance, Julien.
-
## 1 Answer
Gaussian random variable is another name for Normal random variable. It is called Gaussian because Carl Friedrich Gauss discovered many properties of the Normal distribution.
A linear combination of Gaussian random variables is another random variable, not necessarily Gaussian itself, that you get by scaling, adding and subtracting Gaussian random variables. Let's call this linear combination of Gaussian variables $Y$. The random variable $Y$ could be something like $$Y = 2 \times X_1 + 1.2234 \times X_2 -7$$ Or it could be something like $$Y = \frac{X_1 - 0.01 \times X_2}{12}$$
where the $X$'s are Gaussian random variables. As you can see, a linear combination of them is obtained by summing them up or subtracting them from each other, but never multiplying or dividing them by each other or by themselves (like squaring or taking roots).
We can give the general form of a linear combination of Gaussian random variables: $$Y = a_1X_1 + a_2X_2 + \cdots + a_n X_n$$
where each $a_i$ is any number you want it to be, like $-222$, or $0$, or $17.222$...
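A quick way to get a feel for this is simulation. The sketch below is my own illustration; it assumes the $X_i$ are independent (in which case $Y$ is again Gaussian, since independent Gaussians are jointly Gaussian).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X1 = rng.normal(0.0, 1.0, n)   # standard Gaussian
X2 = rng.normal(0.0, 1.0, n)   # independent standard Gaussian

Y = 2 * X1 + 1.2234 * X2 - 7   # the first combination above

# for independent standard Gaussians: E[Y] = -7, Var[Y] = 2^2 + 1.2234^2
print(Y.mean(), Y.var())       # ~ -7 and ~ 5.497
```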
-
Thanks for this detailed answer Hassmann. The reason I asked this question is that according to my book, when a vector of risk factors is Gaussian one can use Cholesky factorization; when it is not, one has to use copulas for one's Monte Carlo simulations. Say I have three factors: underlying stock, stochastic IR and volatility. How do I know whether my vector is Gaussian or not? – balteo May 31 '11 at 11:55
You can look at the wikipedia page on multivariate normal distribution. There is a paragraph on multivariate normality test. – Zarbouzou May 31 '11 at 17:03
Thanks then to both of you!! – Julien May 31 '11 at 18:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9020764231681824, "perplexity_flag": "head"}
|
http://robinryder.wordpress.com/2010/03/23/le-monde-problem-nationale-7/
|
# Robin Ryder's blog
Statistics and other stuff
## Le Monde problem: “Nationale 7″
It seems that solving mathematical puzzles from Le Monde is becoming the main focus of this blog. This week’s problem is about a road with 100 trees: 50 elms and 50 plane trees, in a random order. We are asked to show that whatever the order of the trees, there exists a sequence of 50 consecutive trees with exactly 25 elms and 25 plane trees.
Let $v_k$ $(k=1,\ldots,51)$ be the number of elms between positions $k$ and $k+49$; we are asked to prove that there exists a $k$ such that $v_k=25$. Note that $v_1+v_{51}=50$, since that counts all the trees exactly once. By symmetry, we can assume that $v_1\leq 25$ and $v_{51}\geq 25$. Note also that the sets considered for $v_k$ and $v_{k+1}$ differ by only one tree, hence $| v_k - v_{k+1} | \leq 1$. Looking at the sequence $(v_k)$ we need to go from an integer at most $25$ to an integer at least $25$ by taking steps of size $0$ or $1$. At some point, we will necessarily go through $25$.
The exact same proof holds with only 98 trees (49 of each species). However, going down to 96 trees (48 of each), the following arrangement has no set of 50 consecutive trees with half of each: 24 elms, then 48 plane trees, then 24 elms.
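A short simulation (my own sketch) makes the argument concrete: encode elms as 1 and plane trees as 0, slide a window of 50 trees, and check that some window contains exactly 25 elms; the 96-tree arrangement above is also checked.

```python
import random

def has_balanced_window(trees, w=50):
    """Is there a run of w consecutive trees with exactly w//2 elms (1s)?"""
    count = sum(trees[:w])                     # elms in the first window (v_1)
    for k in range(len(trees) - w + 1):
        if count == w // 2:
            return True
        if k + w < len(trees):
            count += trees[k + w] - trees[k]   # slide the window by one
    return False

trees = [1] * 50 + [0] * 50
for _ in range(1000):
    random.shuffle(trees)
    assert has_balanced_window(trees)          # always true, as proved above

# the 96-tree counterexample: 24 elms, 48 plane trees, 24 elms
assert not has_balanced_window([1] * 24 + [0] * 48 + [1] * 24)
```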
Tags: Le Monde
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402344226837158, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/5580/the-localisation-long-exact-sequence-in-k-theory-over-an-arbitrary-base
|
## The localisation long exact sequence in K-theory over an arbitrary base
If I work over a field $k$, write $D$ for the formal disk $k[[t]]$ and $D^\times$ for the formal punctured disk $k((t))$, then there is an associated long exact sequence in algebraic K-theory
$$\cdots \to K_{n+1}(D^\times) \to K_n(k) \to K_n(D) \to K_n(D^\times) \to \cdots$$
I want to know, what happens if we replace the base k by a more general scheme?
(I am particularly interested in the map $K_2(D^\times) \to K_1(k)$, which must be the tame symbol, right?)
-
Is your question about the punctured disc over an arbitrary affine scheme? Or, are you asking about localization for an arbitrary open subscheme of an arbitrary scheme? – Benjamin Antieau Nov 15 2009 at 16:27
I am asking about the punctured disk over an arbitrary (affine) scheme. – Peter McNamara Nov 16 2009 at 4:20
## 3 Answers
I'm not sure that what I have to say really addresses the heart of your question, but it seems at least related.
### Background
The general Localization Theorem (7.4 of Thomason-Trobaugh) states the following. Suppose $X$ a quasiseparated, quasicompact scheme, suppose $U$ a Zariski open in $X$ such that $U$ is also quasiseparated and quasicompact, and suppose $Z$ the closed complement. Then the following sequence of spectra is a fiber sequence: $$K^B(X\textrm{ on }Z)\to K^B(X)\to K^B(U).$$ Here $K^B$ refers to the Bass nonconnective delooping of algebraic $K$-theory. One thus gets a long exact sequence $$\cdots\to K_n^B(X\textrm{ on }Z)\to K_n^B(X)\to K_n^B(U)\to K_{n-1}^B(X\textrm{ on }Z)\to\cdots$$ (If one tries to work only with the connective version, then the exact sequence ends awkwardly, since $K_0(X)\to K_0(U)$ is not in general surjective; indeed, the obstruction to lifting $K_0$-classes from $U$ to $X$ is precisely $K_{-1}(Z)$ by Bass's fundamental theorem.)
The term $K^B(X\textrm{ on }Z)$ is the Bass delooping of the $K$-theory of the ∞-category of perfect complexes of quasicoherent $\mathcal{O}$-modules that are acyclic on $U$. Identifying this fiber term with $K^B(Z)$ is generally a delicate matter. Let me summarize one situation in which it can be done.
Suppose that $X$ admits an ample family of line bundles [Thomason-Trobaugh 2.1.1, SGA VI Exp. II 2.2.3], and suppose that $Z$ admits a subscheme structure such that the inclusion $Z\to X$ is a regular immersion (so that the relative cotangent complex $\mathbf{L}_{X|Z}$ is $I/I^2[1]$, where $I$ is the ideal of definition), and $Z$ is of codimension $k$ in $X$. Then the spectrum $K^B(X\textrm{ on }Z)$ coincides with a nonconnective delooping of the Quillen $K$-theory of the exact category of pseudocoherent $\mathcal{O}_X$-modules of Tor-dimension $\leq k$ supported on $Z$. If now $Z$ and $X$ are regular noetherian schemes, then a dévissage argument permits us to identify $K^B(X\textrm{ on }Z)$ with $K(Z)$.
### Your case
Now, assuming that $K(D)$ refers just to the $K$-theory of the ring $k[[t]]$ (and not, for instance, the $K$-theory of the formal scheme $\mathrm{Spf}(k[[t]])$), the discussion above applies to give you your desired localization sequence $$K^B(X)\to K^B(X[[t]])\to K^B(X((t)))$$ for any scheme $X$ admitting an ample family of line bundles. If in particular $X$ is regular, then the negative $K$-theory vanishes, and we have a localization sequence $$K(X)\to K(X[[t]])\to K(X((t)))$$
-
I do not have the reference with me right now, but I think the localization sequence for K-theory over a general base was handled in:
R. W. Thomason, T. Trobaugh, Higher algebraic K-theory of schemes and of derived categories, "The Grothendieck Festschrift", (1990) 247--435.
There is a link to Google Books, but it is missing the relevant pages!
-
That paper is one of my favorite stories. Trobaugh, despite being dead, told Thomason how to prove the main theorem in a dream. ams.org/notices/199608/comm-thomason.pdf – Graham Leuschke Dec 29 2009 at 20:14
This is not a direct answer to the original question, but is what I am interested in.
I found the following in 12.14(iii) of Brylinski and Deligne's paper "Central Extensions of Reductive Groups by K_2". I'll quote the relevant paragraph and comment afterwards.
Suppose that $V$ is henselian and essentially of finite type over a field. For $j$ (resp. $i$) the inclusion of $G$ (resp. $G_s$) in $G_V$, Quillen resolution gives a short exact sequence of sheaves on $G_V$. $$0 \to K_2 \to j_*K_2 \to i_*K_1(D) \to 0$$
The $K$'s are sheafified K-theory on the big Zariski site. $G$ is the generic fibre of a smooth group scheme $G_V$, with special fibre $G_s$.
What I don't know is what "essentially of finite type over a field" means, nor how this exact sequence arises.
-
«Algebra of essentially of finite type» tends to mean «a localization of an algebra of finite type» – Mariano Suárez-Alvarez Nov 17 2009 at 15:26
the j and the i in the short exact sequence I have should both be accompanied by a lower star that I don't know how to edit to make appear. – Peter McNamara Dec 30 2009 at 23:45
I've fixed the short exact sequence for you: you can always use the TeX's double-dollar sign to write math. – Mariano Suárez-Alvarez Jan 12 2010 at 20:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9039993286132812, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/195729/finding-the-virtual-center-of-a-cloud-of-points?answertab=votes
|
# Finding the virtual center of a cloud of points.
Given:
1. (latitude, longitude) points $P_1, P_2,\ldots, P_n$.
2. Presumably, all the points should form a dense cloud. However, noise is possible.
Needed:
• The virtual center of the points.
For instance, 99% of the points may lie within a circle with 1km radius, except for 1% scattered outside that circle at a distance larger than 1km from any point inside the circle. Then this 1% is noise.
Unfortunately, I do not know how to define the noise properly. But the virtual center I am looking for should be close enough to most of the points. If most of the points are close to it, then I do not mind that some be far away.
If it is not too hard, I would like to be able to recognize more than one dense cloud amongst the points, in which case each cloud could be reduced to its virtual center and I would then have to find the new virtual super center of the virtual center cloud. That super center is the final result.
I am not a mathematician, so my descriptions are vague. But I am pretty sure that this is a well known problem and it probably has a trivial solution.
Thanks.
P.S.
This question is similar to Detect Abnormal Points in Point Cloud, however, my space is two dimensional, which probably does not matter. Still.
EDIT
The points are indeed on the surface of a sphere, a spheroid actually, Earth more precisely. However, the distance between them is not large enough to take the Earth's curvature into account, so it may safely be assumed that the surface is flat, with longitude as X and latitude as Y.
-
What you could do is find the center of "mass" of the points (assigning them all equal mass, 1 for instance). Then, you look at what are the points that are most distant from that center. And you throw those points out. Everything will depend though on what you understand by "distant". I think a sensible choice would be assuming a multivariate normal distribution of the points and throw out those that are more than $n\sigma$ away from the center ($n$ being 4 for instance). Then recompute the center of mass after you've thrown out those points. – Raskolnikov Sep 14 '12 at 14:42
Could you arrange your reply as an answer? I know wiki helps, still if you could elaborate a bit on `multivariate normal distribution` that would be great. – mark Sep 14 '12 at 15:26
I don't think my answer really addresses all your issues. It doesn't address the issue of how to distinguish different clouds in the data. – Raskolnikov Sep 14 '12 at 15:46
Yep, it is a problem. – mark Sep 14 '12 at 15:54
Does the LAT/LON indication means that the point are of the surface of a sphere? And the resulting point should also be of the same surface? – enzotib Sep 15 '12 at 5:33
## 1 Answer
Just take the mean. The average of the x coordinates of the points will be the x coordinate of the center, and the average of the y coordinates of the points will be the y coordinate of the center.
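A minimal sketch of this, with the iterative outlier trimming discussed in the comments (the distance-based threshold rule is an illustrative choice, not a statement of best practice, and it does not handle the multiple-clouds case, for which a clustering method would be needed):

```python
import numpy as np

def robust_center(points, n_sigma=4.0, max_iter=5):
    """Mean of the points, repeatedly dropping points whose distance
    from the current mean is more than n_sigma standard deviations
    above the average distance."""
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        center = pts.mean(axis=0)
        d = np.linalg.norm(pts - center, axis=1)
        keep = d <= d.mean() + n_sigma * d.std()
        if keep.all():
            break
        pts = pts[keep]
    return pts.mean(axis=0)

# dense cloud near (0, 0) plus a couple of far-away noise points
cloud = np.vstack([np.random.randn(99, 2), [[50, 50], [60, -40]]])
print(robust_center(cloud))   # close to (0, 0)
```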
-
And then throw out the wild outliers to make that center even more accurate. – muntoo Sep 15 '12 at 4:11
Do you suggest to do it iteratively? Like compute the mean, then throw all that are beyond some threshold, then recompute the mean and continue until there is nothing to throw? – mark Sep 15 '12 at 11:07
But will it work in case of two clouds? In this case the mean should be somewhere in the middle between the two clouds. The problem is that if the threshold is too low, then I might throw away both of the clouds entirely. – mark Sep 15 '12 at 11:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343101978302002, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/55445/when-do-we-get-extraneous-roots?answertab=votes
|
# When do we get extraneous roots?
There are only two situations that I am aware of that give rise to extraneous roots, namely, the “square both sides” situation (in order to eliminate a square root symbol), and the “half absolute value expansion” situation (in order to eliminate taking absolute value). An example of the former is $\sqrt{x} = x - 2$, and an example of the latter is $|2x - 1| = 3x + 6$. In the former case, by squaring both sides we get roots of $1$ and $4$, and inspection reveals that $1$ is extraneous. (Of course, squaring both sides is a special case of raising both sides to a positive even power.) In the latter case we expand the equation into the two equations $2x - 1 = 3x + 6$ and $2x - 1 = -(3x + 6)$, getting roots of $-1$ and $-7$, and inspection reveals that $-7$ is extraneous. Now, my question is: Is there any other situation besides these two that gives rise to extraneous roots? Perhaps something involving trigonometry?
I asked this question some time ago in MO, where I got ground in the dirt like a wet french fry (as Joe Bob would say). So, I’m transferring the question here to MSE. :)
-
Extraneous roots come up in log equations. – The Chaz 2.0 Aug 3 '11 at 22:46
Extraneous roots happen whenever you apply functions to both sides of an equation that aren't invertible. – Qiaochu Yuan Aug 3 '11 at 22:49
The two examples that you give can be considered as similar, since $|2x-1|=\sqrt{(2x-1)^2}$. – André Nicolas Aug 3 '11 at 22:55
@Andre: An excellent observation, thanks! – Mike Jones Aug 3 '11 at 23:25
## 2 Answers
Suppose you have two expressions $e_1$ and $e_2$ and you know $$e_1 = e_2.$$
Then, if you apply a function to both sides, you have $$f(e_1) = f(e_2).$$ However, this logic in general does not reverse, unless the function $f$ is 1-1. This is the mechanism by which extraneous roots get introduced.
When you square both sides of an equation, you are destroying information about the signs of the two sides. Now, the equality will match if the two sides have the same absolute value. This process can, and often does, introduce spurious roots.
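For example (a sketch of the check, not part of the answer itself): squaring $\sqrt{x} = x - 2$ gives $x^2 - 5x + 4 = 0$ with roots $1$ and $4$, and substituting back into the original equation exposes the spurious one:

```python
import math

for x in (1.0, 4.0):   # roots of x^2 - 5x + 4 = 0
    lhs, rhs = math.sqrt(x), x - 2
    print(x, "genuine" if math.isclose(lhs, rhs) else "extraneous")
# x = 1 is extraneous: sqrt(1) = 1 but 1 - 2 = -1; squaring hid the sign
```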
-
@ncmahtsadist: OK, I'll go with that, upvoting and accepting your answer. Thanks! – Mike Jones Aug 3 '11 at 23:26
Extraneous solutions are often the result of omitting a constraint during the formulation or solution of a problem. For example, the correct rule for solving absolute value equations is
$$|x| = y \iff (y \ge 0) \text{ and } ((x = y) \text{ or } (x = -y)).$$
If we use this rule then extraneous solutions do not occur.
$$|2x - 1| = 3x + 6,$$ $$(3x + 6 \ge 0) \text{ and } (2x-1 = 3x+6 \text{ or } 2x-1 = -3x-6),$$ $$(x \ge -2) \text{ and } (x = -7 \text{ or } x = -1),$$ $$x = -1.$$
However, it is customary to omit the condition $y \ge 0$ and instead use the weaker rule $$|x| = y \implies ((x = y) \text{ or } (x = -y)).$$ This makes the writing simpler, but the price you pay is that you have to check for extraneous solutions at the end.
Extraneous solutions often arise from using a rule of the form $$x = y \implies f(x) = f(y).$$ Squaring both sides of an equation is an example of such a rule.
If $f$ is one-to-one, then the rule $$f(x) = f(y) \iff x = y$$ is valid, provided that $x$ and $y$ are both in the domain of $f$. Ignoring this condition can lead to extraneous solutions. The equation $\log(x-4) = \log(2x-6)$ provides an example.
Extraneous solutions can also result from ignoring physical constraints in applied problems (e.g. length and mass are positive quantities).
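To illustrate the role of the omitted constraint in the weaker rule above (again just a sketch): solve the two linear cases of $|2x - 1| = 3x + 6$, then filter by $3x + 6 \ge 0$:

```python
# 2x - 1 = 3x + 6    gives x = -7
# 2x - 1 = -(3x + 6) gives x = -1
candidates = [-7, -1]
print([x for x in candidates if 3 * x + 6 >= 0])  # [-1]; -7 is extraneous
```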
-
Excellent! I've up-voted your answer. – Mike Jones Aug 4 '11 at 20:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419990181922913, "perplexity_flag": "head"}
|
http://cms.math.ca/Reunions/ete12/abs/hao
|
CMS Summer Meeting 2012
Regina Inn and Ramada Hotels (Regina, Saskatchewan), June 2-4, 2012 www.smc.math.ca//Reunions/ete12
Harmonic Analysis and Operator Spaces
Org: Yemon Choi and Ebrahim Samei (Saskatchewan)
Amenability properties for the centres of certain discrete group algebras
Let $\{G_i\}_{i\in I}$ be a family of finite groups, and let $G=\bigoplus_{i\in I} G_i$ denote the group of all elements $(x_i)_{i\in I}$ such that $x_i$ is the identity of the group $G_i$ for all but finitely many $i$.
We characterize the amenability of $Z\ell^1(G)$, the centre of the group algebra of $G$. Moreover, we study the characters on the commutative algebra $Z\ell^1(G)$, and consequently the existence of bounded approximate identities for the maximal ideals of $Z\ell^1(G)$ will be considered. We also study when an algebra character of $Z\ell^1(G)$ belongs to $c_0$ or $\ell^p$.
Time permitting, we will mention some results about the amenability constant of the center of the group algebra for some particular finite groups.
This is a joint project with Yemon Choi and Ebrahim Samei.
MICHAEL BRANNAN, Queen's University
Representations of quantum group convolution algebras
In this talk, we will discuss some aspects of the (non-self-adjoint) representation theory of quantum group convolution algebras $L^1(\mathbb G)$ on Hilbert spaces. Inspired by the classical case where $L^1(\mathbb G)$ is the group algebra of a locally compact group, there are many interesting questions that one can ask about such representations. For instance, what conditions on the quantum group $\mathbb G$ and a given bounded representation $\pi:L^1(\mathbb G) \to B(H)$ ensure that $\pi$ is similar to a $\ast$-representation? Another important question is whether or not there exists an analogue of the classical result of Cowling-Haagerup relating representations to Fourier multipliers: Do the matrix elements of $\pi$ always give rise to completely bounded multipliers of the dual convolution algebra $L^1 (\hat {\mathbb G})$? We will address these and other questions in this talk, as well as discuss some concrete examples. As expected, the theory of completely bounded maps will play a prominent role in the quantum setting.
This talk is based on joint work with Matthew Daws (Leeds) and Ebrahim Samei (Saskatchewan).
ELCIM ELGUN, University of Waterloo
The Eberlein Compactification of Locally Compact Groups
Given a locally compact group $G$, the Eberlein compactification $G^e$ is the spectrum of the uniform closure of the Fourier-Stieltjes algebra $B(G)$. It is a semitopological compactification and thus a quotient of the weakly almost periodic compactification $G^w$. We aim to study the structure and complexity of $G^e$. On one hand, for certain abelian groups, weak*-closed subsemigroups of $L^{\infty}[0,1]$ may be realised as quotients of $G^e$, thus showing that $G^e$ is large and complicated in these situations. Conversely, the structures of $G^e$ for certain semidirect product groups show that aspects of the structure of $G^e$ can be quite simple. The levels of complexity of these structures mimic those of $G^w$, yet many questions about the sizes of their differences remain.
FEREIDOUN GHAHRAMANI, University of Manitoba
Automorphisms and derivations of the $p$-Volterra algebras and $p$-weighted convolution algebras
Let $1 \leq p < \infty$ and $V_{p} = L^{p}[0 , 1]$ be the Lebesgue space of $p$-integrable functions on $[0 , 1]$. The space $V_{p}$ can be made into a (radical) Banach algebra with the convolution product $$(f \star g) (x) = \int_{0}^{x} f(x-y) g(y) dy\quad (\text{a.e.} \ x \in (0 , 1), \ \ f,g \in V_{p}).$$
The Banach algebra $V = V_{1}$ (known as the Volterra algebra) has been the subject of much study. In [1], [2], [3] and [4] derivations and automorphisms of this algebra were studied. This talk is about our recent work on derivations and automorphisms of $V_{p}$ for $p > 1$, as well as the automorphisms and derivations of the $p$-version of the weighted convolution algebras on the half-line. This is joint work with Sandy Grabiner.
References.
[1] F. Ghahramani, The group of automorphisms of $L^{1}(0,1)$ is connected, Trans. Amer. Math. Soc. 314 (1989), no. 2, 851–859.
[2] F. Ghahramani, The connectedness of the group of automorphisms of $L^{1}(0,1)$, Trans. Amer. Math. Soc. 302 (1987), no. 2, 647–659.
[3] N. P. Jewell, A. M. Sinclair, Epimorphisms and derivations on $L^{1}(0,1)$ are continuous, Bull. London Math. Soc. 8 (1976), no. 2, 135–139.
[4] H. Kamowitz and S. Scheinberg, Derivations and automorphisms of $L^{1}(0,1)$, Trans. Amer. Math. Soc. 135 (1969), 415–427.
MAHYA GHANDEHARI, Dalhousie University
Matrix coefficients of unitary representations and projections in $L^1(G)$.
For a locally compact group $G$, the Fourier-Stieltjes algebra of $G$, denoted by $B(G)$, is the set of all the matrix coefficient functions of $G$ equipped with pointwise algebra operations. In this talk, we study subspaces of $B(G)$, called $A_\pi(G)$, generated by all the matrix coefficient functions of $G$ associated with a fixed unitary representation $\pi$. In particular, we consider the subspaces $A_\pi(G)$ for irreducible unitary representations $\pi$. We then discuss the construction of projections in $L^1(G)$ using elements of $A_\pi(G)$ when $\pi$ admits a certain admissibility condition.
MEHRDAD KALANTAR, Carleton University
Harmonic Operators on LC Quantum Groups
In this talk we consider the space of $\mu$-harmonic operators in $L^\infty(\mathbb{G})$, where $\mathbb{G}$ is a locally compact quantum group, and $\mu\in C_0(\mathbb{G})^*$ is a quantum probability measure. We discuss quantum versions of various classical results, along with some applications. This talk is partly based on joint work with Matthias Neufang and Zhong-Jin Ruan.
LAURA MARTI PEREZ
A groupoid generalization of the map $\overline{L^2(H)}\otimes L^2(H) \to A(H)$.
Let $H$ be a locally compact group and $A(H)$ its Fourier algebra. The map $q_0: \overline{L^2(H)}\otimes L^2(H) \to A(H)$ is a quotient map that respects the product. This result also admits an operator space version.
If we consider a locally compact groupoid $G$, we can define a Fourier algebra $A(G)$. In this talk we are going to present a map that extends $q_0$ to the groupoid context. In particular we need to define a trace-class type groupoid product on spaces that are projective tensor products of amplified $L^2$ row and column spaces.
MATTHEW MAZOWITA, University of Alberta
The weighted compactification of a group and topological centres
The spectrum of the algebra of LUC (left uniformly continuous) functions on a topological group G is a compact right topological semigroup with the Arens product, called the LUC-compactification of the group, and has topological centre equal to G. In the context of weights and Beurling algebras, the spectrum of the weighted LUC algebra is what we call the weighted LUC-compactification of the group. This compactification is not (in general) a semigroup but its algebraic properties reflect properties of the weight. We study this compactification and use it to find the topological centres of related semigroups and algebras and extend some results of Budak, Işık, and Pym on the existence of small sets which determine the topological centres of the LUC and group algebras to their weighted analogues.
VOLKER RUNDE, University of Alberta
Weighted Figà-Talamanca-Herz algebras
For a locally compact group $G$ and $p \in (1,\infty)$, we define and study the Beurling-Figà-Talamanca-Herz algebras $A_p(G,\omega)$. For $p=2$ and abelian $G$, these are precisely the Beurling algebras on the dual group $\hat{G}$. For $p =2$ and compact $G$, our approach subsumes an earlier one by H. H. Lee and E. Samei. The key to our approach is not to define Beurling algebras through weights, i.e., possibly unbounded continuous functions, but rather through their inverses, which are bounded continuous functions. We prove that a locally compact group $G$ is amenable if and only if one (and, equivalently, every) Beurling-Figà-Talamanca-Herz algebra $A_p(G,\omega)$ has a bounded approximate identity. This is joint work with S. Öztop and N. Spronk.
NICO SPRONK, University of Waterloo
On the algebra generated by pure positive definite functions
Let $G$ be a locally compact group. In his doctoral thesis at Alberta, Y.-H. Cheng studied the closed subspace $a_0(G)$, spanned by pure continuous positive definite functions, in the Fourier-Stieltjes algebra $B(G)$. We let $a(G)$ denote the closed algebra generated by $a_0(G)$. We show that $a_0(G)\subsetneq a(G)$, in general, by illustrating the examples of the Heisenberg groups $\mathbb{H}_n$ and $\mathrm{SL}_2(\mathbb{R})$. We show that $a(\mathbb{H}_n)$ is contained in the spine $A^*(\mathbb{H}_n)$ -- an algebra defined by M. Ilie and the speaker -- and is operator amenable. We also note that $a(\mathrm{SL}_2(\mathbb{R}))$ is not operator weakly amenable though it admits no point derivations.
This represents joint work with Y.-H. Cheng and B.E. Forrest.
KEITH TAYLOR, Dalhousie University
Groups with (essentially) one point duals
Let $G$ be a locally compact group and $\widehat{G}$ its dual space of equivalence classes of irreducible unitary representations, which carries the Mackey-Fell topology. In this talk, we consider groups $G$ of the form $A\rtimes H$ with $A$ abelian and $H$ acting on $A$ in such a manner that there exists a $\pi\in\widehat{G}$ with $\{\pi\}$ open and dense in $\widehat{G}$. In this case, $\pi$ is a square-integrable representation and its matrix coefficient functions satisfy generalized orthogonality relations which lead to an abundance of projections in $L^1(G)$ and transforms on $L^2(A)$ generalizing the continuous wavelet transform of $L^2(\mathbb{R})$. We will focus on presenting examples.
BEN WILLSON, University of Windsor
A Hilbert space approach to approximate diagonals for locally compact quantum groups [PDF]
For a locally compact group $G$, the unitary operator $W$ on $L^2(G\times G)$ given by $W\xi(x,y)=\xi(x,x^{-1}y)$ encapsulates the structure of $G$. If $G$ is amenable, then one can find simple tensors in $L^2(G)\otimes L^2(G)$ which, when acted upon by $W^*$, produce the square root of an (operator) bounded approximate diagonal for $L^1(G)$.
Using this approximate diagonal for a group algebra as a motivating example, this talk will discuss the relationship between these tensors and approximate identities and approximate translation invariant means. A general approach for approximate diagonals for predual algebras of locally compact quantum groups will be presented.
YONG ZHANG, University of Manitoba
The invariant subspace property for F-algebras [PDF]
We establish Ky Fan's finite dimensional invariant subspace theorem for left amenable F-algebras. This is joint work with A. T.-M. Lau.
## Sponsors
We warmly thank these sponsors for their support.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 117, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8788177967071533, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/1419/what-is-the-definition-of-momentum-when-a-mass-distribution-rhor-t-is-given
|
# What is the definition of momentum when a mass distribution $\rho(r,t)$ is given?
This question was edited after receiving comments.
What is the definition of momentum when a mass distribution $\rho(r,t)$ is given?
Assuming a particle as a point mass we know the definition of momentum as $p = mv$.
I need a definition where it is assumed that point masses are not present.
-
I don't know that they have other, more descriptive names. In what context are you asking your question? I mean, are you trying to derive the Navier Stokes equations or something like that? – sigoldberg1 Nov 29 '10 at 6:58
@sigoldberg1: I want to know what is meant by momentum and momentum distribution, in this context. – Rajesh D Nov 29 '10 at 7:04
maybe you'd be better off asking that. Or at least expand on your question to indicate why you're asking what you're asking and how momentum relates to it. – David Zaslavsky♦ Nov 29 '10 at 7:12
@David Zaslavsky: please answer the edited part in the question. – Rajesh D Nov 29 '10 at 8:39
## 3 Answers
If you have multiple masses the total momentum is $$p = \sum_i m_i v_i$$
If you have a continuum distribution then you can proceed as follows. Divide the continuum into small boxes such that within each box the mass and velocity are approximately uniform (this assumes some kind of smoothness of the distribution). Then you can approximate the total momentum with the above formula. Now letting the box sizes go to zero you obtain the integral
$$p(t) = \int \rho(r, t) v(r, t) dr$$
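As a minimal numerical sketch of this integral in one dimension (the Gaussian density and uniform velocity field below are made-up assumptions, not from the thread), one can approximate it with a Riemann sum:
````
import numpy as np

# Hedged sketch: discretize p(t) = integral of rho(r,t) v(r,t) dr in 1-D.
# The Gaussian density and uniform velocity are made-up example fields.
r = np.linspace(-5.0, 5.0, 1001)
dr = r[1] - r[0]
rho = np.exp(-r**2)           # example mass density at a fixed time t
v = 0.3 * np.ones_like(r)     # example velocity field (uniform flow)

p = np.sum(rho * v) * dr      # Riemann-sum approximation of the integral
# For uniform v this should match v * (total mass) = 0.3 * sqrt(pi):
print(p, 0.3 * np.sqrt(np.pi))
````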
-
@Marek: what do you call the quantity $\rho(r, t) v(r, t)$. What is $v(r, t)$ called ? – Rajesh D Nov 29 '10 at 11:17
@Marek: How are $\rho(r, t), v(r, t)$ coupled if we were to describe the newtonian gravitation in a set of differential equations. – Rajesh D Nov 29 '10 at 11:20
@Rajesh If it is a rigid body, this does not need a name since it is just constant velocity (rotation will cancel out). If it is fluid dynamics, this is just the flow. If it is elasticity, this is the deformation velocity field or something like this, I don't recall it too well. – mbq♦ Nov 29 '10 at 12:09
@Rajesh: I would call it unphysical. And probably also mathematically inconsistent. – Marek Nov 29 '10 at 13:22
@Rajesh: For any fluid that is at all reasonable, $\rho$ and $\vec v$ have to satisfy the continuity equation $\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\vec v \rho\right) = 0$, so they are not completely independent degrees of freedom anyway. – Jerry Schirmer Nov 30 '10 at 3:16
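As a quick sanity check of this continuity equation (a worked one-line example, with the profile chosen as an assumption): for a rigidly translating one-dimensional profile $\rho(r,t) = f(r - vt)$ with constant $v$, $$\frac{\partial \rho}{\partial t} + \frac{\partial (v\rho)}{\partial r} = -v\,f'(r-vt) + v\,f'(r-vt) = 0.$$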
Counter-question: What is the definition of mass, when a mass distribution is given?
Strange question, isn't it?
Since you have the mass distribution, you also have the momentum distribution.
It is simply $\rho v$.
-
Motion of a non-point object can be described as a combination of translation (which only depends on the total mass of the object) and rotation (which depends on the mass distribution). Rotational (angular) momentum is defined as $\mathbf{L}= I \boldsymbol{\omega}$, where $I$ is the moment of inertia, $I = \int_V \rho(\mathbf{r})\, d(\mathbf{r})^2 \, \mathrm{d}V$, with $d(\mathbf{r})$ the distance of the point $\mathbf{r}$ from the rotation axis.
As you can see, the mass distribution defines the rotational momentum of a system.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329467415809631, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/206534-rationals-irrationals-properties.html
|
# Thread:
1. ## Rationals and irrationals - properties
Hey guys!
So I have two questions which are similar, but not the same. The first asks me to prove that between any two distinct rational numbers there exists an irrational number - I haven't managed to do this. The second, which asks me to show that between any two real numbers there exists an irrational number, I have attempted, but I'm not sure whether the proof is adequate - it is written below (please bear in mind that I have already proven that between any two real numbers there lies a rational number, so the first statement of my proof follows from that theorem, and in addition that $\sqrt2 \notin \mathbb{Q}$):
Consider $\dfrac{a}{\sqrt2} < \dfrac{p}{q} < \dfrac{b}{\sqrt2}$ where $p,q \in \mathbb{Z}$ with $q\not= 0$ and $a,b \in \mathbb{R}$. That is, by definition, $\dfrac{p}{q} \in \mathbb{Q}$. Multiplying this inequality by $\sqrt2 > 0$ means that the inequality still holds, hence $a < \dfrac{p\sqrt2}{q} < b$ and therefore it follows that between any two real numbers there lies an irrational number as $\dfrac{p\sqrt2}{q} \notin \mathbb{Q}$.
2. ## Re: Rationals and irrationals - properties
some things you need to add, for clarity:
since a,b are assumed distinct, without loss of generality you may assume a < b (or else switch them, a standard tactic).
you should show why p√2/q is not rational (it's not hard, and only takes a line or two).
p needs to be non-zero, or else your argument fails. for example, what if a = -1/n, and b = 1/n, where n is a VERY large positive integer (like 3 billion)? this is a rather serious defect.
what can you do about this?
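A minimal sketch of the missing step mentioned above, assuming $p$ and $q$ are nonzero integers: if $\dfrac{p\sqrt2}{q}$ were rational, say $$\frac{p\sqrt2}{q} = \frac{m}{n}, \qquad m,n \in \mathbb{Z},\ n \neq 0,$$ then $\sqrt2 = \dfrac{mq}{np}$ would be a ratio of integers with $np \neq 0$ (this is exactly where $p \neq 0$ is needed), contradicting $\sqrt2 \notin \mathbb{Q}$.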
3. ## Re: Rationals and irrationals - properties
Originally Posted by Deveno
some things you need to add, for clarity:
since a,b are assumed distinct, without loss of generality you may assume a < b (or else switch them, a standard tactic).
you should show why p√2/q is not rational (it's not hard, and only takes a line or two).
p needs to be non-zero, or else your argument fails. for example, what if a = -1/n, and b = 1/n, where n is a VERY large positive integer (like 3 billion)? this is a rather serious defect.
what can you do about this?
Hmm, first of all thanks very much for your input; I really appreciate it.
I understand your points apart from the last one - sorry, I don't follow. I sort of understand that p must be non-zero, but how does the example where a = -1/n and b = 1/n relate?
Also, I've just proved that between two distinct real numbers there lies an irrational number. But how do I prove that between two distinct rational numbers there lies an irrational number? I thought that surely if all rational numbers are real then haven't I kind of just proved that already?
4. ## Re: Rationals and irrationals - properties
Originally Posted by Femto
Also, I've just proved that between two distinct real numbers there lies an irrational number. But how do I prove that between two distinct rational numbers there lies an irrational number? I thought that surely if all rational numbers are real then haven't I kind of just proved that already?
You have done that.
Between any two numbers there is a rational number.
5. ## Re: Rationals and irrationals - properties
How do I answer this question about rational and irrational numbers?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549505710601807, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Optical_flow
|
Optical flow
The optic flow experienced by a rotating observer (in this case a fly). The direction and magnitude of optic flow at each location is represented by the direction and length of each arrow. Image taken from [1].
Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene.[2][3] The concept of optical flow was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world.[4] James Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus for: the perception of movement by the observer in the world; perception of the shape, distance and movement of objects in the world; and the control of locomotion.[5] This feature has since been co-opted by roboticists, who use optical flow techniques (including motion detection, object segmentation, time-to-contact information, focus of expansion calculations, luminance, motion compensated encoding, and stereo disparity measurement) for image processing and control of navigation.[6][7]
Estimation of the optical flow
Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements.[7] Fleet and Weiss provide a tutorial introduction to gradient-based optical flow.[8] John L. Barron, David J. Fleet, and Steven Beauchemin provide a performance analysis of a number of optical flow techniques, emphasizing the accuracy and density of the measurements.[9]
The optical flow methods try to calculate the motion between two image frames which are taken at times t and $t+\Delta t$ at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.
For a 2D+t dimensional case (3D or n-D cases are similar) a voxel at location $(x,y,t)$ with intensity $I(x,y,t)$ will have moved by $\Delta x$, $\Delta y$ and $\Delta t$ between the two image frames, and the following image constraint equation can be given:
$I(x,y,t) = I(x+\Delta x, y + \Delta y, t + \Delta t)$
Assuming the movement to be small, the image constraint at $I(x,y,t)$ with Taylor series can be developed to get:
$I(x+\Delta x,y+\Delta y,t+\Delta t) = I(x,y,t) + \frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t+\text{H.O.T.}$
From these equations it follows that:
$\frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t = 0$
or
$\frac{\partial I}{\partial x}\frac{\Delta x}{\Delta t}+\frac{\partial I}{\partial y}\frac{\Delta y}{\Delta t}+\frac{\partial I}{\partial t}\frac{\Delta t}{\Delta t} = 0$
which results in
$\frac{\partial I}{\partial x}V_x+\frac{\partial I}{\partial y}V_y+\frac{\partial I}{\partial t} = 0$
where $V_x,V_y$ are the $x$ and $y$ components of the velocity or optical flow of $I(x,y,t)$ and $\tfrac{\partial I}{\partial x}$, $\tfrac{\partial I}{\partial y}$ and $\tfrac{\partial I}{\partial t}$ are the derivatives of the image at $(x,y,t)$ in the corresponding directions. In the following, $I_x$, $I_y$ and $I_t$ are written for these derivatives.
Thus:
$I_xV_x+I_yV_y=-I_t$
or
$\nabla I^T\cdot\vec{V} = -I_t$
This is an equation in two unknowns and cannot be solved as such. This is known as the aperture problem of the optical flow algorithms. To find the optical flow another set of equations is needed, given by some additional constraint. All optical flow methods introduce additional conditions for estimating the actual flow.
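As a hedged sketch of how these derivative terms can be estimated in practice (the frames are assumed to be plain numpy arrays, the frame spacing is taken as $\Delta t = 1$, and the function name is an illustrative choice, not from the literature):
````
import numpy as np

# Hedged sketch: estimate the terms of the brightness-constancy constraint
# I_x V_x + I_y V_y + I_t = 0 from two frames via finite differences.
def constraint_terms(frame0, frame1):
    Ix = np.gradient(frame0, axis=1)  # spatial derivative along x (columns)
    Iy = np.gradient(frame0, axis=0)  # spatial derivative along y (rows)
    It = frame1 - frame0              # temporal derivative, with Delta t = 1
    return Ix, Iy, It

# At any single pixel this gives one linear equation in the two unknowns
# (V_x, V_y) -- the aperture problem -- so an extra constraint is required.
````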
Methods for determining optical flow
• Phase correlation – inverse of normalized cross-power spectrum
• Block-based methods – minimizing sum of squared differences or sum of absolute differences, or maximizing normalized cross-correlation
• Differential methods of estimating optical flow, based on partial derivatives of the image signal and/or the sought flow field and higher-order partial derivatives, such as:
• Lucas–Kanade method – regarding image patches and an affine model for the flow field (a minimal sketch of this method is given after the list)
• Horn–Schunck method – optimizing a functional based on residuals from the brightness constancy constraint, and a particular regularization term expressing the expected smoothness of the flow field
• Buxton–Buxton method – based on a model of the motion of edges in image sequences[10]
• Black–Jepson method – coarse optical flow via correlation[7]
• General variational methods – a range of modifications/extensions of Horn–Schunck, using other data terms and other smoothness terms.
• Discrete optimization methods – the search space is quantized, and then image matching is addressed through label assignment at every pixel, such that the corresponding deformation minimizes the distance between the source and the target image.[11] The optimal solution is often recovered through Max-flow min-cut theorem algorithms, linear programming or belief propagation methods.
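As a minimal sketch of the Lucas–Kanade idea alone (the window size and function name are illustrative assumptions, and the other listed methods are not shown): assume the flow is constant over a small window around a pixel, stack one brightness-constancy equation per pixel, and solve the overdetermined system by least squares.
````
import numpy as np

# Hedged Lucas-Kanade sketch: solve the stacked brightness-constancy
# equations A [Vx, Vy]^T = b over a (2*half+1)^2 window by least squares.
# Assumes the window lies entirely inside the image.
def lucas_kanade_at(Ix, Iy, It, row, col, half=2):
    win = (slice(row - half, row + half + 1),
           slice(col - half, col + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # N x 2
    b = -It[win].ravel()                                      # length N
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy  # estimated flow components at (row, col)
````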
Uses of optical flow
Motion estimation and video compression have developed as a major aspect of optical flow research. While the optical flow field is superficially similar to a dense motion field derived from the techniques of motion estimation, optical flow is the study of not only the determination of the optical flow field itself, but also of its use in estimating the three-dimensional nature and structure of the scene, as well as the 3D motion of objects and the observer relative to the scene, most of them using the Image Jacobian.
Optical flow was used by robotics researchers in many areas such as: object detection and tracking, image dominant plane extraction, movement detection, robot navigation and visual odometry.[6] Optical flow information has been recognized as being useful for controlling micro air vehicles.[12]
The application of optical flow includes the problem of inferring not only the motion of the observer and objects in the scene, but also the structure of objects and the environment. Since awareness of motion and the generation of mental maps of the structure of our environment are critical components of animal (and human) vision, the conversion of this innate ability to a computer capability is similarly crucial in the field of machine vision.[13]
The optical flow vector of a moving object in a video sequence.
Consider a five-frame clip of a ball moving from the bottom left of a field of vision, to the top right. Motion estimation techniques can determine that on a two dimensional plane the ball is moving up and to the right and vectors describing this motion can be extracted from the sequence of frames. For the purposes of video compression (e.g., MPEG), the sequence is now described as well as it needs to be. However, in the field of machine vision, the question of whether the ball is moving to the right or the observer is moving to the left is unknowable yet critical information. Even if a static, patterned background were present in the five frames, we could not confidently state that the ball was moving to the right, because the pattern might be infinitely distant from the observer.
References
1. Huston SJ, Krapp HG (2008). "Visuomotor Transformation in the Fly Gaze Stabilization System". In Kurtz, Rafael. PLoS Biology 6 (7): e173. doi:10.1371/journal.pbio.0060173. PMC 2475543. PMID 18651791.
2. Andrew Burton and John Radford (1978). Thinking in Perspective: Critical Essays in the Study of Thought Processes. Routledge. ISBN 0-416-85840-6.
3. David H. Warren and Edward R. Strelow (1985). Electronic Spatial Sensing for the Blind: Contributions from Perception. Springer. ISBN 90-247-2689-1.
4. Gibson, J.J. (1950). The Perception of the Visual World. Houghton Mifflin.
5. Royden, C. S., & Moore, K. D. (2012). Use of speed cues in the detection of moving objects by moving observers. Vision research, 59, 17–24. doi:10.1016/j.visres.2012.02.006
6. Kelson R. T. Aires, Andre M. Santana, Adelardo A. D. Medeiros (2008). Optical Flow Using Color Information. ACM New York, NY, USA. ISBN 978-1-59593-753-7.
7.
8. David J. Fleet and Yair Weiss (2006). "Optical Flow Estimation". In Paragios et al. Handbook of Mathematical Models in Computer Vision. Springer. ISBN 0-387-26371-3.
9. John L. Barron, David J. Fleet, and Steven Beauchemin (1994). "Performance of optical flow techniques". International Journal of Computer Vision (Springer).
10. Glyn W. Humphreys and Vicki Bruce (1989). Visual Cognition. Psychology Press. ISBN 0-86377-124-6.
11.
12. Barrows G.L., Chahl J.S., and Srinivasan M.V., Biologically inspired visual sensing and flight control, Aeronautical Journal vol. 107, pp. 159–268, 2003.
13. Christopher M. Brown (1987). Advances in Computer Vision. Lawrence Erlbaum Associates. ISBN 0-89859-648-3.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8790534734725952, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/10435/negation-of-if-and-only-if/10442
|
# Negation of if and only if?
Let P be the statement "X is true if and only if Y is true". What is the negation of P? I am a little confused. It seems that the digital equivalent of this statement is P = X AND Y. Hence the negation of P is (NOT X) OR (NOT Y), i.e. either X or Y is false. Am I right, guys?
-
The negation happens to be equivalent to "X is true if and only if Y is false". – Rahul Narain Nov 15 '10 at 19:06
As a several-months-late aside, this is commonly expressed in mathematical English as "exactly one of X and Y holds." – ccc Aug 13 '11 at 15:38
## 7 Answers
$X\leftrightarrow Y$ is the conjunction of $X\leftarrow Y$ and $X\rightarrow Y$. The negation of a conjunction is the disjunction of the negations; the negation of $P\rightarrow Q$ is $P\wedge \neg Q$. So we have: \begin{align*} \neg(X\leftrightarrow Y) &\Longleftrightarrow \neg\Bigl( (X\rightarrow Y)\wedge (Y\rightarrow X)\Bigr)\\ &\Longleftrightarrow \neg(X\rightarrow Y)\vee \neg(Y\rightarrow X)\\ &\Longleftrightarrow (X\wedge \neg Y) \vee (Y\wedge \neg X). \end{align*} So the negation of "$X$ is true if and only if $Y$ is true" is "Either $X$ is true and $Y$ is false, or $X$ is false and $Y$ is true." Added: as it happens, as noted by Rahul Narain in his comment, this is in turn equivalent to "$X$ is true if and only if $Y$ is false" (just compare the cases when they are each true). So you also get that $$\neg(X\leftrightarrow Y) \Longleftrightarrow X\leftrightarrow \neg Y \Longleftrightarrow \neg X\leftrightarrow Y.$$
-
@Dilawar: You cannot accept both; you can only accept one answer. So pick one. If you like Gabe's answer better, then "give" it to him. – Arturo Magidin Nov 15 '10 at 19:43
@Dilawar: As an engineer, just toss a coin, then. (-: – Arturo Magidin Nov 15 '10 at 20:07
@Doug: Indeed, if you decide, as you usually do, to interpret everything in your own idiosyncratic and secret way, suddenly and miraculously nothing anybody else says will be correct. If that is the sum total of what you want to contribute, perhaps you can instead just talk to yourself. As for your downvoting (I'm guessing it's you) it seems as well founded and useful as the rest of your contributions to this site; i.e., not at all. – Arturo Magidin Aug 12 '11 at 21:14
@Doug: The assumption that the question does not refer to the usual meanings of the words is not just idiosyncratic, it is downright perverse, as is your attitude, your insistence on polish notation and neologisms, and your attempts at justifying your silliness. Trying to impose a nonstandard interpretation just to justify your vengeful downvoting... well that's just pathtetic. You aren't going to convince me, so take them elsewhere. – Arturo Magidin Aug 12 '11 at 22:20
@Doug:In short, you criticized and downvoted an answer about nine months after it was posted because if you interpret it in a non-standard context (non-classical logic), when there is no reason to believe the original poster meant anything of the sort, then it suddenly turns out to be inapplicable. Like I said: silly, self-serving, unhelpful, and trollish. As are the vast majority of your contributions to this site. – Arturo Magidin Aug 12 '11 at 23:06
The digital equivalent is P = X XNOR Y, and thus the negation is (not P) = X XOR Y. In other words, P is false when X is true but Y is false, or when X is false but Y is true.
-
hmmm.. Making sense to me. Lets wait for some more time. – Dilawar Nov 15 '10 at 19:16
Yes, but you have to be very precise here, because the negation of the biconditional is the exclusive (not inclusive) OR. So the answer is "Either $X$ or $Y$ is false, but not both".
In general, if you are confused, start with a truth table for the biconditional and then negate it. The resulting table matches XOR (exclusive OR).
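As a small brute-force sketch of this check: over all four truth assignments, the negation of the biconditional agrees with XOR and with $X \leftrightarrow \neg Y$.
````
from itertools import product

# Hedged sketch: verify NOT(X <-> Y) == (X XOR Y) == (X <-> NOT Y)
# by exhausting the four possible truth assignments.
for x, y in product([False, True], repeat=2):
    iff = (x == y)                   # X if and only if Y
    assert (not iff) == (x != y)     # the negation is exclusive OR
    assert (not iff) == (x == (not y))
print("negation of <-> agrees with XOR on all four assignments")
````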
-
You are not right. Let $X = (\ell$ is even) and $Y = (\ell$ is not odd). Then clearly $X \Leftrightarrow Y$, but "($\ell$ is not even) or ($\ell$ is odd)" is strictly weaker; you want "($\ell$ is not even) $\Leftrightarrow$ ($\ell$ is odd)" to be true.
-
The assertion $X \leftrightarrow Y$ can also be written as $X = Y$. So its negation is $X \neq Y$, which is the same as $X = \overline{Y}$ (since $X,Y \in \{0,1\}$), which is the same as $\overline{X} \leftrightarrow Y$.
-
The confusion here, I think, arises from not recognizing the principal connective at work. I know of at least three ways to figure out the principal connective.
1. Write the statement symbolically in Polish notation. The principal connective always gets represented by the very first symbol (or the string is not a wff).
2. Write the statement symbolically in reverse Polish notation. The principal connective always gets represented by the very last symbol.
3. Write out an abbreviated truth table. Just like regular truth tables you start with atomic wffs, then deal with longer and longer wffs "gradually". The last column that gets filled in falls under the symbol of the principal connective.
For this formula here's an abbreviated truth table with step numbers listed below the columns.
````(( x -> y) ^ (y -> x))
   F  T  F  T  F  T  F
   F  T  T  F  T  F  F
   T  F  F  F  F  T  T
   T  T  T  T  T  T  T
   1  2  1  3  1  2  1
````
Also, "x if and only if y" in Polish notation goes KCxyCyx. So, we have a conjunction, and thus its negation goes NKCxyCyx, a negation of the conjunction of two conditionals. What this implies depends on the logical system in place. If we have an appropriate De Morgan law for the logic, then we can infer ANCxyNCyx (at least one of either of the negation of one of the conditionals or the negation of the other conditional holds). But, that De Morgan law might not hold (and in fact doesn't hold for some logical systems). Also, "x if and only if y" isn't logically equivalent in two-valued logic to "x and y", as I hope the above makes clear from the truth table of "x and y".
-
There is no need to abbreviate a truth table when the proposition has only 2 variables. Secondly, even if there were a need, the table you wrote isn't it. I recommend you remove it because it makes no sense and only serves to add confusion. – bwkaplan Aug 13 '11 at 14:15
@Bwkaplan The number of variables simply does not determine anything about what type of wff you have. NKpq, Apq, CCCpqpq all have two variables. In other words, it doesn't tell you anything about the principal connective. The last step in an abbreviated truth table always falls under the column of the principal connective. ((x->y)^(y->x)) in words goes "'if x then y' and 'if y then x'", or in other words "'y, if x' and 'x, if y'", or "'x only if y' and 'x if y'", which more compactly becomes "x if and only if y". (x<->y) in words goes "the material equivalence of x and y". – Doug Spoonwood Aug 14 '11 at 1:10
From Doug's suggestion, I'm taking the extra step of making the logic explicit in a truth table.
````X Y P NOT-P
T T T F
T F F T
F T F T
F F T F
````
-
See my post now. – Doug Spoonwood Aug 12 '11 at 22:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9274441003799438, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/tagged/ideals+lattice-orders
|
# Tagged Questions
### Is there a ring with the lattice of ideals isomorphic to $(\omega+1)^{\operatorname{op}}?$
In this question, I gave an example of a ring whose lattice of two-sided ideals is order-isomorphic to $\omega+1$. I've been playing a bit with trying to find rings with a given lattice of ideals ...
### Diamonds of ideals, part 3
I'd like to wrap up the line of questioning started first in this question and then continued in this question. The only variant left to try is: "How close can you get to the Diamond lattice ...
### Followup to “Examples of rings with ideal lattice isomorphic to $M_3$, $N_5$”
In this post: Examples of rings with ideal lattice isomorphic to $M_3$, $N_5$ a nice example was given of a non-distributive ring. The lattice of ideals turned out to be the Diamond lattice $M_3$ with ...
### Are ideals in rings and lattices related?
There are (at least) two notions of ideals: An ideal in a ring is a set closed under addition and multiplication by arbitrary element. An ideal in a lattice is a set closed under taking smaller ...
### Examples of rings with ideal lattice isomorphic to $M_3$, $N_5$
In thinking about this recent question, I was reading about distributive lattices, and the Wikipedia article includes a very interesting characterization: A lattice is distributive if and only if ...
### Simple example of non-arithmetic ring
Can anyone provide a simple concrete example of a non-arithmetic commutative and unitary ring (i.e., a commutative and unitary ring in which the lattice of ideals is non-distributive)?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463162422180176, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/147089/translation-invariant-measures-on-mathbb-r
|
# Translation invariant measures on $\mathbb R$.
What are all the translation invariant measures on $\mathbb{R}$?
Apart from Lebesgue measure on $\mathbb R$, I didn't find any translation invariant measure, so I am asking this question.
I know that if $\mu$ is a measure then $c \times \mu$ is again a measure where $c>0$.
-
Up to a multiplicative constant, Lebesgue measure is the only translation-invariant measure on the Borel sets that puts positive, finite measure on the unit interval. I don't have a reference at hand, though. – Michael Greinecker May 19 '12 at 17:00
– Kevin May 19 '12 at 17:14
@Kevin: yes, this was known to Lebesgue already. He also asked explicitly whether it was possible to extend Lebesgue measure to the entire power set of $\mathbb{R}$ and whether such an extension was unique. This became known as Le problème de la mesure and influenced Banach's early work. The Banach-Tarski paradox is the most famous outgrowth of these investigations. – t.b. May 19 '12 at 19:00
## 3 Answers
Here is a way to argue this out. I will let you fill in the details.
1. If we let $\mu([0,1))=C$, then $\mu([0,1/n)) = C/n$, where $n \in \mathbb{Z}^+$. This follows from additivity and translation invariance.
2. Now prove that if $(b-a) \in \mathbb{Q}^+$, then $\mu([a,b)) = C(b-a)$ using translation invariance and what you obtained from the previous result.
3. Now use the monotonicity of the measure to get continuity from below of the measure for all intervals $[a,b)$.
Hence, $\mu([a,b)) = \mu([0,1)) \times (b-a)$.
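A sketch of step 1 in symbols, assuming finite additivity: since $[0,1)$ is the disjoint union of $n$ translates of $[0,1/n)$, $$[0,1) = \bigsqcup_{k=0}^{n-1} \left[\tfrac{k}{n}, \tfrac{k+1}{n}\right) \quad\Longrightarrow\quad C = \mu([0,1)) = n\,\mu([0,\tfrac{1}{n})),$$ which gives $\mu([0,1/n)) = C/n$.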
-
Let $\lambda$ be a translation-invariant measure on the Borel sets that puts positive and finite measure on the right-open unit interval $[0,1)$; then $\lambda$ is a positive multiple of Lebesgue measure. Here is an outline of the proof: every Borel measure is determined by its behavior on finite intervals. By translation invariance, you know that a right-open interval of length $1/2^n$ has measure $(1/2^n)\, \lambda[0,1)$, since $2^n$ such pieces form a disjoint cover of $[0,1)$ and every such piece can be translated onto every other such piece. Now you can approximate every interval by such pieces to pin down the measure of each interval.
-
$\mathbb{R}$ is a locally compact group with respect to addition and the translation invariant measures are the Haar measures on this group. A general theorem by Von Neumann states that such a measure is unique up to a multiplicative constant.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388514757156372, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/5377/proof-that-zfc-is-inconsistent-using-turing-machines
|
# “Proof” that ZFC is inconsistent using Turing machines
I came across the following "proof" for the inconsistency of ZFC and can't find the flaw in it (if there is one...):
Construct a Turing machine A which sequentially runs on all proofs in ZFC and checks if the claim "A does not halt or ZFC is inconsistent" is proved. If such a proof is found, A halts.
Note that A can know its encoding - see Kleene's recursion theorem - and even if it can't this "obstacle" can be overcome. So it seems to me such A is constructible.
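As a minimal sketch of this construction in Python-shaped pseudocode (the three helpers are hypothetical stubs standing in for what the recursion theorem and a ZFC proof enumerator/checker would supply; nothing here runs against a real prover):
````
def my_own_source() -> str:
    raise NotImplementedError  # guaranteed to exist by Kleene's recursion theorem

def zfc_proofs():
    raise NotImplementedError  # hypothetical enumerator of all ZFC proofs

def proves(proof, statement) -> bool:
    raise NotImplementedError  # hypothetical proof checker

def A() -> None:
    target = f"({my_own_source()} does not halt) or (ZFC is inconsistent)"
    for proof in zfc_proofs():  # run sequentially over all proofs
        if proves(proof, target):
            return              # A halts iff it finds a proof of `target`
````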
Now, if A halts then it is because it has found a proof for "A does not halt or ZFC is inconsistent". Since A halted, it is implied that ZFC is inconsistent. So either A does not halt, or ZFC is inconsistent.
Now (and my guess is that this is the part where this proof "cheats") we have just proved the claim "A does not halt or ZFC is inconsistent", so such a proof exists in ZFC. So A must halt on this proof - and we have seen that this implies that ZFC is inconsistent.
Where is the mistake?
-
In the antepenultimate paragraph, did you mean *"So either A does not halt, or ZFC is inconsistent"? Also: how do you justify that "such a proof exists in ZFC"? – Arturo Magidin Sep 24 '10 at 14:36
Yes, thank you. – Gadi A Sep 24 '10 at 14:37
Surely ZFC is consistent, and is rather not complete? – Noldorin Sep 24 '10 at 19:49
@Noldorin: Hence the question. – ShreevatsaR Sep 25 '10 at 3:11
If you ran into this in a published document, could you give a pointer? It would be a nice exercise for beginning grad students learning logic – Carl Mummert Sep 25 '10 at 11:46
show 2 more comments
## 7 Answers
Here's my take on it. Use $(A \uparrow)$ to mean $A$ does not halt, and $(A \downarrow)$ to mean $A$ does halt. Let $\Phi$ be any sentence; the question uses ~Con(ZFC) but this is not material. Let $\mathrm{Pvbl}(b)$ be the standard ZFC-formalized provability predicate, which says that there is a coded proof of the formula with number $b$.
The question is right that, by the recursion theorem, we can create a specific machine $A$ such that $$(A \downarrow) \Leftrightarrow \mathrm{Pvbl}((A\uparrow) \lor \Phi)$$
Moreover, because of the specific form of the formula $(A \downarrow)$, ZFC is able to prove $$(A \downarrow) \to \mathrm{Pvbl}(A\downarrow)$$
Working in ZFC, assume $(A \downarrow)$. Then we know $\mathrm{Pvbl}(A \downarrow)$ and $\mathrm{Pvbl}((A \uparrow) \lor \Phi)$. ZFC is able to prove enough about the Pvbl predicate to ensure that $$\mathrm{Pvbl}(\psi) \land \mathrm{Pvbl}(\lnot \psi \lor \theta) \rightarrow \mathrm{Pvbl}(\theta)$$ for all $\psi$ and $\theta$. So we can obtain $\mathrm{Pvbl}(\Phi)$.
Now: what we have obtained is: "A does not halt or Pvbl($\Phi$)". In the fourth paragraph of the question, it instead claims we can obtain "A does not halt or $\Phi$". That is a stronger statement, and this is the first error I see in the proof. In the fourth paragraph, it is silently assumed that provability implies truth, but this is not correct for formalized provability.
ZFC does not prove $\mathrm{Pvbl}(\Phi) \to \Phi$ in general. In particular, ZFC does not prove $\mathrm{Pvbl}(0=1) \to (0=1)$ because there are models of ZFC in which both $\mathrm{Pvbl}(0=1)$ and $0 \not = 1$ are satisfied.
Addendum: Löb's theorem addresses this exact question. Applied to ZFC, it states that if ZFC proves $\mathrm{Pvbl}(\Phi) \to \Phi$ then ZFC already proves $\Phi$. So, in particular, ZFC does not prove that Pvbl(~Con(ZFC)) implies ~Con(ZFC).
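In symbols, with the usual corner-quote notation: $$\text{if } \mathrm{ZFC} \vdash \mathrm{Pvbl}(\ulcorner \Phi \urcorner) \to \Phi, \text{ then } \mathrm{ZFC} \vdash \Phi.$$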
-
Thank you very much. My intuition still bothers me about that fact that if ZFC proves that X is provable, it still does not say that X is true - after all, isn't it what the standard interpretation of ZFC should ensure? – Gadi A Sep 25 '10 at 8:33
It's a subtle point. It's true that if ZFC proves Pvbl($\Phi$) then, because Pvbl($\Phi$) must hold in every model of ZFC and ZFC has an ω-model, $\Phi$ really is provable. However, ZFC doesn't prove the scheme $\mathrm{Pvbl}(\Phi)\to\Phi$ because this scheme fails in some models of ZFC. The Pvbl predicate quantifies over the "natural numbers" in a given model of ZFC, which might not be the actual natural numbers. If Pvbl($\Phi$) holds in a non-$\omega$-model, and the number that witnesses the existential quantifier is nonstandard, then that "coded proof" is not an actual proof. – Carl Mummert Sep 25 '10 at 11:40
Gadi constructed a Turing machine enumerating possible proofs of a statement P that is by construction true but not provable in ZFC. The only flaw I see in his reasoning is (as he originally pointed it) where he assumes that because he just proved that P must be true, then his proof can be formalized in ZFC. You say that Gadi assumed that provability in ZFC implies truth, which seems to me to be the very reason why we constructed ZFC in the first place, assuming it is consistent. To me, it looks like Gadi has assumed that truth implies provability in ZFC. – user519 Sep 25 '10 at 13:54
I said that the problem is that the question assumed that formalized provability implies truth: that is, it assumed that Pvbl($\Phi$) implies $\Phi$, which is not right. The issue is that Pvbl only corresponds to actual provability in the some models, so although it is possible to prove "A does not halt or Pvbl($\Phi)$" in ZFC, it is not apparently possible to prove "A does not halt or $\Phi$" in ZFC. As I read it, the point of the question is to determine exactly where things go wrong if you try to actually formalize the proof in ZFC. That's what I addressed. – Carl Mummert Sep 25 '10 at 23:13
You've just proved that "either A does not halt or ZFC is inconsistent" must be true, not that this statement is provable in ZFC. There exist unprovable true statements in ZFC (if it is consistent). See Gödel's first incompleteness theorem.
-
You are correct, of course, but naively it seems that everything in my "proof" can be formalized inside ZFC. I can't put my finger on the "hard part" that cannot be formalized. – Gadi A Sep 24 '10 at 14:55
I don't have an answer to this question but you definitely showed that this proof can't be formalized inside ZFC. We can actually simplify your problem a bit. Take a Turing machine A that enumerates proofs in ZFC and halts iff it finds a proof of "A doesn't halt". Obviously, A cannot halt if ZFC is consistent and therefore there can't be a proof in ZFC of "A doesn't halt" although it is true. That's actually a sketch proof of Gödel's first incompleteness theorem. – user519 Sep 24 '10 at 15:25
Note the difference - in "my" problem I give a proof for the claim "A does not halt or ZFC is inconsistent" - the only problem is that my proof is not phrases in formal terms and so maybe can't be formalized in ZFC. However, this is a simple proof, and usually such proofs are assumed to be possible to phrase in ZFC without question. – Gadi A Sep 24 '10 at 15:32
For such questions, it is hard to get an intuitive understanding. I think the best way to understand it intuitively is by formalizing it using model theory. So let's look at it.
Say you define your A as a set in ZFC, using an $\cal{L}_1$-Formula, and denote the property of "halting" as $\cal{H}$, which can also be expressed in ZFC. So by your assumption, you have $ZFC\models{\cal H}(A)\leftrightarrow(ZFC\models\bot \vee ZFC\models\lnot{\cal H}(A))$.
With a little more theory, you can imply therefore $ZFC+{\cal H}(A)\models (ZFC\models\bot\vee ZFC\models\lnot{\cal H}(A))$, and therefore, $ZFC+{\cal H}(A)\models (ZFC\models\bot)$. That is your one direction.
The other direction goes with $ZFC+\lnot{\cal H}(A)\models \lnot (ZFC\models\bot \vee ZFC\models\lnot{\cal H}(A))$ (which follows from the first assumption), i.e. $ZFC+\lnot{\cal H}(A)\models (ZFC\not\models\bot) \wedge (ZFC\not\models\lnot{\cal H}(A))$. That is, in every model of ZFC in which $\lnot{\cal H}(A)$ holds, every "sub-model" which can be created by doing meta-theory inside that one neither models $\bot$ nor models $\lnot{\cal H}(A)$.
-
You might find Scott Aaronson's lecture on this useful. Search for the phrase "Talk to the axioms!"
In summary, suppose ZFC is consistent. I'll work with Alex's simplification, where $A$ searches for a ZFC proof of "A does not halt." Then there is a model $M$ of ZFC+"$A$ halts". In $M$, there is a set $S$ which satisfies whatever sentence of ZFC you have cooked up to encode "a halting sequence of states for $A$". There is a set $P$ which satisfies the first-order sentence you have cooked up to encode "a ZFC proof that $A$ does not halt."
From the perspective of the model $M$, there is no contradiction. The model sees that $P$ encodes a ZFC proof that $A$ doesn't halt, the model sees that $A$ does halt, so the model believes that ZFC is inconsistent. Since Con(ZFC) is not an axiom of ZFC, the model sees nothing wrong with this.
From an outside perspective, $S$ does not actually look like a sequence of states for the Turing machine $A$, and $P$ does not look like a proof. They are just objects of the model which the model thinks have those features, because it has some strange interpretation of set theory. Even if you could construct $M$ (and I think there are obstacles to $M$ being computable in any reasonable way), you could not convert $P$ into an actual ZFC proof that $A$ does not halt, nor could you convert $S$ into a halting sequence for $A$. Any attempt to convert them into an actual proof or actual halting sequence will run into the same difficulty Scott describes in trying to extract a proof of NOT(Con(PA)) from a model of PA+NOT(Con(PA))
-
Here's a simplified form of your original argument:
Construct a Turing machine A which sequentially runs on all proofs in ZFC and checks if the claim "A does not halt" is proved. If such a proof is found, A halts.
Note that A can know its encoding - see Kleene's recursion theorem - and even if it can't this "obstacle" can be overcome. So it seems to me such A is constructible.
Now, if A halts then it is because it has found a proof for "A does not halt". This is impossible, since A halted.
Now we have just proved the claim "A does not halt", so such a proof exists in ZFC. So A must halt on this proof - a contradiction.
(Hopefully this version makes the error more clear. The proof that "A does not halt" assumes the consistency of ZFC, and is therefore a proof in ZFC + Con(ZFC), not ZFC. In fact, the above argument is essentially a proof that Con(ZFC) is not provable in ZFC.)
-
Assuming you're working in ZFC (not assuming Con(ZFC)), it's not actually contradictory for A to find a "coded proof" that A halts. If A halts, all we know is there is a (maybe nonstandard) number that codes a proof of "A does not halt". This does not actually contradict A halting (which just means some $\Sigma^0_1$ formula is satisfied). I do think that you have shown that any model in which A halts has an inconsistent provability predicate (meaning Pvbl($\theta$) holds for all $\theta$). But Pvbl($\theta \land \lnot \theta$) is not a contradiction, and doesn't imply anything about $\theta$. – Carl Mummert Sep 26 '10 at 11:53
The problem is that if the machine manages to prove it, it halts, and therefore ZFC is inconsistent.
-
I don't see how this addresses the question. – Carl Mummert Sep 24 '10 at 18:49
Why not replace "ZFC is inconsistent" by false? That way you could prove anything.
But this argument is not a proof, because it has an extra assumption: that A halts iff there is a proof that A does not halt. That is, you assume such an A is constructible.
What does consistent mean? Someone defines consistent as ~provable(0 = 1), but this only implies the axioms are not against our intuition. But we could have 0 = 1 as an axiom.
-
The theory {A does not halt} proves A does not halt, no matter A halts or not. So it's all about whether we accept the axioms of the theory. If we accept them as true, the proof implies truth. Another question is what if we choose the theory as the empty set of axioms, then certainly proof implies truth. – Zirui Wang Dec 24 '10 at 5:16
What's Kleene's recursion theorem and how is it applied to the construction of A? How is it proved? – Zirui Wang Dec 24 '10 at 6:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509453773498535, "perplexity_flag": "middle"}
|
http://www.quantummatter.com/beyond-point-particle/
|
# Beyond the Point Particle - A Wave Structure for the Electron
May 15th, 1998 by Milo Wolff
Galilean Electrodynamics 6
No. 5, October 1995, pages 83-91
(updated 15 May 1998)
## Abstract.
The dual particle/wave nature of the electron has long been a paradox in physics. It is now seen that the electron consists entirely of a structure of spherical waves whose behavior creates their particle-like appearance. The correctness of this structure is supported by the physical laws which originate from this wave structure, including quantum theory, special relativity, electric force, gravity, and magnetism. This type of structure is termed a Space Resonance.
Key words: Electron, Physical Laws, Cosmology, Particles, Space, Quantum Mechanics.
## INTRODUCTION
The apparent inconsistencies between the point particle theory and the observed wave behavior of the electron are reconciled by an electron structure, the Space Resonance. Electrons or positrons can be described as a pair of spherical scalar waves diverging and converging at their center. This simple structure produces the observed properties of electrons. This basic charged-particle structure is then found to be the origin of the basic laws of physics, including quantum theory, relativistic mass increase, inertia, charge and electromagnetism.
The Space Resonance structure is obtained from three assumptions or principles:
1. a Wave Equation describing spherical scalar waves,
2. the Space Density Assumption, which leads to an energy exchange mechanism, and
3. the Minimum Amplitude Principle, which regulates particle interactions.
### Origins of Natural Laws.
The business of physics is the abstract quantification of facts observed in nature. The rules we form for reconstruction and expression of the observed facts are the laws of nature and Principles of nature. The distinction between them is tied to their generality. Principles are considered to be more general and by implication more basic. For example, the Principle of Least Action is inferred from several of the force laws and the principle of Conservation of Energy expresses all the various heat and energy flow laws.
Since laws are obtained by measurement of nature rather than derived from other knowledge, they are by definition empirical and "of unknown origin". Therefore, if we seek to find the origins of laws, we cannot use the existing laws themselves but must use other observed facts together with logical deduction and established mathematics. Rarely is a law found contained within another law. For example, the gas law $PV = nRT$ was seen half a century ago to be a result of Newton's laws and QM applied to molecules in a closed container. Such serendipity is the exception; today, the search for origins must probe deeper into nature than heretofore, and we must be prepared to find unprecedented perspectives of nature. Growing evidence cited by Galeczki[1] is compelling that the basic laws are intimately involved with cosmology and are dependent on relationships between individual particles and the remaining matter of the universe. Accordingly, in the search for the origins of natural laws, observations of unexplained puzzles of quantum particles and cosmology are attractive sources of input data.
When seeking origins, it is important not to inadvertently use existing laws to deduce themselves. Although the quantum laws of quantum particles can be extrapolated to large macro-objects, the inverse is not possible. Such circular reasoning can occur if, for example, an e-m field or mechanical model from macro-physics is assumed to be the structure of a quantum particle. Logically, finding the origins of existing laws requires forming new concepts that nevertheless satisfy observed data. It is a major result of this article to further deduce that most of the natural laws originate from the properties of the quantum waves of the charged particles (electron, proton, etc.), and the properties of the space (Ether, vacuum, etc.) which is formed from the totality of all those particle quantum waves. One such effect is already known as Mach's Principle which asserts that inertia is a result of an inertial reference frame established by all matter in the universe.
The discovery of these origins from the work of this article creates a radical new picture of the physical world: quantum mechanics and relativity are in a sense united, origins of forces are understood, puzzles and paradoxes are explained and, most important, relationships between microphysics (electrons and particles) and the universe (cosmology) are seen to be a result of an all-pervading "space" (the vacuum or Ether) filled with oscillating quantum (particle) waves.
The reader should be aware that he is evaluating a new basic proposal that all natural science results from just three assumptions about the properties of space.
## SECTION I - HISTORY
The search for the structure of the electron started over a century ago, in H.A. Lorentz's book[2] "Theory of the Electron". No satisfactory structure has been found (until now). As late as the 1950's, Einstein was asked if he could explain the confusion of hadron particles which were being found in ever increasing numbers. He replied:
"I would be happy just to know what an electron is!"
Many have suggested that a wave-structured electron plays a fundamental role in nature. The famous geometer-mathematician Clifford[3] suggested in 1876 that all physical laws were the result of undulations (waves) in the fabric of space. Nobel laureate Paul Dirac, who developed much of the theory describing the quantum waves of the electron, was never satisfied with its point-particle character because the Coulomb electron required a mathematical correction termed "renormalization". In 1937, he wrote:
"This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small — not neglecting it because it is infinitely large and you do not want it!"
Weyl, Clifford, Einstein, and Schrödinger[4] agreed that the puzzle of matter will be found in the structure of space, not in point-like bits of matter. They speculated that the physical world is based upon a geometry of space. What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. The complexity of physics and cosmology is just a special geometry. This idea had an enduring appeal because of its economy of concepts and simplicity of design.
In 1945, Wheeler and Feynman[5] represented charge by assuming a pair of spherical inward and outward electromagnetic waves. Their use of advanced (inward) waves is an apparent violation of the principle of causality which states:
"Events cannot occur before their causes."
Wheeler and Feynman showed that the puzzling inward waves do not violate causality because they are not directly observable. Their results hold for scalar waves which are exact solutions of a wave equation in spherical coordinates.
Phipps[6] put forth a beta-structure hypothesis, in which he suggested that the electron-positron is the fundamental particle of the universe. He reasoned that the infinite extent of charge forces were more fundamental than local effects of baryons. Wolff[7, 8, 9] formulated the results described here.
This paper shows that these scientists' visions have come true. Dirac was correct. The electron is a wave structure without particle substances. The medium of the waves is space, still unexplored but related to "vacuum" and "Ether"; terms increasingly used as the wave nature of matter becomes unmistakable.
## SECTION II - PUZZLES OF ELECTRON STRUCTURE
### A. Is the Electron a Wave or a Particle?
The electron exhibits properties of both particles and waves. However, many experiments have been done to search for a core of the electron without results. What we do observe is that energy exchanges take place at "point-like" locations in the metallic lattice of detectors. On the other hand, wave properties of an electron are obvious from the success of Quantum Mechanics. This theory describes a mechanism where waves interact at point locations and thereby produce the results that we observe as particle-like.
### B. What is the Mechanism of Energy Exchange?
Exchanges between charged particles are the dominant way in which energy is transferred in our solar system. An exchange is always required to darken a film, move a needle, record a bit, or fire a neuron. These exchanges dominate our technology, daily lives, and Nature. They are the means of our human senses, laboratory experiments, and the production of knowledge, but the mechanism is unknown.
Furthermore, an Energy Exchange Mechanism can be seen to underlie the force laws and even special relativity, the DeBroglie wavelength, and Conservation of Energy. For example, the force laws describe force as the change in energy over distance, $F = \frac{dE}{dr}$. Therefore, whatever motivates the change in energy generates what we observe as force. The coulomb and gravity force laws do not describe what creates these forces because they are only formulas to calculate force. That is, they do not imply any particular energy exchange mechanism. This mechanism for the electron, described below, depends on the existence of other matter in the universe.
### C. How does Matter depend on other Matter in the Universe?
The mere existence of a particle imposes requirements upon its properties. Without particles to populate a universe, the universe could not exist because our concept of "universe" is simply a collection of particles and their distribution. Thus our understanding of our universe depends on our understanding of the particles in it. Further, the natural laws of the universe could not exist without particles; Laws demand the presence of particles, upon which the laws can operate. Laws without particles are meaningless because particles are the objects of the laws. Especially we need to understand the relationship of the laws to the electron and proton, the two charged particles whose infinite fields dominate the universe.
And the opposite is true. We cannot identify a particle without the force laws to locate and measure it. Thus our perception of particles depends on the form of the natural laws. These three, particles, laws, and the universe are an interdependent trilogy. Each requires the existence of the others. Therefore, we cannot expect to understand cosmology, the structure of the universe, unless we also understand the relationships within the trilogy. The nature of the relationships between separated particles of matter, more basic than the forces between them, is brought out by the following arguments.
#### Measurement is a Property of an Ensemble of Matter.
A particle entirely alone in the universe cannot have dimensions of time, length, or mass. These measures are undefined without the existence of other matter because dimensions can only be defined in comparison with other matter. For example, at least five separated particles are necessary to crudely define length in a 3D space: four to establish coordinates and one being measured. Thus the measurement concept requires the existence of an ensemble of particles. In our universe the required ensemble must include all observable matter, for there is no way to choose a special ensemble. The importance of this fact becomes clear when we recall that time, length and mass are the basic unit set used to describe all scientific measurements.
#### Particle Properties require Perception-communication Between Particles.
If there were no means for each particle to sense the presence of other matter in its universe, the required dimensional relationships above could not be established. How can a particle possess a property which is dependent on other particles, if there is no way for the particles to impart their presence to each other? Without communication, each particle would be alone in its own separate universe. Therefore continual two-way perceptive communication between each particle and other matter in its universe is needed to establish the laws of nature. The laws are then established in terms of the dimensions (units) established by the ensemble of matter.
Figure 1. The Dynamic Waves of a Space Resonance.
We deduce that the waves of an electron structure are the means of the communication between particles of matter. Below in Section III we shall see that the mathematical solutions of the wave equation indeed allow for two-way continuous communication by means of waves which form the electron structure. This reasoning underlying measurement yields boundary criteria on the structure of the electron summarized in the two corollaries below:
##### Corollary I.
There exists a means of continual communication between particles which takes place in the space (Ether, vacuum) of the universe of the particles.
##### Corollary II.
A "universe" is defined for each particle as the space and other particles within the space which are able to communicate with the particle.
#### The Measurement of Time requires a cosmological clock.
Using reasoning similar to the above but for the dimension of time, we can conclude that time measurement requires the existence of cyclic events among the particles of the universe; a kind of clock. Those properties of particles which involve the measurement of time, notably mass and frequency, cannot have a meaning if particles have no scale of time. That is, the particles themselves must have a way to compare their own cyclic events with other particles. Therefore, there must exist a standard cosmological clock. One straightforward proposal is a cosmic clock contained in every identical particle structure as an oscillator which communicates with other particles. Because of the uniformity of space (the oscillator medium) the clocks would be alike.
#### The role of Space.
Since all the laws of nature are written in terms of the dimensions (time, length, mass) defined by ensembles of matter communicating in the space of a universe, we infer that the behavior of matter is at least partly determined by the geometric properties of the space (Aether) within the universe.
It may be noted that Einstein's General Theory of Relativity (GTR) is also derived from properties of space that determine the large scale motion of matter and light beams. Similarly, measurements in GTR space depend upon the distribution of matter in the universe. However, unlike the viewpoint employed here, the GTR theory is descriptive rather than investigative. And the large-scale GTR does not involve quantum-level properties nor is it concerned with communication between particles. Nevertheless the properties of space viewed from this quantum perspective, particles dependent upon particles, should, when expanded to the limit of large scale matter, be the same as the GTR.
### D. Mach's Principle.
The unknown origin of Newton's law of inertia, $F = \frac{dp}{dt}$, has attracted frequent attention. Ernst Mach[10] in 1883 boldly suggested that inertia depends upon the existence of the distant stars. His concept arises from two fundamentally different methods of measuring the speed of rotation. First, without looking at the sky, one can measure the centrifugal force on a mass $m$ and use Newton's Law in the form, $F = \frac{mv^{2}}{r}$, to find circumferential speed $v$. The second method compares the object's angular positions with the fixed (distant) stars. Mysteriously, both methods give exactly the same result. Mach reasoned that there must be a causal connection between the distant matter in the universe and inertia. He asserted:
"Every local inertial frame is determined by the composite matter of the universe."
(This wave structure of the electron now proves that Mach was right.)
Mach's Principle of Inertia is the clearest evidence that very distant bodies can affect us instantaneously. Phipps[6] quotes Mach:
"When the subway jerks, it is the fixed stars that throw you down."
Mach's Principle is criticized because it appears to violate causality:
"Events cannot occur before the causes which produce them."
but this does not actually occur as will be seen below where Mach's Principle is used to find the energy exchange mechanism of the electron.
## SECTION III - THEORY OF THE NEW ELECTRON
Three assumptions about the properties of space determine the space resonances. In return for this investment, the theory obtains a physical and mathematical origin for natural laws plus relationships between particles and cosmology.
### A. Assumption I - The Wave Equation.
Because it must be compatible with quantum theory, a scalar wave equation is needed to describe the structure of natural electrons. Spherically symmetric solutions are required because charged particles have spherical symmetry. Quantum theory requires the frequency of the waves to be proportional to the mass according to the formula $f = \frac{mc^2}{h}$. Two solutions of the wave equation shown in Figure 2 describe the physical structure of the electron.
Figure 2. The Dynamic Waves of a Space Resonance.
The resonance is composed of a spherical IN wave which converges to the center and an OUT wave which diverges from the center. Their separate amplitudes are infinite at the centers. When combined, the two waves form a standing wave which has a finite amplitude at the center. The standing wave is the structure of the electron. The inward and outward waves provide communication with other matter of the universe. Spin of the electron is a result of the reversal of the IN wave at the center to become the OUT wave.
Equation 2 below shows that an electron is comprised of two spherical scalar waves traveling in space with velocity $c$; one inward to a center and the other outward. The two superimposed waves form a standing wave, termed a Space Resonance (SR). The center of the wave structure is the nominal location of the electron. These space resonances are perpetual spherical oscillators. Each resonance extends throughout space and interacts with other resonances so that the natural laws result from the properties of the waves and the medium they travel in, 'space' or the Ether.
The Wave Equation for the electron, in spherical coordinates, is:
Formula 1
$\dfrac{\partial^{2}\Psi}{\partial r^{2}} + \dfrac{2}{r} \dfrac{\partial\Psi}{\partial r} - \dfrac{1}{c^{2}} \dfrac{\partial^{2}\Psi}{\partial t^{2}} = 0$
where $\Psi$ is a continuous scalar amplitude with values everywhere in space and $c$ is the propagation speed. This equation has two spherical wave solutions for the amplitude $\Psi$: One of them is a converging IN wave and the other is a diverging OUT wave, shown in Figure 2,
Formula 2
$\Psi_{IN} = \dfrac{\Psi_{0}\ e^{i(\omega t + k r)}}{r}$
$\Psi_{OUT} = \dfrac{\Psi_{0}\ e^{i(\omega t - k r)}}{r}$
The IN and OUT waves combine to form a standing wave. $\omega$ is the frequency characteristic of an electron proposed by deBroglie and Schrödinger. $k$ is the wave constant. The amplitude of the continuous waves is a scalar number, not an electromagnetic vector. At the center the standing wave amplitude is finite, not infinite, in agreement with the observed electron.
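As a quick consistency check (added here, not part of the original paper), the two waves of Formula 2 can be substituted into Formula 1 symbolically. With the dispersion relation $\omega = kc$ assumed, both satisfy the wave equation everywhere except the origin:

```python
import sympy as sp

r, t, c, k, P0 = sp.symbols('r t c k Psi_0', positive=True)
w = k * c  # assumed dispersion relation: omega = k*c

for sign in (+1, -1):  # +1 gives the IN wave, -1 the OUT wave
    Psi = P0 * sp.exp(sp.I * (w * t + sign * k * r)) / r
    lhs = (sp.diff(Psi, r, 2) + (2 / r) * sp.diff(Psi, r)
           - sp.diff(Psi, t, 2) / c**2)
    assert sp.simplify(lhs) == 0  # Formula 1 is satisfied for r > 0
```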
A standing wave results by combining them with their amplitudes opposing at $r = 0$, to get:
Formula 3
$\Psi_{STANDING} = \Psi_{IN} - \Psi_{OUT} = \dfrac{\Psi_{0}\ e^{i(\omega t + kr)}}{r} - \dfrac{\Psi_{0}\ e^{i(\omega t - kr)}}{r}$
The equation, $\mbox{Energy} = mc^{2} = hf$, converts units of energy into units of frequency. Thus mass is proportional to frequency of the electron's space resonance oscillator: $\omega = 2 \pi f = 2 \pi \frac{mc^{2}}{h}$. All the waves of all the charged particles in the universe have this same frequency because the frequency is a property of the wave medium - space itself, the Ether. This frequency is the universal cosmic clock which regulates the laws of nature and our sense of time. The velocity $c$ is also a universal property of the Ether which we observe as the speed of energy exchange (light).
This equation becomes clearer when changed to a simpler exponential function:
Formula 4
$\Psi_{STANDING} = \dfrac{\Psi_{0}\ e^{i \omega t} \cdot sin(kr)}{r}$
The exponential factor is an oscillator. The sine function modulates the rapid oscillator waves with a standing wave of wavelength $\lambda = \frac{2\pi}{k}$ which, surprisingly, is the Compton wavelength of the electron. The intensity is the envelope of $\Psi\!\cdot\!\Psi^{*}$, which decreases as $\frac{1}{r^{2}}$ away from the center. This equation is simulated in the animation of the electron on this website.
The amplitude, $\Psi_{STANDING}$, corresponds to the electric potential of the electron. The amplitude at the center is obtained by taking the limit as $r \rightarrow 0$ in $\frac {sin(kr)}{r}$ in the equation above and is equal to $k\Psi_{0}$. This finite amplitude explains why 'renormalization' in QED theory works. Renormalization boils down to an arbitrary cut-off of the Coulomb electric potential to avoid an unwanted infinity at the center when $r \rightarrow 0$. Avoiding the infinity because it was annoying was Dirac's complaint. Although the reason was contrived, the cut-off worked. Now, since experiments show extremely accurate verification of the cut-off, one can regard the cut-off as an observed correction to the electron potential, which elsewhere is the well-known $\frac{1}{r}$ Coulomb form. The space resonance structure correctly shows the origin of the correction, the finite center amplitude.
#### Predicted Properties from the Wave Equation.
From only the first of three assumptions, several properties of the electron are already observable:
1. There are two kinds of SR electrons as a result of two ways to superimpose the IN and OUT waves. One combination has a negative IN-wave amplitude at the center and corresponds to the electron. The other has a negative OUT-wave at the center forming an anti-resonance which is the positron. If the anti-resonance is superimposed upon the resonance, they annihilate like electron and positron. This is seen from the equations.
2. They obey Feynman's Rule: A positron is an electron going backward in time. To see this, replace the variable $t$ with $-t$ in the function for an electron resonance, Equation 3. Replacing the time exchanges the IN and OUT waves and you obtain the equation for a positron, as Feynman said (see the symbolic check after this list).
3. The Origin of Conservation of Energy. Energy is exchanged in nature by two resonances (oscillators) interacting with each other. For all oscillator pairs known in nature, significant coupling occurs only if both have the same resonant frequency. If one oscillator changes frequency upward, the other changes frequency downward. Thus, the frequency (energy) changes of interacting space resonances are equal and opposite. This is exactly the content of the Conservation of Energy law.
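A small symbolic check of Feynman's Rule in item 2 above (my sketch, not the paper's; the swap holds up to complex conjugation, which leaves the observable intensity $\Psi\Psi^{*}$ unchanged):

```python
import sympy as sp

r, t, w, k, P0 = sp.symbols('r t omega k Psi_0', positive=True)
Psi_in = P0 * sp.exp(sp.I * (w * t + k * r)) / r
Psi_out = P0 * sp.exp(sp.I * (w * t - k * r)) / r

# Sending t -> -t turns the IN wave into the conjugate of the OUT wave
assert sp.simplify(Psi_in.subs(t, -t) - sp.conjugate(Psi_out)) == 0
```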
### B. Assumption II - Establishing the Density of Space
The wave equation provided a structure which possesses some of the electron's characteristics but a means for the SRs to interact and exchange energy is also needed. Unfortunately, since waves in a homogeneous medium pass through each other, the medium has no means for interaction. To find the means of interaction, we recognize that space is not homogeneous everywhere. For example, it has been observed that a star will bend the path of light which goes near it. A similar behavior occurs at the center of a charged particle.
To examine this requirement we first make a quantitative assumption, similar to Mach's Principle, which establishes the density of space (Ether or vacuum). Then we will examine the density formula seeking a means of interaction. The Space Density assumption is:
Assume that the mass (wave frequency) and propagation speed of an SR wave in space depends on the sum of all SR wave intensities in that space; a superposition of the intensities of waves from all particles inside the Hubble ($H$) Sphere of radius $R = \frac{c}{H}$, including the intensity of a particle's own waves.
Formula 5
$mc^{2} = hf = k' \sum\limits_{n=1}^{N} \dfrac{\Psi_{n}^{2}}{r_{n}^{2}}$
In other words, the frequency $f$ or mass $m$ of a particle depends on the sum of amplitudes squared of all waves $\Psi_{n}$, from the $N$ particles in the universe, whose intensities decrease inversely with range squared. That is, waves from all particles in the universe combine their intensities to form the total density of 'space'. This density determines the electron's wave frequency. This space corresponds to Einstein's "Aether" or quantum theory's "vacuum".
Now examine the homogeneity of the space. The universe contains so many particles that the density of space is nearly constant everywhere. But close to the center of an electron, the intensity of the electron's own waves, following the $\frac{1}{r^{2}}$ rule, is larger, producing a "lump" in space density. This lump at the center of the electron causes wave interactions. It is the way energy is transferred and what we call "charge". Its correctness is tested below.
#### Energy Transfer Mechanism of the Space Resonance.
How does the charge mechanism operate? It is well-known that AC signals flowing through a non-linear element in a circuit will mix. That is, if there is a two-signal input:
Formula 6
$\mbox{INPUT} = A\,cos(\omega_{1}\,t) + B\,cos(\omega_{2}\,t)$
then the output will be:
Formula 7
$\mbox{OUTPUT} = \dfrac{A\,B\,[cos(\omega_{1}\,t + \omega_{2}\,t) + cos(\omega_{1}\,t - \omega_{2}\,t)]}{2} + \mbox{other components}$
The non-linear element produces sum and difference frequencies of the original $\omega_{1}$ and $\omega_{2}$.
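The sum-and-difference claim is just the product-to-sum trigonometric identity applied to the quadratic term of the non-linear element; a quick numeric check (mine, with arbitrary test values):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
A, B, w1, w2 = 1.3, 0.7, 50.0, 8.0  # arbitrary amplitudes and frequencies

product = A * np.cos(w1 * t) * B * np.cos(w2 * t)          # quadratic mixing term
sum_diff = 0.5 * A * B * (np.cos((w1 + w2) * t) + np.cos((w1 - w2) * t))
assert np.allclose(product, sum_diff)  # sum and difference frequencies appear
```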
Similarly in space, different waves passing through the dense, non-linear region at the particle center will mix. If an input frequency and a particle frequency are similar, resonance can occur. An example of this is a tuned radio receiver. An energy (frequency) exchange between resonances behaves like two coupled oscillators in a circuit, or like two pendulums joined with a spring.
#### A Test of Assumption II
If an electron's own waves can create a denser region near its center, then the intensity $I$ of those waves at some radius of non-linearity $r_{0}$ must be comparable to the intensity of waves from all other $N$ particles in the Universe. This requirement is written:
Formula 8
$\mbox{Intensity}\ \ =\ \ I \ \ =\ \ \dfrac{\Psi_{0}^{2}}{r_{0}^{2}} \ \ =\ \ \sum\limits_{n=1}^{N} \dfrac{\Psi_{n}^{2}}{r_{n}^{2}} \ \ =\ \ \dfrac{N}{V}\ \int\limits_{r=0}^{r=cT} \left( \dfrac{\Psi_{0}}{r} \right)^{2} 4 \pi r^{2}\ dr$
where $V$ is the volume inside the Hubble Sphere and $R$ its radius. The integral, from $r = 0$ to $R = cT = \frac{c}{H}$, extends over a sphere whose expanding radius $R$ depends on the age $T$ of the particle. Thus $cT$ is the maximum range of the particle's spherical waves. This reduces to:
Formula 9
$r_{0}^{2} = \dfrac{R^{2}}{3N}$
Inserting values from astronomical measurements, $R = 10^{26}$ meters and $N = 10^{80}$ particles, the critical radius $r_{0}$ equals $6 \times 10^{-15}$ meter. If the assumption is right, this should approximate the classical radius of an electron, $r_{e} = \dfrac{e^{2}}{mc^{2}}$, which is $2.8 \times 10^{-15}$ meters. The two values nearly match, so the prediction is verified. Apparently dense centers do exist, and:
Formula 10
$\dfrac{e^{2}}{mc^{2}} = \dfrac{R}{\sqrt{3N}}$
Equation 9 is a relation between the size $r_{0}$ of an electron and the size $R$ of the Hubble Universe. It is termed the Equation of the Cosmos.
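The arithmetic behind this estimate can be reproduced in a few lines (a sketch using the paper's round values for $R$ and $N$):

```python
import math

R = 1e26   # Hubble radius in meters (paper's value)
N = 1e80   # particles in the Hubble Sphere (paper's value)

r0 = R / math.sqrt(3 * N)   # Formula 9
print(f"r0 = {r0:.1e} m")   # ~5.8e-15 m, vs the classical radius 2.8e-15 m
```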
#### Observations on the non-linearity Properties of Space Density
The large density of an electron's own waves at the center is the cause of 'charge' effects, wave-coupling, and energy exchange between particles.
1. Charge and mass interactions occur at the center (lump). The electron resonance extends throughout space but energy exchanges take place in the non-linear bump at the center. Thus the SR "looks" like a point particle but no mass or charge substance is required to produce this experimental appearance. It is all waves.
2. Modulation of the waves behaves like a photon. When two resonances exchange energy (shift frequency - Section IV H below), the IN/OUT waves traveling between them are modulated with the frequency-shift information. This modulation travels at velocity $c$, like a photon. But the only events we observe are two energy shifts, one at the source and one at the absorber. This corresponds exactly with experimental observation of the photon.
### C. Assumption III - The Minimum Amplitude Principle.
Assumptions I and II describe the electron's structure, its energy exchange mechanism, conservation and electric force. But there has to be a law to determine whether two particles should move together or apart, or whether their frequencies will change up or down. One more assumption is needed that governs the behavior of energy exchanges within a group of particles. A Minimum Amplitude Principle (MAP) is found, described by:
Formula 10
$\int \left( \Psi_{1} + \Psi_{2} + \Psi_{3} + ... + \Psi_{n} \right)^{2} dx\,dy\,dz = \mbox{a minimum}$
or:
The total amplitude of particle waves in space always seeks a minimum.
In other words, all the waves of the total number $N$ of particles inside the Hubble Sphere adjust themselves at each point to make total amplitude a minimum. To accomplish this, energy (frequency) exchanges take place, or wave-centers move in order to minimize the total amplitude. This principle is very powerful and predicts many observations. For example, waves of two electrons close together will have a higher intensity than electrons farther apart. Therefore two electrons must repel in order to satisfy the MAP. A positron and an electron will attract. It also creates the Pauli Exclusion Principle, forces between atomic nuclei, and gravitation.
#### Observations on the Minimum Amplitude Principle.
1. The Pauli Exclusion Principle is one result. This is because MAP prevents two identical resonances (fermions) from occupying the same state since their total amplitude would be a maximum rather than a minimum.
2. The electric charge force between two resonances is $F = \frac{k}{r^{2}}$, where $k = \dfrac{e^{2}}{4 \pi \varepsilon_{0}}$. It is the same as Coulomb force everywhere except at the center. This force arises as a result of the Minimum Amplitude Principle which attempts to minimize wave amplitudes near the resonances. The $\frac{1}{r^{2}}$ factor is the result of the 3D geometry of ordinary space. The electric constant $k$ is a measured parameter which can be approximated from Equation 10 which shows it to be a property of space. Thus only one value of charge occurs in nature. The complex amplitude $\Psi$ can be regarded as the electric potential of the electron.
## SECTION IV - APPLICATIONS OF THE SPACE RESONANCE ELECTRON
The structure of the SR leads to new applications that solve puzzles of physics and cosmology. The examples below are important applications.
### A. Properties of a Moving Space Resonance.
Quantum mechanics and special relativity seem unrelated, but they have one feature in common: Both laws depend on the relative velocity between two particles. Therefore, we should investigate the interaction of two space resonances in relative motion. One SR may be thought of as a source interacting with the other SR, as an absorber or observer.
Consider two SRs moving with relative velocity $\beta = \frac{v}{c}$. Each receives the same Doppler shifted waves from the other. They are symmetrical. Their IN waves are red-shifted and their OUT waves are blue-shifted according to the usual Doppler factors, $\gamma(1+\beta)$ and $\gamma(1-\beta)$ which shift frequency and wavelength.
The received amplitude of each SR is the sum of Doppler-shifted IN and OUT waves which reduces to:
Formula 11
$\Psi = \mbox{shifted}(\Psi_{IN} - \Psi_{OUT}) = \dfrac{2 \Psi_{0}\ e^{i k \gamma (c t + \beta r)} \cdot sin[k \gamma (\beta c t + r)]}{r}$
Equation 11 is composed of an exponential carrier wave modulated by a sine function. The relativistic term, $\gamma = \dfrac{1}{\sqrt{1 - \left( \frac{v}{c} \right)^{2}}}$, occurs properly to match experimental observation. It is a result of the Doppler effect on the combined IN and OUT waves. These matching results are:
#### The parameters of the exponential oscillator are:
• wavelength = $\dfrac{h}{\gamma m v}$ = deBroglie wavelength, $\lambda$.
• frequency = $\dfrac{\gamma k c}{2 \pi} = \dfrac{\gamma mc^{2}}{h}$ = mass-energy frequency.
• velocity = $\dfrac{c}{\beta}$ = phase velocity.
#### The parameters of the sine function are:
• wavelength = $\dfrac{h}{\gamma mc}$ = Compton wavelength.
• frequency = $\dfrac{\gamma \beta mc^{2}}{h} = \beta \times \mbox{(mass frequency)}$ = "momentum frequency".
• velocity = $\beta c = v$ = relative velocity of the two resonances.
The above matching results are remarkable! They clearly show the origin of mass increase and quantum mechanics in the wave structure of matter. It is instructive to compare Equation 11 for moving electrons with Equation 4 for a stationary electron. They are of the same form but Formula 11 contains the velocity $\beta = \frac{v}{c}$ and the related quantum and relativistic properties for moving particles.
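The reduction to Formula 11 can be verified symbolically (my sketch; the Doppler factors $\gamma(1+\beta)$ and $\gamma(1-\beta)$ multiply the IN and OUT phases, the waves are combined with opposing amplitudes as in Formula 3, and an overall constant $2i\Psi_{0}$ is absorbed as elsewhere in the paper):

```python
import sympy as sp

r, t, c, k, b = sp.symbols('r t c k beta', positive=True)
g = 1 / sp.sqrt(1 - b**2)  # relativistic gamma

# Doppler-shifted IN and OUT waves, combined as in Formula 3 (constants dropped)
shifted = (sp.exp(sp.I * k * g * (1 + b) * (c * t + r))
           - sp.exp(sp.I * k * g * (1 - b) * (c * t - r))) / r

# The carrier-times-modulation form claimed in Formula 11
claimed = 2 * sp.I * sp.exp(sp.I * k * g * (c * t + b * r)) \
          * sp.sin(k * g * (b * c * t + r)) / r

assert sp.simplify((shifted - claimed).rewrite(sp.exp).expand()) == 0
```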
#### Origin of Quantum Mechanics and Special Relativity.
Both moving resonances see the other with its momentum and mass (rest frequency) increased by the factor $\gamma = \dfrac{1}{\sqrt{1 - \left( \frac{v}{c} \right)^{2}}}$. This predicts the observed relativistic mass increase of particles moving relative to a lab. Each electron also receives a QM deBroglie wavelength $\lambda = \frac{h}{p}$ from the other. This is the original experimental basis of quantum theory. We conclude that quantum theory and the mass increase of special relativity are a fundamental property of the space resonance, symmetrically dependent on both the IN and OUT waves.
### B. A Single Value of Charge.
Combine the Equation of the Cosmos (Formula 9) with the classical electron radius $r_{e} = \dfrac{e^{2}}{mc^{2}}$. Setting $r_{0} = r_{e}$ and eliminating $r_{0}$, obtain:
Formula 12
$e^{2} = \dfrac{mc^{2} R}{\sqrt{3N}}$
This shows that the charge $e^{2}$ is dependent on the total of all $N$ particles. We also recall that charge always occurs in natural laws as $e^{2}$, never as $e$ alone. Thus, charge is a property of space and total matter, not of particles, and there is only one value of charge in nature $e^{2}$. Conservation of charge follows from the anti-symmetrical structures of the SR and anti-SR described in Section III above.
### C. Forces depend on the Structure of Space.
Understanding energy exchanges enables us to understand the origin of forces. In general, $\mbox{force} = \frac{dE}{dr}$ where $dE$ is the energy exchanged between resonances. For an electron, the potential is proportional to $\Psi$. The energy changes depend on the variation of force along the distance $dr$ between them. For example, the dominant force in the universe is the electric force between charges which varies as $\frac{1}{r^{2}}$, the geometric property of distance in 3D space.
Inertia. Below in section IV D, it is shown that a tiny inhomogeneity of space perturbs the enormous charge force and thereby produces the inertial force law, which is $\approx 10^{40}$ times smaller than charge force. Space becomes inhomogeneous where a particle is accelerated ($F = ma$). In this situation the Minimum Amplitude Principle (MAP) compensates the inhomogeneity with amplitude-minimizing energy exchanges that cause forces and movement. These compensations first occur in the local space, with an immediate local energy exchange to the space waves. The energy exchanged and force appear like action at a distance, unlike the charge (photon) exchanges which propagate at velocity $c$. Thus Newton's original statement of inertia and gravity force is upheld.
Other types of space inhomogeneities also appear as force laws including Mach's Principle, gravity, and magnetism, which are discussed in IV E and IV F. Rotation, angular momentum, spin and the Dirac Equation are discussed in References[8, 9, 11].
### D. The Origin of Inertial Forces.
The force of inertia on an accelerated electron is a perturbation of the electric force produced by changes of wavelength caused by the acceleration. The energy exchange takes place directly between the accelerated resonance and other waves in space. Recoil force is eventually transmitted to other masses of the universe via their space waves.
To analyze this, examine the IN/OUT wavelength change from acceleration and calculate the forces caused by acceleration relative to the masses of the universe. This change disturbs the local balance with waves from other matter in the universe. The MAP corrects the imbalance by readjusting frequencies of the accelerated resonance:
To calculate this perturbation, use a force on the accelerated mass analogous to force on an accelerated charge (radiation damping):
Formula 13
$\mbox{electric Force} = {\bf\sf F_{e}} = e' {\bf\sf E}$
where ${\bf\sf E}$ = electric field. In analogy:
Formula 14
$\mbox{mass Force} = {\bf\sf F_{m}} = m' {\bf\sf M}$
where ${\bf\sf M}$ = mass field.
The ${\bf\sf E}$ field of an accelerated charge $e$ is computed from the magnetic vector potential ${\bf\sf A}$. That is:
Formula 15
$\mbox{electric field} = {\bf\sf E} = \dfrac{d{\bf\sf A}}{dt} = \dfrac{e {\bf\sf a}}{4 \pi \varepsilon_{0}\ c^{2}\ r}$
For the analogous particle $m$, assume an analogous mass field derived from an analogous vector potential:
Formula 16
$\mbox{mass field} = {\bf\sf M} = \dfrac{m {\bf\sf a} G}{c^{2}\ r}$
Following the analogy, the gravity constant $G$ has replaced the electric constant $K_{e} = \dfrac{1}{4 \pi \varepsilon_{0}}$.
To find the force on the masses $m'$, set $m'$ equal to the mass of the universe (This produces Mach's Principle):
Formula 17
$m' = d_u\,V_u = d_u\,\frac{4}{3} \pi R^3$
where $d_u$ = mass density of the universe. Choose the average distance $r$ of $m'$ as half the Hubble Sphere radius, $r = \frac{c}{2H}$. The force between the particle $m$ and masses $m'$ becomes
Formula 18
$\mbox{Force} = m' {\bf\sf M} = \dfrac{d_u\,\frac{4}{3} \pi \left( \frac{c}{H} \right)^3 G m {\bf\sf a}}{c^2\ r} = \left( \dfrac{8 \pi G d_u}{3H^2} \right) m {\bf\sf a}$
Now if we choose $d_{u}$ equal to the critical density of the universe, a flat universe in general relativity, then:
Formula 19
$d_{u} = d_{c} = \dfrac{3H^{2}}{8 \pi G}$
We can insert it into Equation 18. Then the factor in parentheses becomes one and the remainder is Newton's Law of inertia: $F = ma$. This result confirms that inertial force is a perturbation of electric force, that inertial mass is equivalent to gravitational mass as experimentally observed, and it predicts a flat universe.
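The cancellation can be confirmed in one line (a sketch substituting Formula 19 into the prefactor of Formula 18):

```python
import sympy as sp

G, H = sp.symbols('G H', positive=True)
d_c = 3 * H**2 / (8 * sp.pi * G)              # critical density, Formula 19
prefactor = 8 * sp.pi * G * d_c / (3 * H**2)  # bracketed factor in Formula 18
assert sp.simplify(prefactor) == 1            # what remains is F = m*a
```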
### E. The origin of Gravity Forces.
The force of gravity can also be found as a perturbation of charge force. MAP seeks an energy exchange (that is, a force) between a given mass and the waves of other nearby masses that will balance the observed perturbed (changing) properties of space described by the Hubble constant. Wolff[8] obtains the ratio of the electric to gravity force:
Formula 20
$\dfrac{\mbox{electric force}}{\mbox{gravity force}} = \dfrac{F_{e}}{F_{g}} = \dfrac{mc^{2}}{hH} = 5.8 \times 10^{39}$
Compare this with the measured ratio = $\dfrac{e^{2}}{4 \pi \varepsilon_{0}\ G\ m_{e}\,m_{p}} = 2.3 \times 10^{39}$. They agree within Hubble constant error.
One can regard this perturbation as an induction of a gravity force by the changing space property. It is analogous to the induction of an electric field by a changing current. Like Lenz's law, the force opposes the change.
### F. The Origin of Magnetic Forces.
Magnetic forces can be regarded as a perturbation of electric charge forces where the perturbing element is the relative velocity $\left( \frac{v}{c} \right)^{2}$ between two charges. This little-known result was found around the year 1910. Lorrain and Corson[12] use it to derive the magnetic force equation beginning with Coulomb's law and Special Relativity, with the result:
Formula 21
${\bf\sf F} = q ({\bf\sf v}\!\times\!{\bf\sf B})$
where Special Relativity creates the cross product, $q$ is the current-producing charge with relative velocity ${\bf\sf v}$ of the current, and ${\bf\sf B}$ is the magnetic field.
### G. Parameters of the Electron Depend on the Parameters of the Universe.
Equation 9, the Equation of the Cosmos, provides an important numerical relation between the cosmological dimensions $R$ and $N$ of the Universe and the radius $r_{0}$ of the electron, the big and the small. Remarkably, it describes how all the mass of the universe acts together to create the "charge" and mass of each electron as a property of space.
To see how the electron mass depends on other matter, combine Equation 9 with the Compton wavelength $r_{0} = \frac{h}{mc}$. Eliminate $r_{0}$ to obtain:
Formula 22
$mc^{2} = \dfrac{hc\sqrt{3N}}{R}$
Again, confirming our logical deduction, we see that the electron mass, like the charge, is a property of the universe, that is, of the total number of particles $N$ and its size $R$.
### H. The Puzzle of the EPR Effect.
The well-known but mysterious EPR effect[13, 14] is a fascinating example of particle-to-particle communication. Section II C above pointed out that two-way communication between particles was a fundamental requirement for the existence of natural laws and we have seen how the IN and OUT waves provide the means for this two-way communication. This is what happens in EPR:
Ordinarily we observe communication as two energy-exchange events: an energy shift at a source particle and later an absorption event at a receiver particle. We calculate the message velocity ($c$) using the time between the events. We used to think of this as a moving photon but this leads to confusion. The correct picture uses the IN-OUT waves traveling at space-wave velocity $c$.
Before two potential partners can undergo those energy shifts, the IN/OUT waves must exchange information (boundary conditions) about their respective particle energy states so that energy exchanges can take place in a way that minimizes wave amplitudes in accordance with the MAP (Assumption III). If minimization is not possible, no exchange can take place. In this respect, the Minimum Amplitude Principle is similar to other physics principles such as the Principle of Least Action and "energy flows downhill". It underlies them.
These prior information exchanges do not produce energy changes visible to us. The mysterious EPR experiments use two separated photo-detectors which appear to have instantaneous knowledge of each other's state of polarization. We are not aware of the prior information exchanges because they are hidden from our laboratory instruments: they are not energy shifts but are carried by the IN-OUT quantum waves. After we understand the role of the quantum waves we recognize that Nature is a puppet-master who allows us to see the puppets but not the quantum wave ensemble behind the curtain.
Several variations of the EPR effect have been found and Greenberger et al.[15] describe a general method of calculation.
## SECTION V - CONCLUSIONS
### Space underlies physical laws.
The most extraordinary conclusion of the Space Resonance electron structure is that the laws of physics and the structure of matter ultimately depend upon the properties of space determined by the matter itself. Matter in the universe is inter-dependent. Every particle communicates its quantum-wave state with other matter so that energy exchange and the laws of physics are properties of the entire ensemble of matter. Mach's Principle is a law conspicuously displaying this particle inter-dependence.
### Two Worlds within our Universe.
The work of this paper shows that there are two real and parallel 'worlds' partaking in the physical behavior of matter. One world is our familiar 3D environment, governed by the natural laws and observed by us using our five senses and their extensions as laboratory instruments. Its attributes are familiar material objects, events, and forces between objects, plus the related energy exchanges which enable us to observe the objects and form mental images of them. This world can be termed the World of Energy-exchange since energy-exchange is the unique attribute which allows us to observe this world.
A second World of Scalar Waves forms the structure of the basic particles, electrons, protons, and neutrons, which compose the material objects and the space (Ether) of our world of energy-exchange. These waves in space are unseen by us. We only know of their existence when an energy (frequency) exchange occurs to stimulate our senses. Nevertheless this unseen scalar wave world is basic and determines the real action in both worlds. The waves obey the rules of superposition and interference and are governed by Assumptions I, II, and III.
The behavior of the particles (space resonances) in their interactions is largely due to their oscillating scalar waves which reveal their behavior to us via the rules of quantum mechanics and relativity. These waves (inward and outward) fulfill the requirements of matter inter-dependence discussed in Sections III and IV above.
One role of the scalar waves is inter-particle information exchange of their quantum states. This is usually unseen in our world but it is conspicuous in the mysterious EPR effect (Einstein et al, 1935). Information must be exchanged because partners of a future energy exchange cannot act until they have "knowledge" of each other's state. This is necessary so that the MAP (Assumption III) can determine whether an exchange will minimize net wave amplitudes. These information exchanges are usually hidden from our laboratory instruments because they are not energy shifts. Nature is a puppetmaster who allows us to see the puppets but not the orchestration behind the curtain.
Another role of the waves is as a universal cosmic clock which Galeczki[1] has pointed out is a requirement behind Newton's laws. The clock is the fixed frequency of the IN and OUT waves pervading the universe.
### Relation to Special Relativity.
The relativistic law obtained from analyzing the movement of two SRs in Section IV A is the well-confirmed mass increase of moving matter. But the controversial time-space contractions are not predicted. An explanation outside the scope of this article predicts that the speed of an energy transition is equal to the speed of the IN wave to the receiver. This wave always moves in the frame of the receiver at a constant velocity $c$. This is observed but does not imply contraction of space or time.
### Some Other Predictions already verified:
1. The space resonance theory predicts and shows the origin of the natural laws: QM and relativistic mass-increase, the conservation of energy, charge, and momentum; and the forces of charge, inertia and magnetism.
2. The lifetimes of atomic and nuclear decays are not constants as once thought but depend on their quantum-wave states and the distance between partners of the energy exchange. Such variable lifetime atomic decays have been investigated by Walther et al.[16] and Greenberger et al.[15].
3. Inertial and gravity forces are predicted to be of the action-at-a-distance type as originally stated by Newton. This agrees with action-at-a-distance gravity as recognized by astronomers to account for planetary motions. Lorrain & Corson[12] and Graneau[17, 18] verify action-at-a-distance for magnetism confirming the SR electron but not conventional older physics.
## REFERENCES.
1. G. Galeczki, "Physical Laws and the Special Theory of Relativity", Apeiron 20, 26-31 (1994).
2. H. A. Lorentz, "Theory of Electrons", Leipzig (1909), Dover Books 1952.
3. W. Clifford, (1876), "Lectures", Royal Philosophical Society, and "The World of Mathematics", p 568, Simon & Schuster, NY (1956).
4. W. Moore, "Life of Schroedinger", p 327, Cambridge U. Press (1989).
5. J. Wheeler and R. Feynman, "Interaction with the Absorber as the Mechanism of Radiation", Rev. Mod. Phys. 17, 157 (1945).
6. T. Phipps, Found. Phys. 6, 71-82, (1976).
7. M. Wolff, "Microphysics, Fundamental Laws and Cosmology", Proc 1st Int'l Sakharov Conf Phys., Moscow, May 21-31, 1990, pp 1131-1150. Nova Sci. Publ., NY.
8. M. Wolff, "Fundamental Laws, Microphysics and Cosmology" Physics Essays 6, 181-203 (1993).
9. M. Wolff, "Exploring the Physics of the Unknown Universe", ISBN 0-9627787-0-2, Technotran Press (1990).
10. E. Mach, (German, 1883), English: "The Science of Mechanics", London (1893).
11. E. Batty-Pratt, and T. Racey, "Geometric Model for Fundamental Particles", Intl. J. Theor. Phys. 19, 437-475 (1980).
12. P. Lorrain and D. Corson, "Electromagnetic Fields and Waves", pp 273-6 (1970).
13. A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
14. A. Aspect, J. Dalibard, and G. Roger, Phys. Rev. Lett. 49, 1804 (1982).
15. D. Greenberger, M. Horne, and A. Zeilinger, Physics Today, 22-29, June 1993.
16. H. Walther, Charles Townes award, CLEO/IQEC meeting, Anaheim, CA., May (1990).
17. P. Graneau, J. Physics D: Appl. Phys., 20, 391-393 (1987).
18. P. Graneau, "Interconnecting Action-at-a-Distance", Physics Essays 4, 340 (1990).
### Responses to “Beyond the Point Particle - A Wave Structure for the Electron”
1. philallsopp says:
Fascinating and very compelling. I'm trying to think of a way of depicting the "movement" of a cluster of spherical standing waves ("particles") through "space". At the gigantic scale of "particle" aggregation that, for example, represents a human cell or a speck of sand, how does the cluster of spherical standing waves actually move such that at our scale we can detect (see) that speck of sand being carried off by the wind?
2. Milo Wolff says:
Dear Phil,
You understand the Wave Structure of Matter, WSM.
Congratulations.
http://stats.stackexchange.com/questions/33645/what-is-the-most-appropriate-way-to-test-significance-of-effect-difference-acros
# What is the most appropriate way to test significance of effect difference across two groups?
Here is the question: After running two separate OLS regressions, one for each group (male vs. female), we get coefficient $b_1$ for a specific IV in the "male" regression, and coefficient $b_2$ for the same IV in the "female" regression. What would be the most appropriate way to test the hypothesis $H_0\colon b_1 > b_2$? As far as I know, the Wald test is widely used, but it only checks whether the effect difference differs from zero ($H_0\colon b_1 - b_2 = 0$).
Thanks!
-
## 2 Answers
A one-sided test like a t-test can be obtained by merging your two separate regressions into one that models both groups. If you are interested in the effect of $x$ on $y$ across groups, then add an $x_i D_i$ regressor, where $D_i$ is a dummy variable that codes your group:
$y_i = a x_i + b x_i D_i + u_i$
As you can see, your group effects are given by $a$ and $a+b$. Thus your hypothesis test becomes $H_0: b>0$ and its statistic would be based on a one-sided t-statistic.
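A minimal sketch of this approach (hypothetical simulated data; statsmodels and scipy assumed available). The pooled design here also includes the dummy itself so the intercepts may differ, although the minimal model above omits it; the two-sided t-test on the interaction is converted to a one-sided p-value:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
d = rng.integers(0, 2, size=n)          # 1 = male, 0 = female (hypothetical coding)
y = 1.0 + 0.5 * x + 0.3 * x * d + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x, x * d, d]))  # columns: const, x, x*d, d
res = sm.OLS(y, X).fit()

t_stat = res.tvalues[2]                              # t-statistic for b (the x*d term)
p_one_sided = stats.t.sf(t_stat, df=res.df_resid)    # H1: b > 0, i.e. male slope larger
print(f"b = {res.params[2]:.3f}, one-sided p = {p_one_sided:.4f}")
```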
ABOUT COLLINEARITY. These two IVs are not going to be collinear, as there is no linear combination that makes their sum equal zero under standard conditions (and an OLS or ML estimator). To see why, take only the sample for which $D_i=0$ and assume the following vector notation: $y = x a + z b + u$, where $z$ is a vector with components $x_i D_i$.
Collinearity may arise as the linear combination $c_1 x + c_2 z$ approaches zero (for $c_j \neq 0$). But just considering the block of observations where $D_i=0$ automatically leads to a linear-combination block equal to $c_1 x \neq 0$ if $x \neq 0$.
I guess you are thinking in terms of the standard theory, which tells you that higher IV correlation may lead to collinearity; but don't forget that the standard theory refers to the IVs, which in our case are $x_i$ and $z_i(=x_iD_i)$, not $x_i$ and $D_i$. Moreover, the mathematical issue behind the correlation issue is having linearly dependent variables, so it is linear dependence between $x_i$ and $z_i$ that must be addressed.
-
I agree with JDav. I think we came up with the same answer at almost the same time. He is really confirming what I happened to get submitted first. I suspect that he was writing his before I finished mine and came up with it independently. I hope this will convince you to use this approach rather than work with two separate regressions. – Michael Chernick Aug 4 '12 at 0:52
Thanks!! The joint model approach sounds rather straightforward and flexible indeed! My only concern is that, in my dataset, the gender dummy is highly correlated with the IV, and for this reason I was a bit hesitant to pool the two groups together... – Bill718 Aug 4 '12 at 0:57
Haha, indeed, almost same answer at almost the same time! I think I am convinced what the best method is:) – Bill718 Aug 4 '12 at 1:14
It took me long to write these very few lines and it was certainly almost simultaneous answer. About colinearity, see the edited answer. – JDav Aug 4 '12 at 12:33
Many thanks for your time, really appreciated!I was stuck with the standard theory as you mentioned;worried about high IV correlation that could cause serious colinearity issues...it sounds clear now! – Bill718 Aug 5 '12 at 2:13
Why not do the more conventional thing and run one regression with gender as a covariate? A statistically significant gender effect would then indicate that gender is important; more specifically, include a gender interaction term with your IV. If the interaction is statistically significant, that indicates that the male and female slopes differ. Make the test one-sided and you have your answer.
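A sketch of this single-regression approach using statsmodels' formula interface (simulated data; column names are hypothetical). The `x * gender` formula expands to the main effects plus the `x:gender` interaction, whose one-sided test answers the question:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({'x': rng.normal(size=150),
                   'gender': rng.integers(0, 2, size=150)})
df['y'] = 0.4 * df['x'] + 0.2 * df['x'] * df['gender'] + rng.normal(size=150)

res = smf.ols('y ~ x * gender', data=df).fit()       # x + gender + x:gender
t_stat = res.tvalues['x:gender']                     # slope difference between groups
p_one_sided = stats.t.sf(t_stat, df=res.df_resid)    # one-sided: H1 slope difference > 0
print(res.params['x:gender'], p_one_sided)
```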
-
Thanks!!Please see comment above, high correlation between IV and dummy is a point of concern if doing one regression only. – Bill718 Aug 4 '12 at 0:59
Correlation between covariates is not a problem unless it is so high as to create near multicollinearity. In fact, if you believe one slope is different from the other, then the IV interacts with gender and there is necessarily a correlation. Also, correlation between a binary variable and a continuous one is different from correlation between two continuous variables. Certainly a Pearson correlation cannot be defined. – Michael Chernick Aug 4 '12 at 3:03
I see, I was not aware that pearson correlation cannot be defined in case of correlation between binary and continuous variables, many thanks! – Bill718 Aug 5 '12 at 2:15
http://physics.stackexchange.com/questions/43338/how-to-explain-pedagogically-why-there-is-4-spacetime-dimensions-while-we-see?answertab=active
# How to explain (pedagogically) why there are 4 spacetime dimensions while we see only the 3 spatial dimensions?
I have been asked this question by a student, and I was at the same time able and unable to give a good answer without equations. Do you have ideas about how one can explain this in a simple way?
(Answers like "we can take time as imaginary" or "our space is actually pseudo-Euclidean" will be hard for new students to grasp.)
Note that the problem is not in visualizing the 4th dimension; that is an easy thing to explain. The problem is more about why we live in 3D while moving along a 1D time dimension. In differential geometry this is interpreted via fiber bundles, but how does one explain it to a typical student?
-
Yes, I agree with anna; this question has by now been edited more than 4 times, and by different people! – TMS Feb 10 at 8:29
You can roll back the edits. If you click on "edited 3 hours ago" the other versions are there – anna v Feb 10 at 8:32
TMS, you are getting heavily edited because you asked what seems to be an interesting question, but did it using sentences with unexpected words and missing verbs. For example, did you really mean "interpenetrated by fiber bundles", or did you perhaps mean "interpreted"? Please at least review your own text to make sure you have said what you really intended to say. – Terry Bollinger Feb 11 at 4:49
@Terry, corrected that, it was just auto correction. – TMS Feb 11 at 6:31
## 9 Answers
We do see the fourth dimension.
The difference between three dimensions and four dimensions is the difference between a (2d) snapshot image and a ("2d+t") video.
-
A good student will immediately respond that this means nothing more than that we parametrized our series of pictures by the parameter "time". Even so, the analogy is not bad, but it is surely not enough. – TMS Nov 3 '12 at 14:45
And that would be correct from a good student. Actually time is not a dimension - it is a parameter. Which by some magic happens to look like a 4th coordinate, because the interval in special relativity is conserved. That is all. Say it like that, and a good student will understand :) – Asphir Dom Nov 3 '12 at 21:15
Believe it or not, this is almost a question of religion. Catholics (especially the French) were the first to admit time as a 4th dimension. Protestants (the English) considered time a parameter for a very long time. Another Catholic - Hamilton (Irish) - invented quaternions at a time when many Protestants did not admit even complex numbers (the imaginary unit $i$!). So this dispute has old roots... – Murod Abdukhakimov Feb 8 at 10:19
Because this question seems interesting to many people, I will tell you how I currently explain it. The idea came to me when I read about how philosophers understand time; correct me if you think there is something wrong:
Even though we treat time as a dimension, it is not that similar to the spatial ones, and the reason is as follows:
We start with the most fundamental concept in physics: cause and effect. This concept enables us to sort events into a series: first the event that is the cause, then the effect, and so on. That creates an "illusion" of the ability to number those events, which in turn provides us with the ability to treat time as a dimension and measure the "distance" between events. I say the illusion of numbering, because the word "numbering" suggests we can do it in an absolute way, which is wrong according to the theory of relativity: the numbering of events by one observer is, generally speaking, not compatible with that of other observers (here I explain how the speed of light affects cause and effect). For that reason time is not the same as a spatial dimension, and that is why Minkowski space is pseudo-Euclidean, not Euclidean. Then I add Anno2001's answer on how to look at time.
-
Note that the problem is not in visualizing the 4th dimension; that is an easy thing to explain. The problem is more about why we are in 3D while moving along a 1D time dimension.
You cannot avoid defining time in terms of change. In the same way that a map would be totally uniform and uninteresting if there were no changes ($dx/dy$ etc.) in the terrain, time would be uniform and undefinable if the terrain did not change in time. Repetitive changes allow us to define time (no need to go to entropy; the solar system, day/night etc. are enough) and to clock it / measure it.
Time projects into the 3-dimensional world. Geological strata (and many other proxies) assign to each (x,y,z) a time value on the axis of t. Thus time can be projected into (x,y,z). In a similar way space dimensions project into time. The time taken to walk to the station has a one-to-one correspondence with the distance in kilometers.
So time is a necessary dimension to describe the changes seen in three dimensions, in a similar way that a third space dimension is needed to describe the projections of a sphere to two dimensions.
Entropy must come in for a classical definition of the arrow of time and non reversibility. A rough discussion on disorder, broken glass not repairable etc should give them the concept.
Thus even without special relativity time can be thought of as another dimension, since it projects into the spatial ones. One can then go on to special relativity, which surprises us with the different type of dimension (pseudo-Euclidean) it turns out to be, from experiments.
p.s. With this view of how time is defined in our experience we can also say with assurance that there is only one time dimension. If there were two time dimensions, the functional dependence of changes in the three space dimensions would be complicated. It would be a many-to-one projection, similar to projecting a three-space-dimensional object onto one space dimension.
-
I want to add that my answer is an extension of the answer of @Bzazz – anna v Feb 10 at 6:49
Some good ideas have been expressed here. My take on this is the following:
Imagine you have a laser gun and you send a laser pulse outwards into outer space, sending an image out there. The laser pulse travels at the speed of light. Now let us time the beam for a length of time $\delta t$. There is some distance that corresponds to this time and it is $c\delta t$ where $c$ is the speed of light. This $c\delta t$ is the fourth dimension in the 4-D Minkowski space, corresponding to that short time $\delta t$. It tells us how far the image has travelled within this short time. So it is the speed of light that generates the fourth dimension, and also gives the length we call the fourth dimension. Therefore, the fourth dimension starts on our watch and $c\delta t$ is the length of it within the time $\delta t$. It makes sense only in the context of the speed of light. This is the whole point of space-time in special relativity. This is what it means when we say that an object is 'so many light years away'. In a way, this is the distance that the image of an object (a galaxy for example) is away in the fourth dimension. The mathematical representation of this has been written in Leos Ondra's reply. I hope this helps somewhat.
-
This is a good way to define time as a dimension within special relativity. – anna v Feb 10 at 6:57
We see 3 dimensions because we ourselves are 3-dimensional. Imagine a 2d creature originally living in 2d space - a Euclidean plane. It naturally perceives only events which occur in its body, like a photon (assuming for a while that something like light can exist in 2d) which interacts with a cell in its 2d retina. If somehow in the course of evolution a third dimension is added to its flat world, it will still perceive only two dimensions. Now, however, it can be rotated in 3d so that its plane of living changes, and new strange things occur - a rod, which had always kept its length in the original flat 2d world, because the flat Pythagoras (and many others before him) proved that
$ds^2 = dx^2 + dy^2$
Now the rod strangely contracts and lengthens, but one creature (called Einstein the Flat by some, and Lorentz or Fitzgerald by others) finds that there is still something like length which remains constant, namely
$ds^2 = dx^2 + dy^2 - dt^2$
which sounds strange to others because
1. t has never been considered a dimension
2. There is a minus sign in front of it.
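(A short check, added here and not part of the original answer: under a Lorentz boost with $c = 1$, $dx' = \gamma(dx - \beta\,dt)$ and $dt' = \gamma(dt - \beta\,dx)$, so

$dx'^2 - dt'^2 = \gamma^2\left[(dx - \beta\,dt)^2 - (dt - \beta\,dx)^2\right] = \gamma^2(1-\beta^2)(dx^2 - dt^2) = dx^2 - dt^2.$

The quantity with the minus sign is exactly what all observers agree on, which is why it, and not $dx^2 + dt^2$, plays the role of "length".)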
-
Here's an attempt at a non-mathematical (and unorthodox) answer:
1) We can see in the three spatial dimensions because light travels through them, reflecting back from objects around us and eventually reaching your eye.
2) Everything (incl. light) travels "forward" through the time dimension (meaning, it is able to move in only one direction, and cannot move back and forth).
3) Light is therefore not able to reflect "back" from an object to your eye through the time dimension.
Thought experiment: take one spatial dimension, but everything is moving in one direction at the speed of light. Would you be able to see what's right next to you?
-
To stay well grounded in the physics, I suggest explaining the concept of an event, pointing out that if we want to identify something that happens in our universe, we need to locate it in both space and time.
Now, the definition of "number of dimensions" is, roughly, how many numbers you need to identify an element (of the vector space). In this case, it's clear you need 3 spatial coordinates, plus "when".
-
I think one way is to illustrate it in 2 + 1 dimensions, and have them imagine it in 3 + 1. You could draw a cube, and explain that a slice along the cross section of the cube is a 'snapshot' of a 2D space, and the third dimension is time. So they are moving around in 2D, while they are "experiencing" the third.
It's hard to explain this exactly without a figure... if I can lay my hands on one, or make one myself, I'll add it here.
-
Note that the problem is not in visualizing the 4th dimension, but why we humans can't "feel"/see it as the other 3. – TMS Nov 3 '12 at 14:47
@TMS So what are you asking about? What is the difference between time and space? – Leos Ondra Nov 3 '12 at 18:26
@Leos: I added a post note in the question to describe the issue clearly. – TMS Nov 3 '12 at 20:46
You might introduce the thought of time being a fourth dimension by asking your students to contemplate the meaning of 'perpendicular'.
They will likely respond that length, width and height are perpendicular directions. If you push further and demand a defining characteristic of 'perpendicular', they will probably arrive at the non-mathematical, hand-waving characterization that perpendicular directions are those that allow you to move in one of these dimensions without making any movement in any of the others. At that stage you can ask them if you can move in time without moving in any of the three spatial dimensions.
Just leave them with that thought. They will come back with further questions...
-
"They will come back with further questions..." Like the question: Is it possible to move only along x axis without moving a bit in y, z and time? – Leos Ondra Nov 3 '12 at 12:11
Exactly. And that is the point at which you can reveal that time behaves differently from the other directions, which causes us to observe time as distinct from the other three dimensions. – Johannes Nov 3 '12 at 12:16
@Johannes: I added a post note in the question please check it – TMS Nov 3 '12 at 14:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559096097946167, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/133628/distribution-of-hitting-time-of-line-by-brownian-motion?answertab=oldest
|
# Distribution of hitting time of line by Brownian motion
I came across the following question:
Let $T_{a,b}$ denote the first hitting time of the line $a + bs$ by a standard Brownian motion, where $a > 0$ and $-\infty < b < \infty$, and let $T_a = T_{a,0}$ represent the first hitting time of the level $a$.
1) For $\theta > 0$, by using the fact that $\mathbb{E}e^{-\theta T_a}=e^{-a\sqrt{2\theta}}$, or otherwise, derive an expression for $\mathbb{E}e^{-\theta T_{a,b}}$, for each $b$, $-\infty < b < \infty$.
2) Hence, or otherwise, show that, for $t > 0$, $$\mathbb{P}[T_{a,b}\leq t] = e^{-2ab}\Phi\left(\frac{bt-a}{\sqrt{t}}\right)+1-\Phi\left(\frac{a+bt}{\sqrt{t}}\right).$$
For the first part, I ended up, by changing measure, with the (unverified) expression
$$\mathbb{E}e^{-\theta T_{a,b}}=\exp\left(-a\left[b+\sqrt{2\left(\theta+\frac{b^2}{2}\right)}\right]\right).$$
What's the cleanest way to do the second part? It seems I could either do some kind of inverse transform on the moment generating function, or calculate the moment generating function of the given distribution. Both of these seem difficult. Am I missing something, or do I just need to persevere?
Thank you.
-
– Sasha Apr 19 '12 at 5:56
@Sasha Thanks. The question suggests there should be some (not too horrible) way to use the mgf in the second part. – Ben Derrett Apr 19 '12 at 8:51
## 1 Answer
First part
The probability density of $T_{a,0}$ is well-known: $$f_{T_{a,0}}(t) = \frac{a}{\sqrt{2 \pi}} t^{-3/2} \exp\left( -\frac{a^2}{2t} \right)$$ From here, for $\theta >0$,
$$\mathbb{E}\left( \mathrm{e}^{-\theta T_{a,0}} \right) = \int_0^\infty \frac{a}{\sqrt{2 \pi t}} \exp\left( -\theta t -\frac{a^2}{2t} \right) \frac{\mathrm{d} t}{t} \stackrel{t = a^2 u}{=} \int_0^\infty \frac{1}{\sqrt{2 \pi u}} \exp\left( -\theta a^2 u -\frac{1}{2 u} \right) \frac{\mathrm{d} u}{u}$$ According to Gradshteyn and Ryzhik, formula 3.471.9, see also this math.SE question, we have: $$\mathbb{E}\left( \mathrm{e}^{-\theta T_{a,0}} \right) = \frac{1}{\sqrt{2 \pi}} \cdot \left. 2 \left(2 \theta a^2\right)^{\nu/2} K_{\nu}\left( 2 \sqrt{\frac{\theta a^2}{2}} \right) \right|_{\nu = \frac{1}{2}} = \sqrt{\frac{2}{\pi}} \sqrt{2\theta} a K_{1/2}(a \sqrt{2 \theta} ) = \mathrm{e}^{-a \sqrt{2 \theta}}$$
The time $T_{a,b}$ for standard Brownian motion $B(t)$ to hit the line $a+ b t$ is equal in distribution to the time for the Wiener process $W_{-b, 1}(t)$ to hit the level $a$. Thus we can use the Girsanov theorem, with $M_t = \exp(-b B(t) - b^2 t/2)$: $$\mathbb{E}_P\left( \mathrm{e}^{-\theta T_{a,b}} \right) = \mathbb{E}_Q\left( \mathrm{e}^{-\theta T_{a,0}} M_{T_{a,0}} \right) = \mathbb{E}_Q\left( \mathrm{e}^{-\theta T_{a,0}} \mathrm{e}^{-b a - b^2 T_{a,0}/2} \right) = \exp(-b a - a \sqrt{b^2 + 2\theta})$$
Second part
In order to arrive at $\mathbb{P}(T_{a,b} \leqslant t)$ notice that $$\mathbb{P}(T_{a,b} \leqslant t) = \mathbb{E}_Q\left( [T_{a,0} \leqslant t] \mathrm{e}^{-b a - b^2 T_{a,0}/2} \right) = \int_0^t \frac{a}{\sqrt{2 \pi s}} \exp\left( -b a - \frac{b^2 s}{2} -\frac{a^2}{2s} \right) \frac{\mathrm{d} s}{s}$$ The integral is doable by noticing that $$-b a - \frac{b^2 s}{2} -\frac{a^2}{2s} = -\frac{(a+b s)^2}{2s} = -2a b -\frac{(a-b s)^2}{2s}$$ and $$\frac{a}{s^{3/2}} = \frac{\mathrm{d}}{\mathrm{d} s} \frac{-2a}{\sqrt{s}} = \frac{\mathrm{d}}{\mathrm{d} s} \left( \frac{b s - a}{\sqrt{s}} - \frac{b s + a}{\sqrt{s}}\right)$$ Hence $$\begin{eqnarray} \mathbb{P}(T_{a,b} \leqslant t) &=& \int_0^t \frac{1}{\sqrt{2\pi}} \exp\left(- \frac{(a+bs)^2}{2 s}\right) \mathrm{d} \left( - \frac{b s + a}{\sqrt{s}} \right) + \\ &\phantom{+}& \int_0^t \frac{1}{\sqrt{2\pi}} \exp(-2ab) \exp\left(- \frac{(b s-a)^2}{2 s}\right) \mathrm{d} \left( \frac{b s - a}{\sqrt{s}} \right) \\ &=& -\Phi\left( \frac{b t + a}{\sqrt{t}} \right) + \lim_{t \searrow 0} \Phi\left( \frac{b t + a}{\sqrt{t}} \right) + \\ &\phantom{=}& \mathrm{e}^{-2 a b} \Phi\left(\frac{b t - a}{\sqrt{t}} \right) - \mathrm{e}^{-2 a b} \lim_{t \searrow 0} \Phi\left(\frac{b t - a}{\sqrt{t}} \right) \end{eqnarray}$$ where $\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-z^2/2} \mathrm{d} z$ is the cumulative distribution function of the standard normal variable. Since we assumed $a > 0$, $$\lim_{t \searrow 0} \Phi\left( \frac{b t + a}{\sqrt{t}} \right) = \Phi(+\infty) = 1 \qquad \lim_{t \searrow 0} \Phi\left( \frac{b t - a}{\sqrt{t}} \right) = \Phi(-\infty) = 0$$ and we arrive at c.d.f of the inverse Gaussian random variable: $$\mathbb{P}(T_{a,b} \leqslant t) = 1 - \Phi\left( \frac{b t + a}{\sqrt{t}} \right) + \mathrm{e}^{-2 a b} \Phi\left( \frac{b t - a}{\sqrt{t}} \right)$$
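(Editorial addition, not part of the original answer.) The closed-form c.d.f. lends itself to a quick Monte Carlo sanity check. The sketch below discretizes the Brownian path with a plain Euler scheme; crossings between grid points are missed, so the simulated probability is biased slightly low.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b, t_max = 1.0, 0.5, 2.0
n_paths, n_steps = 5_000, 1_000
dt = t_max / n_steps
times = dt * np.arange(1, n_steps + 1)

# cumulative sums of N(0, dt) increments give discretized Brownian paths
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# a path counts as a hit if it ever reaches the line a + b*s on the grid
p_mc = (paths >= a + b * times).any(axis=1).mean()

# closed form: P(T_{a,b} <= t) = 1 - Phi((a+bt)/sqrt(t)) + exp(-2ab) Phi((bt-a)/sqrt(t))
p_exact = (1 - norm.cdf((a + b * t_max) / np.sqrt(t_max))
           + np.exp(-2 * a * b) * norm.cdf((b * t_max - a) / np.sqrt(t_max)))
print(p_mc, p_exact)  # should agree to roughly two decimal places
```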
-
@BenDerrett Evidently my answer is not helpful. Did you manage to solve it to your satisfaction? If so, I would appreciate it if you dropped a hint, or better yet, posted it as a solution. Thanks. – Sasha Apr 20 '12 at 14:42
Thanks for taking the time to write this. This is probably the best way to show 2). The question suggests there's an easier way to show this, assuming 1). – Ben Derrett Apr 20 '12 at 16:03
@BenDerrett I suspect that the problem intends to prove 2) by differentiating to find the pdf, then computing the Laplace transform of the pdf. I would very much like to know if there is a more elegant way. So if you find one, please take a moment to share. – Sasha Apr 20 '12 at 17:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9163931012153625, "perplexity_flag": "head"}
|
http://en.wikipedia.org/wiki/Brownian_dynamics
|
Brownian dynamics
Brownian dynamics (BD) can be used to describe the motion of molecules in molecular simulation. It is a simplified version of Langevin dynamics and corresponds to the limit where no average acceleration takes place during the simulation run. This approximation can also be described as 'overdamped' Langevin dynamics, or as Langevin dynamics without inertia.
In Langevin dynamics, the equation of motion is
$M\ddot{X} = - \nabla U(X) - \gamma M\dot{X} + \sqrt{2 \gamma k_B T M} R(t)$
where $U(X)$ is the particle interaction potential; $\nabla$ is the gradient operator, so that $- \nabla U(X)$ is the force calculated from the particle interaction potentials; the dot is a time derivative, so that $\dot{X}$ is the velocity and $\ddot{X}$ is the acceleration; $T$ is the temperature; $k_B$ is Boltzmann's constant; and $R(t)$ is a delta-correlated stationary Gaussian process with zero mean, satisfying
$\left\langle R(t) \right\rangle =0$
$\left\langle R(t)R(t') \right\rangle = \delta(t-t').$
In Brownian dynamics, no average acceleration is assumed to take place. Thus, the inertial term $M\ddot{X}(t)$ is neglected, and the sum of the remaining terms is zero.
$0 = - \nabla U(X) - \gamma M\dot{X}+ \sqrt{2 \gamma k_B T M} R(t)$
Defining $\zeta = \gamma M$, and using the Einstein relation, $D = k_B T/\zeta$, it is often convenient to write the equation as,
$\dot{X}(t) = - \nabla U(X)/\zeta + \sqrt{2 D} R(t).$
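(Added illustration, not from the original article.) In practice this overdamped equation is integrated with an Euler-Maruyama step. The sketch below uses an illustrative 1-D harmonic potential $U(x) = \tfrac{1}{2} k x^2$ (all parameter values are arbitrary choices) and checks the stationary variance against the equipartition value $k_B T / k$.

```python
import numpy as np

def brownian_dynamics(x0, grad_U, zeta, D, dt, n_steps, rng):
    """Euler-Maruyama integration of dX = -grad U(X)/zeta dt + sqrt(2D) dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment ~ N(0, dt)
        x[i + 1] = x[i] - grad_U(x[i]) / zeta * dt + np.sqrt(2.0 * D) * dW
    return x

# illustrative harmonic well U(x) = 0.5*k*x^2, so grad U(x) = k*x
k, zeta, kBT = 1.0, 1.0, 1.0
D = kBT / zeta                                   # Einstein relation
rng = np.random.default_rng(1)
traj = brownian_dynamics(2.0, lambda x: k * x, zeta, D, 1e-3, 100_000, rng)
print(traj[20_000:].var())                       # ~ kBT/k = 1.0 after burn-in
```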
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.925168514251709, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/73817/a-test-for-randomness-of-direction-of-vector-data
|
A test for randomness of direction of vector data
I want to test the hypothesis that a group of vectors in 3D space, say given by a long list of xyz coordinates from some experiment, have no preferred direction. Is it sufficient to pick some direction in space, say the x-axis, calculate the cosine of the angle between each data vector and this direction, and look at the mean cosine? Thanks, -nuun
-
I cannot imagine that is the right way to do it, but it has been decades since I had a statistics class. For this kind of very specific question, I think you will have a better experience at stats.stackexchange.com/questions although, for the moment, their site is not coming up. – Will Jagy Aug 27 2011 at 4:17
4 Answers
-
I presume that by "having no preferred direction" you mean that the distribution on the sphere is uniform - as has just been pointed out by Gerry. Testing uniformity on the sphere is a classical statistical problem. There is A LOT about it: you may have a look at the book "Directional Statistics" by Mardia and Jupp (especially Chapters 9 and 10), or, for instance, at these more recent papers by Pycke and Bakshaev.
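(Added illustration, not part of the original answer.) One classical procedure covered by Mardia and Jupp is the Rayleigh test, which rejects uniformity when the mean resultant vector is long; under uniformity in three dimensions, $3 n \bar{R}^2$ is asymptotically $\chi^2_3$. A sketch, assuming the data arrive as rows of (normalizable) 3-vectors:

```python
import numpy as np
from scipy.stats import chi2

def rayleigh_test(vectors):
    """Rayleigh test of uniformity on the sphere for an (n, 3) array."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    n = len(v)
    rbar = np.linalg.norm(v.mean(axis=0))   # mean resultant length
    stat = 3.0 * n * rbar**2                # ~ chi^2_3 under uniformity
    return stat, chi2.sf(stat, df=3)

rng = np.random.default_rng(0)
uniform = rng.normal(size=(500, 3))         # normalized Gaussians are uniform directions
print(rayleigh_test(uniform))               # large p-value expected
biased = uniform + np.array([0.5, 0.0, 0.0])
print(rayleigh_test(biased))                # tiny p-value expected
```

One caveat: a statistic based on the mean resultant has little power against antipodally symmetric or multimodal alternatives (compare the discussion of several preferred directions further down), where one of the omnibus tests from the references above is more appropriate.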
-
There is a notion of uniform distribution on spheres, and a notion of discrepancy on a sphere (which is a numerical measure of the distance from uniform distribution). That should give you some search terms. One paper on the topic (probably more theoretical than you want, but it should have or at least point to the relevant definitions) is Martin Blumlinger, Slice discrepancy and irregularities of distribution on spheres, Mathematika 38 (1991) 105-116.
-
Here is one approach to consider.
Treating the data as points on the surface of the unit sphere, consider the collection of convex subsets of this surface that contain all of your observations. Then define $S$ to be a set of minimum area among such sets. One way to interpret the idea of "having no preferred direction" is that this set $S$ should be almost as big as the entire surface; conversely, a preferred direction would manifest as the data being tightly concentrated in a small area on the sphere.
This is just a rough idea -- figuring out how to operationalize "almost as big as the entire surface" would depend on your statistical needs. Hope I haven't missed the mark too badly.
-
That's a decent idea, I think. But what if there are several preferred directions? – Nuun Aug 27 2011 at 6:09
Yeah, in that case you'd need to cluster the observations in some way first, so the relevant area would be over disjoint convex covering regions. But the basic idea of comparing the containing area to the total possible area still stands I would think. There are surely lots of (different) ways to do (basically) what you want -- it all boils down to operationally defining "preferred direction". Good luck. – R Hahn Aug 27 2011 at 6:33
A set of pretty uniformly distributed points, and the same set with one point appearing a billion times, will get the same measure. Your test is valid, but its power will be very low. – Brendan McKay Oct 27 at 7:44
@Brendan, I guess as an applied statistician I'm usually happy to rule out the kind of degeneracy you mention, but your point is well taken. On a separate note, several years ago I used your nauty program when doing my masters degree on a branch-and-bound algorithm for the maximum independent set problem. So thanks for that! – R Hahn Oct 27 at 15:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475146532058716, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/146003/scheme-flat-of-finite-type-over-mathbbz?answertab=active
|
# Scheme flat of finite type over $\mathbb{Z}$
Let $X$ be a scheme which is integral, of finite type, flat and separated over $\mathbb{Z}$.
Let $D \subseteq X$ be a prime divisor on $X$ which is not flat over $\mathbb{Z}$.
Is it true that $D(\mathbb{F}_p) = \emptyset$ for all primes $p$, with at most one exception?
-
Remember that a morphism from an integral scheme to the spectrum of a Dedekind domain is flat if and only if it is dominant. – QiL'8 May 16 '12 at 21:40
Hm, does this show that $D(\mathbb{F}_p) = \emptyset$ for all but finitely many $p$...? I must be missing something. – Evariste May 16 '12 at 22:44
Aha, OK. The fact that $D$ is not $\mathbb{Z}$-flat shows that for only finitely many $p$, the fiber $D_{\mathbb{F}_p}$ is non-empty. But since $D$ is supposed to be irreducible, there can be at most one such $p$. Right? – Evariste May 16 '12 at 22:54
Yes you are correct. – QiL'8 May 16 '12 at 22:55
Thank you QiL! It wasn't so difficult after all, but I was looking at it the wrong way. – Evariste May 16 '12 at 22:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9603989124298096, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/38811/greens-function-for-wave-equations-in-r-or-r/38817
|
## Green’s function for wave equations in R² or R³
Hello,
For almost a year I have been searching for the Green's function for the wave equation in R² or R³ with some boundary conditions. As far as I know, when the boundaries permit the method of images, we can get the Green's function. But this requirement is too strong.
What we would like is the following: the wave is confined in a convex, sufficiently smooth domain, and on the boundary either Dirichlet or Neumann conditions can be imposed. These conditions are there just to avoid the diffraction problem, which can be too complicated for us.
During this search, I encountered books by Prof. Melrose and Prof. Michael E. Taylor, and also the formidable three volumes by Prof. Hörmander. I still have not managed to find it.
Thanks in advance for any comments!
Best!
-
## 1 Answer
What do you mean by getting the Green's function? If you mean in closed form, then this is hopeless for most domains.
Otherwise, the proper way to express the solution of $$u_{tt}=\Delta u,\qquad u(0)=u_0,\qquad u_t(0)=u_1$$ with homogeneous boundary conditions BC (say Dirichlet or Neumann) is to use the Laplace transform $\hat u$ of $u$ as an auxiliary function: $$\hat u(z):=\int_0^{+\infty}\exp(-sz)u(s)ds.$$ For each $z$ of positive real part, $\hat u(z)$ solves the elliptic problem $$(-\Delta+z^2)w=u_1+zu_0.$$ The above problem, with BC, is well-posed, for every $z$ away from the imaginary axis, and the map $z\mapsto w$ is holomorphic. One recovers $u$ through a Cauchy integral along an appropriate contour in the complex plane. This amounts to express the Green's function of the wave equation as a Cauchy integral in terms of the Green's functions of the elliptic problems parametrized by $z$. This expression may be used to analyze the singularities of the Green's function.
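(Added illustration, not part of the original answer.) To make the elliptic step concrete in the simplest setting, the sketch below solves $(-\Delta + z^2)w = u_1 + z u_0$ on the interval $(0,1)$ with homogeneous Dirichlet conditions by second-order finite differences, for one sample $z$ away from the imaginary axis; the Cauchy contour integral that would recover $u(t)$ from a family of such solves is not attempted here.

```python
import numpy as np

n = 200                                   # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# standard second-order finite-difference Laplacian with Dirichlet BC
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2

u0 = np.sin(np.pi * x)                    # illustrative initial displacement
u1 = np.zeros(n)                          # zero initial velocity

def resolvent_solve(z):
    """Solve (-Laplacian + z^2) w = u1 + z*u0 for one complex z with Re z > 0."""
    A = -lap + (z**2) * np.eye(n)
    return np.linalg.solve(A, u1 + z * u0)

w = resolvent_solve(1.0 + 2.0j)           # the map z -> w is holomorphic off i*R
# exact answer for this u0 is z*sin(pi*x)/(pi^2 + z^2); compare max moduli
print(np.abs(w).max(), abs(1.0 + 2.0j) / abs(np.pi**2 + (1.0 + 2.0j)**2))
```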
-
Thanks Prof. Serre. I am wondering whether the geometric optics would work under our assumptions on the boundaries (smooth concave closed domain). From geometric optics, people use ray-tracing method in computer graphics and acoustics as well. It is an approximation to the real physics phenomenon. But under our assumptions, we hope that there was no diffraction and hence the approximation was indeed accurate. – Anand Sep 15 2010 at 13:23
Do you mean a smooth convex closed domain? – Denis Serre Sep 15 2010 at 13:33
Yes. Like the interior of a ball. :-) – Anand Sep 15 2010 at 14:15
Well, geometric optics is a vast topic. It is impossible to give even a flavour of it in a few lines, because it involves the theory of pseudo-differential operators, even for a domain without boundary (${\mathbb R}^n$, compact manifolds). In the presence of a boundary, it can be a nightmare when rays reach the boundary tangentially. Fortunately, this does not happen if the domain is convex, and thus the theory is essentially ray-tracing plus ordinary reflexion. – Denis Serre Sep 16 2010 at 8:24
Dear Prof. Serre, that's true. Even when the domain has a corner, such as the edges of a box, there will be very complicated diffractions. That's why we impose the strong conditions that the domain is convex (to avoid tangent incident rays) and sufficiently smooth (to avoid diffraction by corners). You said that it is "essentially" ray-tracing plus reflexion - what do you mean by "essentially"? I am wondering whether, under these conditions, we can prove rigorously that ray-tracing plus ordinary reflexion works. Thank you very much for your useful comments! :-) – Anand Sep 16 2010 at 9:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199057817459106, "perplexity_flag": "head"}
|
http://nrich.maths.org/1366/index
|
# Which Twin Is Older?
##### Stage: 5
Article by Ruth Williams
## Introduction
If you follow through this project, you will be led to an amazing conclusion! Suppose that one of a pair of identical twins goes on a journey into space and then returns to compare his or her (let's say her) age with that of the other twin who has remained at the same location on Earth. It turns out that after the journey, one twin will be younger than her sister! You must be asking how that could be possible, and the aim of this project is to show you how.
Let's start by playing a guessing game - we'll find out if you're right at the end! Suppose the travelling twin goes on a 12-year journey - 6 years out and 6 years back. Obviously she will be 12 years older when she gets back. What you have to do is to guess how much older the non-travelling twin will be....! I need to tell you how fast the moving twin travels - let's say four-fifths of the speed of light - although I dare say this will not help you too much with your guess! Let me suggest that you guess a whole number between 1 and 30!
Before we consider the twins, we must set up certain tools which we shall need in order to understand what is going on. We shall make a lot of use of a certain kind of graph called a space-time diagram. Don't worry if you have never used graphs before - we'll try to start from the basics.
## Graphs
The simplest sort of graph is just a picture of what is happening on a chosen flat surface. Let's assume that you are sitting at a table or a desk, and that you have two rulers and a large piece of paper. Starting at the bottom left-hand corner, place one ruler along the bottom and one up the left-hand side, with the paper in position between them. What you have should look like this:
We shall call the point where the two rulers meet (the zero on both scales) the origin, or O for short, and the rulers provide the two axes . Following convention, we shall call the one along the bottom the x-axis, and the one up the side the y-axis. We can label any point on the paper by its values of $x$ and $y$. For example, the origin has $x=0$, $y=0$. The point $1cm$ from the y-axis and $2cm$ from the x-axis has $x=1$, $y=2$ (I really do have these the right way round!).
Exercise Draw your own graph and mark the following points:
$x=3$, $y=0$;
$x=2$, $y=1$;
$x=3$, $y=2$;
$x=4$, $y=1$.
What shape do they form?
Now imagine some ants with very dirty feet. Suppose one walks across the paper staying always a distance of $1cm$ from the x-axis. Its path would be a line, described by the equation $y=1$ (a fancy way of saying what is said in words in the previous sentence). Another might walk always $3cm$ from the y-axis; the equation of its path would be $x=3$. A rather more original ant might walk so that its distance from both axes is always the same; it would go through $x=1$, $y=1$; $x=2$, $y=2$; etc and its path would be given by the equation $x=y$.
## Space-time diagrams
In this section, we shall draw some graphs which look very similar to those in the previous section, but their meaning will be rather different. We still have two axes, and the one labelled $x$ still represents distance in a certain direction. However the one that was labelled $y$ is now labelled $t$, which represents time, measured in seconds say. Imagine for example a ball at a fixed point $3cm$ from O; its path in space-time will be the straight line $x=3$. Now suppose the ball rolls along a straight path; then points on the graph would correspond to positions of the ball at particular times eg $x=5$, $t=4$ would correspond to the ball being $5cm$ from the origin $4$ seconds after the measurement of time began.
Exercise Plot points on the graph corresponding to the following events in the ball's history: $x=4$, $t=1$; $x=4$, $t=2$; $x=4$, $t=3$. How would you interpret this?
Now try $x=1$, $t=1$; $x=3$, $t=2$; $x=5$, $t=3$. What do you notice about these points?
If the path of the ball is a straight line on the graph, it means that the ball is moving with constant speed (perhaps speed zero). In that case we can work out the speed by seeing how far the ball travels in one second - we divide the change in the x-value by the corresponding change in the t-value. So in our second example, the speed of the ball is $2cm/sec$. Do you agree?
Exercise Draw a graph representing the motion of a ball which moves backwards and forwards between two points, with constant speed (but periodically changing direction, of course).
We are now going to do something rather strange to our graph. In the theory called special relativity, which is what makes it interesting to think about twins and space travel, we often need to plot light rays on our graphs. Now light rays travel in empty space with a constant and very large (but finite) speed; a ray of light reaching you from the sun left there about $8$ minutes ago. (People used to believe that light travelled infinitely fast so that you could see the stars at the moment you observed them but this is now known to be wrong.) To be more precise, the speed of light is about $30000000000cm/sec$! This could lead to some very strange scales on the axes of a space-time graph, so we shall choose to measure distance in a different way; the units on the x-axis will be light-seconds, that is the distance travelled by light in one second. (If the scale for $t$ is years, the corresponding x-scale will be light-years.)
The big advantage of this is that the path of a ray of light will always be at $45$ degrees to the axes. This means that a light ray through O will be at equal angles to both axes, as you see on the diagram. (Although light travels at constant speed so its path is a straight line, we represent it by a wiggly line, as shown, to distinguish light rays from paths of other objects with mass.)
## Measuring time
We tend to assume that when people disagree about what time it is, someone's watch is wrong, or perhaps the people are in different time zones! But it is even more complicated than that! According to special relativity, two people can be in the same place and synchronise their watches, but if one is moving relative to the other, they will subsequently disagree about what time it is. Unfortunately if you team up with a friend and try to test this theory, you will be disappointed, not because the theory is wrong, but because at the speed you are likely to be able to run, even if you are a super-athlete, the effect will be too small to observe. It is only when speeds become very large (sizeable fractions of the speed of light) that this strange phenomenon can be observed and, even then, only by using extremely accurate clocks.
To see how this could work, let us consider a "thought experiment" similar to one which Einstein suggested. (It is a "thought" experiment rather than a real one because it cannot actually be done, as you will realise.) Suppose you watch the clock on Big Ben through a very powerful telescope, as you move away from it on a very fast train which passed the clock at exactly midday (I know trains don't go right past Big Ben but let's pretend!). Now if the train could move at the speed of light, what would you see happening to the hands of the clock? They would appear to stand still, both pointing to $12$! Why? This is because the light emitted by (or reflected from) the hands at midday would be travelling away from the clock at exactly the same speed as you on the train, and light emitted at later times could not catch up with the train! Weird!
Can you see why this is a "thought experiment" (quite apart from the fact that trains don't go past Big Ben)? It is because a train could not actually move with the speed of light (which is something else that special relativity tells us - any object which weighs anything can never move as fast as light). But suppose the train moved at half the speed of light (still very fast) - you would see the clock hands move, but more slowly than those on your watch! Measurement of time depends on how you are moving!
How can this be? To understand this, let's consider a special sort of clock, a light clock. You are not likely to find one of these beside your bed waking you up in the morning. It consists of a source of light which emits signals which travel a distance $D$ and are then reflected back to the source. The time gap or interval between each time a signal is sent and when it is received back defines the ticks of the clock; they occur a time $2T$ apart, where \begin{equation} {T} = {D/c} \end{equation} with $c$ representing the speed of light. (Remember that speed = distance/time, so time = distance/speed.)
Now let's suppose that a moving rocket carries such a clock - the experience of the crew will be that the ticks occur at intervals of $2T$. Now suppose that the crew of a stationary rocket observe the clock of the moving one, and compare it with their own, also ticking away at intervals of $2T$. What will they see? The diagrams below should suggest the answer. (I am sorry that my rockets look more like fish!)
For the moving rocket, the light is reflected from the mirror at the half-way time between when the signal is sent and when it is received back. From the stationary rocket, the travel time appears to be $2T^{\prime}$, say. We can work this out using a very important theorem (a fancy name for something that has been proved to be true in mathematics!) - that of Pythagoras.
\begin{equation} a^2 + b^2 = c^2 \quad (1) \end{equation} In the rocket diagram, we have a right-angled triangle, so \begin{equation} D^2 + v^2 T^{\prime 2} = c^2 T^{\prime 2} \quad (2) \end{equation} We now solve this for $T^{\prime}$: \begin{equation} T^{\prime 2} (c^2 - v^2) = D^2, \quad (3) \end{equation} \begin{equation} T^{\prime 2} c^2 (1 - v^2/c^2) = D^2, \quad (4) \end{equation} giving \begin{equation} T^{\prime} = \frac{D}{c \sqrt{1 - v^2/c^2}} \quad (5) \end{equation} and so \begin{equation} T^{\prime} = \frac{T}{\sqrt{1 - v^2/c^2}}, \quad (6) \end{equation} and the clock rates will be different (unless of course $v=0$). In fact $T^{\prime}$ is larger than $T$, so it looks to the stationary crew as though the moving clock has longer intervals between the ticks and so is going slow.
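To get a feel for the size of the effect in equation (6), here is a quick tabulation (an addition to this write-up, not part of the original article) of the factor $T^{\prime}/T = 1/\sqrt{1 - v^2/c^2}$:

```python
import math

# T'/T = 1/sqrt(1 - v^2/c^2), equation (6) above; beta = v/c
for beta in (1e-6, 0.1, 0.5, 0.8, 0.99):   # 1e-6 is roughly an airliner
    factor = 1.0 / math.sqrt(1.0 - beta**2)
    print(f"v = {beta:g} c   T'/T = {factor:.12f}")
```

At everyday speeds the factor differs from 1 only around the twelfth decimal place, which is why you never notice it; at $v = 4c/5$ (the speed used for the twins below) it is already $5/3$.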
Once we accept that there is no universal definition of time which holds for everyone, we have to think what we really mean by measuring time. In some sense, we are measuring distances along our paths in space-time (think back to the space-time diagram). The twins follow different paths in space-time, so it is not so surprising that they have experienced different amounts of time.
But don't jump to any conclusions - things are not always what they seem...! The path of the travelling twin looks longer but does that mean she has experienced more time? To see how tricky this sort of question is, think about a situation which might be rather similar. Suppose that you live in Trumpington, on the south side of Cambridge, and want to go shopping in Tesco's on the northern extreme of the city. You could travel through the centre (as nearly as you can, avoiding pedestrian zones), on the ring road, or on the by-pass - see the diagram below.
At most times of the day, you would find that the longest route - the by-pass - took the least time, followed by the ring road. Are you now convinced that the obvious answer is not always the right one??
We have seen how time is measured using light signals. Once we have an accurate clock, we can then measure distance using light or radar signals reflected from the distant object; the distance will be half the light travel time, multiplied by the speed of light.
\begin{equation} {D} = {cT}\quad (7) \end{equation} Like time, distance is a quantity where the result of the measurement depends on the way the observer is moving.
## K-factors
I am now going to describe an idea which should help us to do calculations of the type of effect I have been telling you about.
Suppose an astronaut B (for Ben) is in a rocket moving at speed $c/5$ away from another astronaut A (for Alf) at a space station. Once a year on March 13, Alf sends birthday greetings to Ben. Suppose that the radio message carrying this greeting in the year 2010 is measured by the space-station to travel a distance of half a light-year to reach the rocket, taking half a year to do this. The next message is sent exactly a year later. When this radio signal has travelled for half a year to where Ben the astronaut received the previous signal, the rocket has moved one-fifth of a light-year further on, so this signal has to travel for longer to catch up with the rocket; in fact, the time measured by Alf when the signal reaches the rocket is three-quarters of a year after it was sent - see the diagram below. Poor Alf concludes that the birthday greetings sent yearly will be received by Ben at intervals of one and a quarter years, according to Alf's clock.
This does not tell us what Ben will measure for these intervals, but it does suggest that it may well not be a year! A similar effect will happen for signals sent from Ben to Alf.
Now let's look at the general case and make this more precise. Consider two observers Alf and Ben moving away from each other at constant speeds.
Alf sends a light signal and then another at time $T$ later. Ben receives the two signals at times $T^{\prime}$ apart, according to his clock. Then we define a quantity $K$ by \begin{equation} {K} = {T^{\prime}/T}. \quad (8) \end{equation} Note that if Alf and Ben were moving with the same speed (in the same direction!), $K$ would be one. We shall see later precisely how $K$ depends on the relative speed of Alf and Ben. $K$ is sometimes called the Doppler shift factor and the effect is similar to that for sound waves - you must all know the change in sound of the siren of an ambulance or police car as it approaches and then recedes.
How can we measure $K$? The obvious way is for the observers to keep records of when the light signals are sent and received so that they can work out $T$ and $T^{\prime}$, and hence $K$, when they meet later. Another possibility would be for one to have a very powerful telescope with which to watch the clock of the other (this is really the train going past Big Ben all over again!).
We need to make some assumptions about this number $K$ in order for it to be useful to us. We assume first that when Alf and Ben are moving with constant speeds, then $K$ does not depend on when $T$ and $T^{\prime}$ are measured, nor does it depend on how big $T$ is. So for example, if Alf waits twice as long between sending light signals, $K$ will be the same. See if you can fill in the numbers $T_1^{\prime}$, $T_2^{\prime}$, $T_3^{\prime}$ in this case.
(Notice that it is OK to start measuring time when the two observers are together, and they can both set their stopwatches to zero at that moment.) So in general we have $T^{\prime} = K T$.
The second thing we assume, which is equally important, is that the $K$ measured by Ben for light signals from Alf is the same as that measured by Alf for light signals from Ben. Why do we assume this? Imagine two identical cars back to back on a road. Car $A$ stays still and car $B$ moves off at $50km/hr$ away from $A$. Passengers looking through the back window of $A$ will see $B$ disappearing at the appropriate rate. Passengers in $B$ will see a very similar picture if they look through their back window - car $A$ will appear to be moving away at the same speed! Have you ever had that uncanny experience of sitting in a train and thinking it has just moved off, when it turns out that it was the neighbouring train moving off in the opposite direction and your train is still stationary?
This assumption makes it possible for one observer to measure $K$ by radar without any co-operation from the other. Can anyone guess how? Let me give you a hint.
Suppose as usual that Alf and Ben are moving apart with constant speed. Alf sends out two signals at an interval of $T$, Ben reflects them at interval $T^{\prime}$ and Alf receives them back again at interval $T^{\prime\prime}$. How can Alf work out $K$?
Answer
\begin{equation} {T^{\prime}} = {K T} \quad (9), \end{equation} \begin{equation} {T^{\prime\prime}} = {K T^{\prime}}\quad (10) \end{equation} Therefore \begin{equation} {T^{\prime\prime}} = {K(K T)} = {K^2 T}\quad (11) \end{equation} so we have \begin{equation} {K} = {\sqrt{T^{\prime\prime}/T}}. \quad (12) \end{equation}
Problem In order to perform a complicated docking manoeuvre, it is essential that two spacecraft be held at rest relative to each other. Devise a simple experiment to check that this is so.
## The relation between K and speed
We know already that if the relative speed of Alf and Ben is zero, then $K$ = $1$. What is its value for general speeds? There is a clever way of working this out, which uses the idea we have just been talking about, plus the idea of simultaneity. What does that mean? I'll explain about it in a minute.
First let's imagine our usual two observers travelling away from each other with constant speed $v$. Let's suppose that when they are together, they both set their clocks to $t = 0$. At time $T$ by his clock, Alf emits a radio signal; Ben reflects it back at time $T'$ by his clock and Alf gets it back again at time $T^{\prime\prime}$ by his clock. Let's draw a picture as usual.
Now we know that \begin{equation} {T^{\prime}} = {K T}\quad (13) \end{equation} \begin{equation} {T^{\prime\prime}} = {K T^{\prime}} = {K^2 T}. \quad (14) \end{equation} So Alf thinks that the travel time for the radio pulse is \begin{equation} {T^{\prime\prime}-T} = {K^2 T-T} = {(K^2-1)T}, \quad (15) \end{equation} and Alf works out that the distance, $D$, between himself and Ben, is half the distance travelled by the radio signal , which is half the speed times the time: \begin{equation} {D} = {\frac {1} {2} c(K^2-1)T}. \quad (16) \end{equation} Now we know that the distance between Alf and Ben is constantly changing, so we have to ask when (i.e. at what time) this is the distance between Alf and Ben. This is where we need the idea of simultaneity. The distance is clearly measured when Ben is at the point $P$, but what time does this correspond to for Alf? Well, Alf knows that the radio pulse travels the same distance out and back, so the event $Q$, which Alf judges to be at the same time as $P$, will be half way between $T$ and $T^{\prime\prime}$. (We say that $Q$ is simultaneous with $P$ for Alf - it just means at the same time.) So the time at $Q$ is \begin{equation} {T_Q} = {\frac {1} {2}(T+T'')} = {\frac {1} {2} (K^2+1)T}. \quad (17) \end{equation} Thus Alf concludes that Ben has travelled a distance $D$ in time $T_Q$, so his speed $v$ is given by \begin{equation} {v} = {\frac {D} {T_Q}} = {\frac {\frac {1} {2} c(K^2-1)T} {\frac {1} {2} (K^2+1)T}}. \quad (18) \end{equation} Therefore we have \begin{equation} {\frac {v} {c}} = {\frac {K^2-1} {K^2+1}}. \quad (19) \end{equation} We can now work out $K$ in terms of $\frac {v} {c}$: \begin{equation} {(K^2+1)\frac {v} {c}} = {K^2-1}, \quad (20) \end{equation} \begin{equation} {K^2 \frac{v}{c} + \frac{v}{c}} = {K^2-1}, \quad (21) \end{equation} \begin{equation} {1+ \frac{v}{c}} = {K^2(1- \frac{v}{c})}, \quad (22) \end{equation} Therefore \begin{equation} {K^2} = {\frac {1+ \frac{v}{c}} {1- \frac{v}{c}}}\quad (23) \end{equation} and so \begin{equation} {K} = \sqrt{{\left({\frac{1+\frac{v}{c}}{1-\frac{v}{c}}}\right)}}. \quad (24) \end{equation} Try working out some values of K:
e.g. $v = c/4$ - that gives $K = \sqrt{5/3} = 1.291$.
Now try $v = c/2, 9c/10, 99c/100$.
Now it doesn't matter whether Alf is still and Ben is moving or the other way round - $K$ is still the same. What is different is if Alf and Ben are approaching each other rather than moving apart - in that case, we take the value of $v$ to be negative, but we can use the same formula.
e.g. if Alf and Ben approach each other with relative speed $c/2$, \begin{equation} K = \sqrt{{\left({\frac{1+(-1/2)}{1-(-1/2)}}\right)}} = \sqrt{{\left({\frac{1/2}{3/2}}\right)}} = \sqrt{1/3}. \quad (25) \end{equation}. Let's work out $K$ for some typical speeds:
1) airliners approaching each other with speed $1000km/hr$;
2) galaxies in our cluster moving apart with relative speeds of $500km/s$;
3) a car approaching a policeman at $100km/hr$;
4) you walking towards your friend at $5km/hr$.
Do you think any of these effects would be observable?
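Here is a small script (an addition for this write-up, not part of the original article) that evaluates equation (24) for the exercise values above and for the four typical speeds just listed, with a negative $v$ meaning approach:

```python
import math

C = 299_792_458.0                        # speed of light in m/s

def k_factor(beta):
    """K = sqrt((1 + beta)/(1 - beta)), beta = v/c, negative if approaching."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

for beta in (0.25, 0.5, 0.9, 0.99):      # the exercise values
    print(f"v = {beta:.2f}c  ->  K = {k_factor(beta):.4f}")

cases = [("airliners, approaching", -1000 / 3.6),   # relative speeds in m/s
         ("galaxies, receding", 500_000.0),
         ("car, approaching", -100 / 3.6),
         ("walking, approaching", -5 / 3.6)]
for label, v in cases:
    print(f"{label:22s} K = {k_factor(v / C):.12f}")
```

Only the galactic recession shifts $K$ noticeably away from 1 (to about 1.00167); the everyday cases differ from 1 only from the seventh decimal place on, which answers the observability question.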
## The twin paradox - at last!
What happens? Why is it a paradox? (What does the word paradox mean anyway?)
Let's consider identical twins, Albertina and Brigitta! Albertina stays at home in London, while Brigitta, the adventurous one, goes on a space trip. She travels away from Earth for 6 years, as measured by her clock, in a very fast space ship which travels at $v = 4c/5$. She then returns at the same speed for 6 years. So Brigitta measures a time of 12 years for her trip - she is 12 years older when she gets back home to Albertina in London. But how much older is Albertina? Let's see whether your guess was right or perhaps just close!
Let's draw a space-time diagram:
On Brigitta's outward journey, the twins move apart with relative speed $v = 4c/5$, so \begin{equation} {K^2} = {\frac{1+\frac{4}{5}}{1-\frac{4}{5}}} = {9}, \quad (26) \end{equation} and we have $K=3$. On Brigitta's return journey, they approach each other with the same speed, so \begin{equation} {K^2} = {\frac{1-\frac{4}{5}}{1+\frac{4}{5}}}\quad (27) \end{equation} and $K=1/3$. All we need to do to work out the time that Albertina measures is to put in one light signal! Any suggestions where?
Suppose Albertina sends a signal at the point $S$, chosen so that it reaches Brigitta at $U$, just as she is about to turn round.
Then what is $T$ if $T^{\prime}=6$ and $K=3$? We have \begin{equation} {T^{\prime}} = {K T} \quad (28) \end{equation}
so $6=3T$ and $T=2$.
Now look at what happens on Brigitta's return journey. What is $T^{\prime\prime}$? We have $T^{\prime}=6$, $K=1/3$ and \begin{equation} {T^{\prime}} = {K T^{\prime\prime}}, \quad (29) \end{equation} so $6=T^{\prime\prime}/3$ and $T^{\prime\prime}=18$. Therefore the total time measured by Albertina is $T+T^{\prime\prime}=20$ years, so she is 20 years older when they meet, whereas Brigitta is only 12 years older! So Albertina has aged by 8 years more than Brigitta!
So did you guess right????
What you can try doing on your own is some experiments with numbers. For example, you can imagine that Brigitta takes a journey of $10+10=20$ years say, and you can work out how fast she has to travel for Albertina to have aged by only one more year than she has (i.e. $21$ years has passed on Albertina's clock) when Brigitta returns. Perhaps you can work out whether it would be possible for there to be a realistic (i.e. achievable) space journey in which twins would age differently by a noticeable amount!
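If you would rather let a machine do the arithmetic, here is a sketch (again an addition, not part of the article; equation numbers refer to the article above) that reproduces the 20-versus-12 result and solves the suggested 10+10-year problem:

```python
import math

def k_factor(beta):
    return math.sqrt((1.0 + beta) / (1.0 - beta))

def stay_at_home_time(beta, t_leg):
    """Stay-at-home clock time for a trip of t_leg years each way at beta*c.

    K-calculus bookkeeping as in the text: T = t_leg/K before the signal
    leaves, T'' = t_leg*K after it, total T + T''.
    """
    K = k_factor(beta)
    return t_leg / K + t_leg * K

print(stay_at_home_time(0.8, 6.0))   # 20.0 years for Albertina vs 12 for Brigitta

# the 10+10-year puzzle: find beta with 10*(K + 1/K) = 21, i.e. K^2 - 2.1K + 1 = 0
K = (2.1 + math.sqrt(2.1**2 - 4.0)) / 2.0
beta = (K**2 - 1.0) / (K**2 + 1.0)   # invert via equation (19)
print(beta)                          # about 0.30, i.e. roughly 0.3c
```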
Now why is all this called a paradox? Remember what paradox means? We said earlier that if two people are moving with constant speeds (and hence constant speeds relative to each other), a lot of what happens is the same whether we consider the first stationary and the second moving, or the other way round. Applying that argument to Albertina and Brigitta, why can't we regard Brigitta in her space ship as the person who sits still, and Albertina on Earth sailing away and then coming back? Put like that, it sounds rather silly, but I hope you get the idea. Then we would expect Brigitta to be older than Albertina when they finally meet up. But we have already said that Albertina is older then. How could both these statements be true? That is the paradox!
Can you see what is wrong???
The point is that Brigitta is not moving with constant speed with respect to Albertina in her entire journey; she has two long sections of journey in which she does that, but in between there is a possibly short but very important section where she decelerates, stops and then accelerates in the opposite direction. This is what makes all the difference! Acceleration is a real phenomenon, as you probably know from the rides which make you feel sick at a fair! Do you know how it could be detected with a simple piece of equipment?
So what do you make of all this? It seems a bit unfair that Brigitta gets to travel and see more of the Universe but ends up younger than her stay-at-home twin sister - life does not always seem fair! But it should encourage you to travel - not necessarily in space! - and be adventurous!
## Reference
For more discussion of the twin paradox and the techniques described here, you can try looking at the book Flat and Curved Space-Times, by G. F. R. Ellis and myself (published by Oxford University Press in 1988). You could also look at the Mr. Tompkins books which introduce the concepts of special relativity in an amusing and approachable way.
You can find reviews of the Mr Tompkins books and order them from the Cambridge University Press website; for example, see The New World of Mr Tompkins: George Gamow's Classic Mr. Tompkins in Paperback.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 146, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953399121761322, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/photon+optics
|
# Tagged Questions
### Infinite reflection of light and the conservation of energy / momentum
First off, I confess I'm no physicist, but I have been asking people with more extensive knowledge this one question, without a definitive answer so far. Basically, I'm playing around with the idea ...
### Photon energy - momentum in matter
$E = h\nu$ and $P = h\nu/c$ in vacuum. If a photon enters water, its frequency $\nu$ doesn't change. What are its energy and momentum: $h\nu$? and $h\nu/c$? Since part of its energy and momentum ...
### polarization - quantum point of view
Polarization can easily be imagined in the classical model: the direction of the E vector. Is there any simple image for the polarization of a single photon?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344778656959534, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/65103?sort=votes
|
## Uniqueness of loop spaces
Suppose X is a loop space; by this we mean there is some space $Y$ with $\Omega Y \simeq X$.
Under what assumptions is (the homotopy type of) $Y$ unique?
As has been pointed out below, the homotopy type of $Y$ being determined uniquely is far from true in general. But for connected $Y$, are there conditions we can impose that make it so?
-
The question is, probably: «if $\Omega X_1$ and $\Omega X_2$ are homotopy equivalent, are $X_1$ and $X_2$ homotopy equivalent?» – Mariano Suárez-Alvarez May 16 2011 at 3:12
Don't have time to leave a full answer, but if the homotopy groups of X are concentrated in a narrow "stable" range (roughly the top nonzero group below about twice the dimension of the bottom nonzero group) then there is a unique homotopy type. Similarly if Y has very small dimension relative to its connectivity. One can compare the obstruction theory for $Map(Y_1, Y_2)$ with that for $Map(\Omega Y_1, \Omega Y_2)$, together with a comparison of the cohomology of $Y_1$ and $\Omega Y_1$, to get a fuller statement. – Tyler Lawson May 16 2011 at 3:57
Yes, of course you would want $Y$ to be connected to avoid trivialities. – Dr Shello May 16 2011 at 6:19
@Mariano: yes, right. The question is what assumptions are needed to make that true. – Dr Shello May 16 2011 at 6:25
You might also ask a slightly different question: "Given a homotopy equivalence of based loop spaces, how do I tell if it is a loop map?" – S. Carnahan♦ May 16 2011 at 8:31
## 2 Answers
As Ryan points out, if Y is allowed to be disconnected, then there is no hope, since the loop-space construction sees only the connected component of the basepoint. But even if Y is assumed to be connected, it is not unique. For instance, let G and H be two discrete groups whose underlying sets are in bijection, but which are not isomorphic. Then as (discrete) topological spaces, we have $G\simeq H$, and so both $K(G,1)$ and $K(H,1)$ are spaces Y such that $\Omega Y \simeq G \simeq H$. But $K(G,1)$ and $K(H,1)$ are not homotopy equivalent unless $G\cong H$ as groups.
What is true, however, is that if we remember the "up-to-coherent-homotopy" multiplication (i.e. "$A_\infty$-structure") on a loop space $\Omega Y$, then the connected space Y is characterized up to homotopy equivalence by $\Omega Y$ and this additional data. For there is a delooping functor "B" from $A_\infty$-spaces to connected spaces, which preserves homotopy equivalence, and such that $B\Omega Y \simeq Y$.
-
So this means in general for $Y$ connected, its homotopy type is not unique. But are there conditions that can be put on $Y$ to make it so? – Dr Shello May 17 2011 at 6:05
As others have pointed out, the generic case (whatever that should mean in this case) is that the loop structure on a loop space is not unique. However, things get quite interesting whenever we have a space that actually does have a unique loop structure. I highly recommend looking at:
Dwyer, Miller, Wilkerson: The homotopic uniqueness of $BS^3$, LNM 1298
and
Dwyer, Miller, Wilkerson: Homotopical uniqueness of classifying spaces. Topology 31 (1992), no. 1, 29–45.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364825487136841, "perplexity_flag": "head"}
|
http://meta.math.stackexchange.com/questions/1756/coping-with-abstract-duplicate-questions
|
# Coping with *abstract* duplicate questions.
Any longtime reader cannot help but notice that we get many abstract duplicate questions, e.g. this recent question on partial fraction computation, which is not essentially different from many other questions of the same shape, e.g. this question. Once you know how to solve one of these problems, you can solve them all. There are many classes of problems that frequently arise in reparametrized variants, e.g. divisibility problems using Fermat's little theorem, proving basic properties of gcds, etc. Does it make sense to try to prevent these minor variations from swamping the site? In a couple of years' time we could well have many hundreds of variants of such questions that are all essentially the same except for minor variation of parameters. Among other detrimental consequences, this greatly obfuscates search results. Certainly our user community has the expertise to appropriately classify and eliminate these "abstract" duplicates. Perhaps with a little ad-hoc add-on infrastructure, and with moderator support, we could address these issues before they get out of hand. Thoughts?
-
Maybe we add a new close reason: "Minor Variant". – Aryabhata Mar 10 '11 at 5:37
+1 for raising the issue. I think that trying to keep track of minor variants (I like that terminology better than "abstract" duplicate question) is both something that would be very valuable for the site and something that would require significant time and diligence on the part of moderators and experienced users. So, if others agree that this is a worthy goal, we should brainstorm about strategies to try to do this in an efficient and minimally time-consuming way. – Pete L. Clark Mar 10 '11 at 5:44
@Moron: But that doesn't help remove hundreds of closed minor variants that litter search results. It's a vicious cycle: question poser can't find an answer by searching because there are too many variations or closed threads littering the search results, so they post yet another minor variant. – Gone Mar 10 '11 at 5:44
@Bill: It is one of the many things we would need to do... – Aryabhata Mar 10 '11 at 5:46
@Pete, @All: Generally speaking, it's probably a good idea. It would require the "survivor variant" to have answers that address both the general strategy (so as to be useful for future minor variants) and specific application to the question at hand (which shows how the general strategy applies to a specific problem). – Arturo Magidin Mar 10 '11 at 16:40
How about at least linking all the minor variants together, so once someone finds one they can find them all without much trouble. This would save a lot of time, and it would be easier to refer between posts. – Eric♦ Mar 10 '11 at 21:05
To me the art of answering a question is as valuable as asking a question. Why not accept that such a website will keep going over the same things and let the constant flow of newcomers deal with repeated questions? This will allow the natural flow of people to get their hands on answering these widely recurrent questions. Trying to create permanent solutions isn't the point of a forum; people can collaborate on the wiki-proof project for that. So if we desire to keep this place young and vibrant, we'll have to accept re-runs of popular questions. – David Kohler Mar 31 '11 at 8:05
## 7 Answers
The current thinking is that subtly different variants of the same question be closed as duplicates of a more canonical, more general question and answer pair:
http://blog.stackoverflow.com/2011/01/the-wikipedia-of-long-tail-programming-questions/
If you keep seeing the same form of questions, whether it’s mod_rewrite rules on Server Fault, freezing computers on Super User, or how to use regular expressions to parse HTML, write a great, canonical answer, once and for all. Make it community wiki so that as many other people as possible can make it great. Work really hard on writing something that is clear, concise, and understandable by as wide an audience as possible.
Personally, I would express this sentiment as "old-timers are tired of answering what is essentially the same question in millions of tiny different variations".
Whenever you feel that is happening, I recommend approaching it as per the above.
-
I would suggest that:
• if there is a sufficiently-general form of the question, new variants should be closed as duplicate, but not deleted—this provides a way for someone searching for that variant to get to the more general result;
• if there is not a sufficiently-general form of the question, ask the general form of the question so that it can be answered in the general case (remember, you're allowed to ask and answer your own question as doing so provides valuable content) and future variants can be closed as duplicates, as above.
-
Are closed question deleted in general now? – draks ... Apr 7 '12 at 11:00
@draks: I don't know, nor have I been around enough lately to have an idea. Perhaps it would be best to ask that as its own question here on meta. – Isaac Apr 7 '12 at 18:02
(too long for comment)
I agree. It is not clear what should be done. Sometimes the problem could be fixed if the most general solution were posted. As an example, the following three threads are conceptually identical:
Limit of $\lim_{x \to\infty} 3\left(\sqrt{\strut x}\sqrt{\strut x-3}-x+2\right)$
Limit of algebraic function avoiding l'Hopital's rule
Limit of algebraic function $\ \lim_{x\to\infty} \sqrt[5]{x^5 - 3x^4 + 17} - x$
But here, the question can be generalized and solved. That is, for polynomials we can show: $$\lim_{x\rightarrow\infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0}}-x=\frac{a_{n-1}}{n}$$
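(For reference, here is a one-line sketch of why the general limit holds, using the standard expansion $(1+u)^{1/n}=1+\frac{u}{n}+O(u^2)$; this derivation is mine, not from the linked threads:) $$\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x = x\left(\left(1+\frac{a_{n-1}}{x}+O\!\left(\tfrac{1}{x^{2}}\right)\right)^{1/n}-1\right) = \frac{a_{n-1}}{n}+O\!\left(\tfrac{1}{x}\right)\xrightarrow[x\to\infty]{}\frac{a_{n-1}}{n}.$$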
Should they be merged? I definitely think so. The proof techniques in the answers of each post are exactly the same.
A harder question is what to do with all the Fermat's little theorem problems. They are all so similar, but different enough that there is no "universal case" that can be proven.
-
There is also the practical issue of merging: non-abstract answers given to such questions cannot be reasonably merged under one thread. I would say just closing (and possibly deleting) would be better than merging. – Willie Wong♦ Mar 10 '11 at 12:26
@Bill: I guess you are right, since the askers of those questions were certainly first year calc students wanting to understand the simplest possible approach. – Eric♦ Mar 10 '11 at 21:02
A request for comment:
Perhaps it will be worthwhile to have a two-pronged approach:
• Maintain a list of FAQs. This is not an FAQ in the sense of how to use the site, but an actual list of Frequently Asked Questions. This could be implemented as a Meta Thread (to be incorporated in/linked to from the FAQ on how to use this website), or perhaps as a tag (though for the latter it'd be much more reasonable for this to be a "restricted" tag, with a reputation limit before one can tag a question as such). To this list (or a separate one) we can also add all those silly puzzle questions that crop up every now and then, and those inevitable questions about 0.999999... = 1.
• Aggressively closing and possibly deleting such minor variants and duplicates. If I am not mistaken, 10K+ users (of which there are quite a few now on this site) can vote to have an already closed question deleted. So with a bit of help from the community we can certainly clean up the clutter.
To populate the list of FAQs, each question in it should be a good CW, abstract question/answer giving a canonical form of a question that often appears. Ideally, that question should also include links to some less abstract examples that have already appeared on this forum, so we address the abstract and the practical in one go.
Of course, this will require a lot of work from the community.
-
I thought this is what tag-wiki/faq is for. – Asaf Karagila Mar 10 '11 at 19:44
Why not just post the general questions on the regular site (not meta) but with the faq tag? (Each question separately, but tagged instead of one long hard to navigate thread) In any case, I feel this is a great suggestion. – Eric♦ Mar 11 '11 at 1:29
At least so far, I don't see this as a problem. It would be nice if a search turned up the earlier variants, but I don't know how to do that and the evidence is that the posters of these problems don't search anyway. I would support a relaxation of the wording of "exact duplicate" to "if you read this you should be able to figure it out", but the volume is not too bad. If I can find the previous answer I post a link, but often I can't.
-
I don't know what the moderator features are on the site, but it might be possible to merge the specific questions into the thread of the more general one under a category of "examples".
-
There is a lot of value to having multiple (if similar) examples of a certain type of question. Grouping them would be much better than deleting. – a little don Mar 3 '12 at 5:27
One of the reasons we get the same question over and over is people who are learning new concepts cannot tell if two problems are essentially the same. In a sense, what they are asking us is "what type of problem is this and how do I know it is that type?"
Deleting what looks, to trained eyes, to be "minor" variations will not stem the flow of questions... but grouping a large number of similar examples together just might.
Is there any way that we could do this?
-
Closing does not mean deleting. Once closed, the question asker can look at the linked section of the parent and see all the variations, if we ensure we usually choose the same parent every time... – Aryabhata Mar 3 '12 at 16:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578304886817932, "perplexity_flag": "middle"}
|
http://mathematica.stackexchange.com/questions/tagged/graphs-and-networks?sort=votes&pagesize=50
|
# Tagged Questions
Questions about handling graphs in Mathematica, graph theory, graph visualization, GraphPlot, the built-in Graph type and the Combinatorica` package.
3answers
638 views
### Automatically generating a dependency graph of an arbitrary Mathematica function?
Has anyone written a function to pull the function dependencies of a function? That is, it would be nice to have a function that returns a list of function dependencies as a set of rules, terminating ...
2answers
9k views
### How can I use Mathematica's graph functions to cheat at Boggle?
This question is inspired by a Stack Overflow question that I decided to solve using Mathematica. In addition to Mathematica, I thought I'd use some of the new version's graph-related functionality, ...
5answers
2k views
### Custom arrow shaft
Inspired by Sjoerd C. de Vries' nice answer to this question, and the desire to pimp a Graph I did with Mathematica recently I would like to know if there are ways to customize the arrow's shaft ...
1answer
371 views
### Finding the likeliest path in a Markov process
With Mathematica 9, we have the addition of various processes, among which the discrete Markov process. Given a transition probability matrix m, such a process is ...
1answer
349 views
### Reduce distances between vertices of graph to minimum possible?
This simple graph has vertex shapes of different sizes: ...
3answers
1k views
### Finding all shortest paths between two vertices
The built-in FindShortestPath and GraphDistance functions find the shortest path between two particular vertices in a graph. I ...
1answer
303 views
### Strategies for solving problems involving searches
As an example of a search-oriented problem, consider finding all constrained $n$-colorings of a graph. An "$n$-coloring" associates one of $n \ge 1$ colors to each vertex in such a way that no edge ...
1answer
574 views
### How to create regular (planar) graphs?
How to programmatically create and plot regular planar graphs with $k = 3, 4$ or $6$ (not hypercubes) and regular nonplanar graphs of $k = 8$ (see figure)? Note that what matters is the average ...
1answer
906 views
### Is there a way to convert an image into a Graph?
I'm trying to convert an image with several overlapping dots into a Graph. The goal is to be able to derive the Kirchhoff matrix for the randomly created "network ...
1answer
413 views
### Visualizing Rubik's Graph
After the August 2010 discovery that the diameter of the Rubik graph is 20, I wanted to make a way to visualize Rubik's graph. Since there are about $4.3 \times 10^{19}$ vertices in this graph, it is ...
3answers
438 views
### Finding a “not-shortest” path between two vertices
In designing a routine for making a simple three dimensional (5x5x5) labyrinth, I realized that my solutions (a solution is a labyrinth includes a single path from {1, 1, 1} to {5, 5, 5} in a 5 x5x5 ...
2answers
939 views
### Cycles of length N in a graph
If I have an undirected graph represented with an adjacency matrix, how can I find all the subgraphs which are a cycle of length N? I don't really know the math nor the programming language well, so ...
2answers
737 views
### How to plot planar graphs as such?
A planar graph is a graph that can be drawn in the plane such that no two edges cross. For example, the graph ...
4answers
503 views
### How can I replace bi-directional DirectedEdge pairs in a Graph with a single UndirectedEdge?
The Cayley graphs produced by Mathematica 8.0's CayleyGraph function represent actions that are their own inverses in an unconventional way: rather than using a ...
2answers
1k views
### How to play with Facebook data inside Mathematica?
I saw this post from Wolfram here and I would like to know how to import facebook data into Mathematica.
1answer
176 views
### Is UndirectedEdge[a,b] the same edge as UndirectedEdge[b,a]?
Is UndirectedEdge[a,b] the same edge as UndirectedEdge[b,a]? Then please consider this. ...
3answers
662 views
### How to generate a random tree?
Is it possible to generate a random tree without explicitly constructing a random adjacency matrix that satisfies tree properties? How about a random directed tree? Edit: incredible answer by ...
3answers
322 views
### How to get complete Documentation Center graph of guide pages?
On the very last image below you can see a typical path of walking through Documentation Center guide pages. What is the best way to get the graph data and visualize the whole structure of these ...
3answers
714 views
### Generating graphs interactively (GUI)
I want to create graphs interactively using a GUI. I thought of using a ClickPane[] environment. The code I have (in part borrowed from the Documentation) works ...
2answers
241 views
### Packed Graph or GraphPlot output with non-square layout?
Graph or GraphPlot produce square layouts for disconnected graphs: Graph[Table[i -> Mod[i^3, 100], {i, 1, 100}]] I'd like to get a rectangular layout with ...
1answer
413 views
### How to export and import Graphs with additional data?
I'm trying to export and then later import graphs with additional data added such as VertexLabels and EdgeLabels. When I export ...
1answer
263 views
### Convert GraphPlot[]s with many nodes into something that's human-understandable
This is more of an abstract/creative problem. I think everyone who has played around with GraphPlot, ended up with an image that looks like this, at least once in ...
3answers
1k views
### How to generate random directed acyclic graphs?
How to programmatically build random directed acyclic graphs (DAG)? I know about the AcyclicGraphQ predicate and the ...
1answer
411 views
### Graph vs GraphPlot
What is the difference (in purpose) between Graph and GraphPlot? Which function is best suited to which tasks? Background: I ...
1answer
299 views
### Need tips on improving this directed graph
First we define a function that returns the least odd prime factor. lopf[n_] := FactorInteger[n][[2, 1]] Then we craft a routine that performs the 3x+1 steps ...
2answers
445 views
### Graph layout on a grid
Is it possible for Mathematica to layout a graph so that the connections lie on a grid? Furthermore, for graphs with multiple edges per vertex, is it possible to specify the location on the vertex ...
3answers
727 views
### How to get graph layout that reflects edge weights?
My heart soared to discover that Mathematica 8 offers support for specifying a weighted adjacency matrix in WeightedAdjacencyGraph. But it seems that, regardless of the weights between edges, the ...
2answers
377 views
### How can I speed up the classic GA for graph coloring?
I'm trying to compute the chromatic number of this graph (which is 28): g = Import@"http://www.info.univ-angers.fr/pub/porumbel/graphs/dsjc250.5.col"; My genetic ...
1answer
280 views
### When to use built-in Graph/GraphPlot vs. Combinatorica
What are the pros and cons of using built-in Graph/GraphPlot (and related) types vs. types in the Combinatorica package?
4answers
325 views
### How can I remove B -> A from a list if A -> B is in the list?
I have a list of transformations like this: list = {"A" -> "B", "B" -> "A", "C" -> "D"} As this is used to plot an undirected graph with ...
3answers
396 views
### How to find all vertices reachable from a start vertex following directed edges?
How to find all vertices reachable from a given start vertex following directed edges, in a cyclic directed graph given as ...
3answers
511 views
### Creating diagrams for category theory
Lately I've been doing algebra and I have found myself drawing a bunch of diagrams when I attempt to solve a problem. Most of the diagrams are very simple so I thought, I bet I can do this in ...
3answers
336 views
### Determining all possible traversals of a tree
I have a list: B={423, {{53, {39, 65, 423}}, {66, {67, 81, 423}}, {424, {25, 40, 423}}}}; This list can be visualized as a tree using ...
2answers
313 views
### How to measure segment length and branch angle
I am trying to measure segment length and branch angle or bifurcation angle between each pair of segments. My image after thinning looks like this: ...
1answer
259 views
### Visiting nodes in a tree bottom up
I have a problem that can be reduced to this one: Suppose I have an unbalanced (almost random) tree. The leaves of the tree have one property already set, and all vertex (including the leaves) have ...
2answers
124 views
### What is the most convenient way to change options for Graph[] objects?
What is the most convenient way to change options such as VertexLabels in existing Graph objects? (Version 7 users note: ...
2answers
846 views
### Solving the Travelling Salesman Problem
I've been trying to find some kind of mathematical computer software to solve the Travelling Salesman Problem. The Excel Solver is able to do it, but I've noticed there is a built-in function in ...
2answers
220 views
### Comfortable Edge Labeling of Undirected Graph
I'm working on a program that finds special [called "interval"] edge-colorings for graphs. Output is as follows: ...
1answer
114 views
### How can I conveniently call igraph through RLink?
Mathematica has lots of functions for working with graphs, but this functionality is not yet as mature as the igraph library. It is useful to have access to igraph from Mathematica, both for ...
1answer
349 views
### Drawing Graph Products
I need to draw graphs that are Cartesian products of 2 graphs. I need them to look like this: . So each copy of the factors can have LinearEmbedding, but the whole ...
2answers
770 views
### Simple algorithm to find cycles in edge list
I have the edge list of an undirected graph which consists of disjoint "cycles" only. Example: {{1, 2}, {2, 3}, {3, 4}, {4, 1}, {5, 6}, {6, 7}, {7, 5}} Each ...
3answers
310 views
### How to recompute the layout of a Graph?
I asked a question about changing Graph properties in a similar way to how Show can change ...
2answers
683 views
### How to convert an image to a graph and get the positions of the edges?
I am trying to convert an image to a graph. My image is already skeletonized and looks pretty much like this: What I need to do is identify all vertices and edges, and in particular get a list of ...
2answers
374 views
### Calculating morphometric properties
Is it possible to calculate segment length, segment diameter and branch angles for three dimensional geometry. Geometry contains vertex points and polygon surfaces as ...
1answer
127 views
### Layout a second graph of the same nodes in the same way
I have a number of graphs with the same nodes. I would like to be able to somehow save the layout that results from the spring embedded results of one graph (the first image), and have the other ...
1answer
376 views
### How to divide a graph into connected components?
Suppose I have some edges: edges = {1 -> 2, 2 -> 3, 3 -> 1, 4 -> 5, 3 -> 6, 7 -> 8, 8 -> 9, 8 -> 10}; And I make a graph: ...
1answer
178 views
### Ulam's Spiral with Opperman's diagonals (quarter-squares)
First we craft a function to return the quadrant boundary based on Oppermann's Conjecture a[n_] := (Mod[n, 2] + n^2 + 2 n)/4 Then we create a few lists ...
1answer
351 views
### Use Mathematica to do a network analysis of Mathematica.SE
Recently Wolfram|Alpha announced that you could use it to analyze your Facebook site. How could I do a network analysis of questioners and answers here on Mathematica.SE?
2answers
195 views
### Listing subgraphs of G isomorphic to SubG
If I have an undirected graph G, how could I write a function in Mathematica to obtain a list of subgraphs of G that are isomorphic to some other undirected graph SubG? I'd like to learn how to ...
4answers
322 views
### Ordering vertices in GraphPlot
I'm new here, so forgive me if this question is not well-posed/duplicates an earlier question - although I've searched for similar questions without success. I'm trying to present a plot of ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222262501716614, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/1307/monte-carlo-methods-for-vanilla-european-options-and-itos-lemma?answertab=active
|
# Monte Carlo methods for vanilla European options and Ito's lemma
I understand that by applying Ito's lemma to the following SDE
$$dX=\mu\,X\,dt+\sigma\,X\,dW$$
one obtains a solution to the above SDE which is as follows:
$${X}\left( t\right) =\mathrm{X}\left( 0\right) \,{e}^{\sigma\,\mathrm{W}\left( t\right) +\left( \mu-\frac{{\sigma}^{2}}{2}\right) \,t}$$
I have been told that I can use either of these equations (the SDE or its solution) for applying Monte Carlo simulations to vanilla European options, although the second one converges faster than the first one.
Can someone confirm this statement?
Furthermore, one point remains unclear to me: What is Ito's lemma used for in quantitative finance?
-
## 5 Answers
The difference between the two is that the first will lead you to a discretization scheme of the process.
So you will have to simulate a whole (approximate) trajectory of $X$ (meaning by that $X'_{t_0},...,X'_{t_n}$) up to time $T$ (the expiry of your vanilla option) to get to $X'_T$, which is then only an approximation of $X_T$.
The second method is exact and gives you the law of $X_T$ in one step only.
I simply don't get your second question.
-
Thanks for your reply. Yes I could have formulated my second question in a simpler manner: what is Ito's lemma used for in quantitative finance? – balteo Jun 17 '11 at 8:05
If we have some function $f(a,b,c,...)$, where $a,b,c,...$ can be stochastic or otherwise, then Ito's lemma is used to find $df(a,b,c,...)$.
1)
You can simply do raw Monte Carlo. Consider a contingent claim maturing in $6$ months. Then for each $i$-th simulation you can calculate:
$S(T)_i = S(t)e^{(r-q-\frac12 \sigma^2)0.5 + \sigma \sqrt{0.5}\,z_i}$
where $z_i \sim N(0,1)$, and $\sqrt{\Delta t}\,z_i$ is equal in distribution to $W_{\Delta t}$.
2)
You can use the above methodology but create a sample path. For example you might want to generate a 6-point sample path. That is;
$S(t_i) = S(t_{i-1})e^{(r-q-\frac12 \sigma^2)\frac{0.5}{6} + \sigma \sqrt{\frac{0.5}{6}}\,z_i}$
for $i \in \{2,3,4,5,6,7\}$.
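A minimal Python sketch of methods 1) and 2); all parameter values below are made-up assumptions, not numbers from this answer:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the answer).
S0, r, q, sigma, T, K, N = 100.0, 0.03, 0.0, 0.2, 0.5, 100.0, 100_000
rng = np.random.default_rng(0)

# 1) Raw Monte Carlo: sample S(T) directly, one normal draw per simulation.
z = rng.standard_normal(N)
ST = S0 * np.exp((r - q - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# 2) Same exact law, built as a 6-point sample path
#    (useful when the payoff is path-dependent).
dt = T / 6
log_steps = (r - q - 0.5 * sigma**2) * dt \
    + sigma * np.sqrt(dt) * rng.standard_normal((N, 6))
ST_path = S0 * np.exp(log_steps.sum(axis=1))

# Either estimator prices a vanilla European call; both converge to Black-Scholes.
print(np.exp(-r * T) * np.maximum(ST - K, 0).mean())
print(np.exp(-r * T) * np.maximum(ST_path - K, 0).mean())
```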
3)
You can discretize the SDE itself using Euler-Maruyama.
$\Delta S(t_i) = a(t,S)\Delta t + b(t,S)\Delta W(t_i)$
4)
You can discretize the SDE using Milstein:
$\displaystyle \ \ \Delta S(t_i) = a(t,S)\Delta t + b(t,S)\Delta W(t_i) + 0.5b(t,S)\frac{\partial b(t,S)}{\partial S}\bigg((\Delta W(t_i))^2 - \Delta t\bigg)$
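A sketch of schemes 3) and 4) for the geometric Brownian motion case, where $a(t,S)=rS$, $b(t,S)=\sigma S$ and so $\partial b/\partial S=\sigma$ (again with made-up parameters):

```python
import numpy as np

# Illustrative parameters (assumptions).
S0, r, sigma, T, steps, N = 100.0, 0.03, 0.2, 0.5, 64, 100_000
dt = T / steps
rng = np.random.default_rng(1)

S_em = np.full(N, S0)   # 3) Euler-Maruyama state
S_mil = np.full(N, S0)  # 4) Milstein state
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal(N)
    # Euler-Maruyama: Delta S = a*dt + b*dW
    S_em = S_em + r * S_em * dt + sigma * S_em * dW
    # Milstein: adds the 0.5 * b * (db/dS) * (dW^2 - dt) correction term
    S_mil = (S_mil + r * S_mil * dt + sigma * S_mil * dW
             + 0.5 * sigma**2 * S_mil * (dW**2 - dt))
```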
5)
Consider the above methodologies to be the function $f(v)$, where $v$ are your random numbers. You can use a control variate $g(v)$ in the following fashion:
$\frac1N \sum_{i=1}^N [ f(v_i) - g(v_i) ] + E[g(v)]$.
In practice you want to use correlations and other stuff to improve the outcome; however, this is how it's presented to students. $E[g(v)]$ might be the closed-form BS price, $g(v_i)$ might be the Monte Carlo BS price for a simple instrument, $f(v_i)$ might be some really complicated instrument that's closely correlated with $g(.)$.
6)
You can use antithetic sampling. That is,
$\text{MC estimate} = \frac12 [ f(v) + f(-v)]$.
There are some technical conditions you need to satisfy to make this worth the extra computation.
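And a toy sketch of the variance-reduction ideas in 5) and 6); here $f$ and $g$ are stand-in functions of my own choosing, not anything specific from this answer:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(200_000)

f = lambda x: np.exp(0.3 * x)   # "complicated" payoff (toy stand-in)
g = lambda x: 1.0 + 0.3 * x     # correlated control variate with known mean
Eg = 1.0                        # E[g(Z)] = 1 for Z ~ N(0,1)

# 5) Control variate: (1/N) * sum(f - g) + E[g]
cv = (f(v) - g(v)).mean() + Eg

# 6) Antithetic sampling: average the estimator over v and -v
anti = (0.5 * (f(v) + f(-v))).mean()

print(cv, anti)  # both close to E[exp(0.3 Z)] = exp(0.045) ~ 1.046
```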
-
In quantitative finance, we sometimes find ourselves choosing a new stochastic model for what market variables are random, and how. For example, someone might decide that they like the SDE \begin{equation} dS = \mu\ S\ dt + \left( \frac{S_0}{S} \right)^{\frac32} \sigma\ S\ dW \end{equation} because they want to capture a leverage effect.
Now, this SDE may or may not have a closed-form solution. For example in your question, \begin{equation} X(t)=X(0)\ \exp{\left( \sigma W(t)+(\mu - \frac{\sigma^2}{2})t\right)} \end{equation} is the solution to the Black-Scholes SDE. On the other hand, I'm not even sure if the leverage SDE above has a solution.
Ito's Lemma is the mathematical tool we can use to prove that a potential solution to our SDE really satisfies the SDE. So in practice, that is where it is actively used in quantitative finance.
The lemma also underlies a lot of the stochastic calculus used in quantitative finance, for example the Girsanov theorem, but those uses are "hidden underneath" since the pure mathematics is reasonably mature by now.
To chime in on the question of convergence, if you possess a solution to your favorite SDE, then you can simulate terminal values $S_t$ of the process after macroscopic bits of time $t$. That then lets you quickly simulate, say, $M$ option values at expiration time $t$ and then average to form an estimate of their expected value at computational cost $O(M)$. If you do not possess a solution, then you must generate each $S_t$ by simulating a whole path of $S$ from time 0 to time $t$ in $J$ small increments of size $\Delta t$, using your SDE itself, at computational cost $O(J \times M)$.
-
To answer the more general question that seems to be giving you trouble, Ito's lemma is the stochastic version of the chain rule of standard calculus.
What is it useful for? That's like asking what the chain rule is useful for. Calculus is useful in quantitative finance, and in particular, for stochastic processes, you need to use the stochastic version of calculus. To compute the derivatives in this stochastic calculus, you need a chain rule, and that's what Ito's lemma provides.
I suspect you never really thought out what the chain rule in normal calculus is useful for. Once you understand that, it's clear what you need Ito's lemma for.
-
Difficult to determine what the main question really was, but good answer. – SRKX♦ Nov 26 '12 at 19:02
To add to the answer of TheBridge:
I understand your second question in the sense of whether you could use Ito's lemma for all stochastic processes. This is definitely not the case: it can only be used for processes with bounded quadratic variation (e.g. the Wiener process). You should google this term or look it up in Wikipedia: http://en.wikipedia.org/wiki/Quadratic_variation
-
Thanks for your reply too. My second question was not clearly formulated. I meant to ask: What is Ito's lemma used for in quantitative finance? – balteo Jun 17 '11 at 8:08
@balteo You should edit your question to be more clear about what you meant to ask. – chrisaycock♦ Jun 17 '11 at 14:24
@chrisaycock: done – balteo Jun 18 '11 at 11:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909547746181488, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/306327/where-are-the-values-of-the-sine-function-coming-from/306332
|
# Where are the values of the sine function coming from?
In high school, I was taught that I could obtain any sine value with some basic arithmetic on the values of the following image:
But I never really understood where these values were coming from. Some days ago I started to explore it, but I couldn't discover it. After reading for a while, I remembered that the sine function is:
$$\sin=\frac{\text{opposite}}{\text{hypotenuse}}$$
Then I thought that I just needed to calculate $\frac{1}{x}$ where $0 \leq x \leq 1$, but it gave me no good results. Then I thought that perhaps I could express it not as a proportion of the opposite and hypotenuse, but as the ratio between slices of the circumference, for example: circumference $=\pi$, then divide it by $4$ (to obtain the slice from $0$ to $90$ degrees). Then I came up with: $x/ \frac{\pi}{4}$ where $0\leq x \leq \frac{\pi}{4}$, but it also didn't work. The best guess I could make was $\sqrt{x/ \frac{\pi}{4}}$; the result is in the following plot:
The last guess I made seems to be (at least visually) very similar to the original sine function, it seems it needs only to be rotated but from here, I'm out of ideas. Can you help me?
-
are you interested in the values of $\sin$ on your image, or in the value of $\sin$ for any $x$? – Dominic Michaelis Feb 17 at 17:19
@DominicMichaelis Values for any $x$. – Gustavo Bandeira Feb 17 at 17:32
Is your question answered? If yes, could you be so kind to accept an answer? – Dominic Michaelis Feb 17 at 18:51
In respectful disagreement with @Dominic, I think a questioner should wait a full day before accepting any answer, to judge among the various answers to identify the most helpful one. – Lubin Feb 17 at 19:11
## 7 Answers
All of the values in your picture can be deduced from two theorems:
1. The Pythagorean theorem: If a right triangle has sides $a,b,c$ where $c$ is the hypotenuse, then $a^2+b^2=c^2$
2. If a right triangle has an angle of $\frac{\pi}{6} = 30^\circ$, then the length of the side opposite to that angle is half the length of the hypotenuse.
Both can be proven with elementary high school geometry.
Let's see how this works for Quadrant I (angles between $0$ and $90^\circ$), as the rest follows from identities.
1. $\sin 0 = 0$; this is clear from the definition.
2. $\sin 90^\circ = 1$; less intuitive because it breaks the triangle, but $\sin 90^\circ =\cos0$, and $\cos 0 = 1$ because the adjacent side and the hypotenuse coincide when the angle is $0$.
3. $\sin 45^\circ = \frac{1}{\sqrt{2}}$; this corresponds to an isosceles triangle, and if we set the sides to be $1$, then by the Pythagorean theorem, the hypotenuse is $\sqrt{2}$.
4. $\sin30^\circ = \frac{1}{2}$; this follows immediately from theorem 2 above.
5. $\sin60^\circ = \frac{\sqrt{3}}{2}$; if we take a $30^\circ-60^\circ-90^\circ$ triangle, and set the side opposite to the $30^\circ$ angle to be $1$, then the hypotenuse is $2$ and the side opposite to the $60^\circ$ angle satisfies $x^2 + 1 = 2^2$, so its length is $\sqrt{3}$ - and thus $\sin 60^\circ = \frac{\sqrt{3}}{2}$.
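As an aside, the five values derived above are easy to verify numerically (a quick sketch, not part of the original answer):

```python
import math

# Numeric check of the special sine values derived above.
for deg, exact in [(0, 0.0), (30, 0.5), (45, 1 / math.sqrt(2)),
                   (60, math.sqrt(3) / 2), (90, 1.0)]:
    assert math.isclose(math.sin(math.radians(deg)), exact, abs_tol=1e-12)
```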
-
This is petty much how I would have answered the question. – Lubin Feb 17 at 19:08
Alternatively, you can take an equilateral triangle, break it in half and find $\sin\left(30^\circ\right)$ and $\sin\left(60^\circ\right)$ using the Pythagorean theorem. – Ian Mateus Feb 17 at 19:17
The hypotenuse is $\sqrt{2}$? I thought I should consider that the hypotenuse is always one, due to this page on Stillwell's Mathematics and Its History. Now I'm confused. – Gustavo Bandeira Feb 24 at 17:05
@GustavoBandeira For a given triangle, say the one with angles $(45^\circ, 45^\circ, 90^\circ)$, we can "change the units" however we like. If we call the lengths of the sides $1$, then the length of the hypotenuse is $\sqrt{2}$, and if we call the length of the hypotenuse $1$, the lengths of the sides would be $\frac{1}{\sqrt{2}}$. It is true that when defining trigonometric functions on a circle, the hypotenuse corresponds to the radius and it usually makes sense to call it $1$, but the results are the same either way. – Alfonso Fernandez Feb 24 at 21:09
As in Alfonso Fernandez's answer, the remarkable values in your diagram can be calculated with basic plane geometry. Historically, the values for the trig functions were deduced from those using the half-angle and angle addition formulae. So since you know 30°, you can then use the half-angle formula to compute 15, 7.5, 3.75, 1.875, and 0.9375 degrees. Now use the angle addition formula to compute 0.9375° + 0.9375° = 2*0.9375°, and so on for 3*0.9375°, 4*0.9375°...
These would be calculated by hand over long periods of time, then printed up in long tables that filled entire books. When an engineer or a mariner needed to know a particular trig value, he would look up the closest value available in his book of trig tables, and use that.
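Here is a small Python sketch of that historical table-building process; the choice of a 30° seed and five halvings is my own illustration:

```python
import math

# Start from the exact values sin(30deg) = 1/2, cos(30deg) = sqrt(3)/2.
s, c, angle = 0.5, math.sqrt(3) / 2, 30.0

# Half-angle formulas (valid in the first quadrant):
# sin(a/2) = sqrt((1-cos a)/2), cos(a/2) = sqrt((1+cos a)/2)
for _ in range(5):  # 30 -> 15 -> 7.5 -> 3.75 -> 1.875 -> 0.9375 degrees
    s, c = math.sqrt((1 - c) / 2), math.sqrt((1 + c) / 2)
    angle /= 2

# Angle addition: sin(a+b) = sin a cos b + cos a sin b (similarly for cos),
# applied repeatedly to tabulate sin(k * 0.9375deg).
table, sa, ca = {}, 0.0, 1.0
for k in range(1, 33):
    sa, ca = sa * c + ca * s, ca * c - sa * s
    table[k * angle] = sa

print(table[30.0], math.sin(math.radians(30.0)))  # both 0.5 up to rounding
```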
Dominic Michaelis points out that in higher math the trig functions are defined without reference to geometry, and this allows one to come up with explicit formulae for them. You may reject this as mere formalist mumbo-jumbo, but conceptually I find that the university-level definitions for the trig functions make much more sense than the geometric ones, because it clears up the mystery of why these functions turn up in situations that have nothing to do with angles or circles. So eventually you may lose your desire to have the values computed from the geometrical definition.
Of course, if you're going to be using the geometrical definition anyway, you could also just grab a ruler and a protractor and measure away all night, and compute a table of trig values that way.
One final note: you're still using the "ratio of sides of a triangle" definition for the trig functions. I strongly recommend you abandon this definition in favor of the circular definition: $\sin(\theta)$ is the height of an angle $\theta$, divided by the length of the arm of the angle, $\cos$ is the same for the width of an angle, and $\tan$ is the slope of the arm of the angle. The reason why I recommend you use this definition is because, while it's as conceptually meaningful as the triangular one (once you think about it for a second), it allows you to easily see where the values for angles greater than 90° are coming from. The triangular definition is so limited that I personally find it destructive to even bother teaching in school; I wonder if it wouldn't be easier to jump right in with the circular definition. I know it held me back for years.
-
Yes, it seems you got some time stuck like me. For your comment on formalism, I really have no problem with them - I wanted to understand the source of the values at any cost. – Gustavo Bandeira Feb 17 at 18:47
[Images from Wikipedia. ] $$\sin x = \frac ah\quad \cos x = \frac bh \quad \tan x = \frac ab$$
And see how this relates to the unit circle, where $$\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}} = \frac y1 = y$$ $$\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac x 1 = x$$
So in your image, the values $(x, y)$ on the unit circle depends on the angle of interest: in particular, $(x, y) = (\cos \theta, \sin \theta)$:
These values of $x = \cos\theta, y =\sin\theta$ and $h = 1$ (the radius of the unit circle, and the hypotenuse of the corresponding right triangle) can then be computed using the Pythagorean Theorem: $$x^2 + y^2 = h^2 = 1$$ just as $$\sin^2x + \cos^2x = 1$$
You're on your way if you combine this theorem with the fact that when a right triangle has an angle of $\theta = \frac{\pi}{6} = 30^\circ$, then the length of the side opposite to that angle is half the length of the hypotenuse (the y-value of the triangle with hypotenuse 1, in the unit circle = 1/2). Hence, $y = \sin\left(30^\circ\right) = 1/2$.
For example, knowing this, we can find $$x^2 + y^2 = 1 \iff \cos^2(30^\circ) + \sin^2(30^\circ) = 1 \iff \cos^2(30^\circ) + (1/2)^2 = 1$$ $$\iff \cos^2(30^\circ) = 3/4 \implies \cos(30^\circ) = \sqrt 3/2$$
-
The drawings from Wikipedia's Sine entry are published under Creative Commons licence that contains the condition of attribution. Since these have two different authors, I assume that at least one of them is not yours. Reusing of available work is a good practice, but please abide the rules on which those pictures are shared. – dtldarek Feb 17 at 17:43
Thank you, @dtldarek, I should have been more careful with attributing the images; thanks for having done so. I am hardly the author of any of them, and certainly did not intend to suggest I was! – amWhy Feb 17 at 17:46
+1 Greeeeat! It looks like a poem! – Babak S. Feb 18 at 3:58
The values of the sine function can be calculated by $$\sin(x)=\sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}=x-\frac{x^3}{1\cdot 2\cdot 3}+\frac{x^5}{1\cdot 2 \cdot 3 \cdot 4 \cdot 5} \mp \dots$$ In university the $\sin$ function is introduced without geometric motivation; it is defined via the exponential function (I hope you know this one) as $$\sin(x)=\frac{1}{2i} \cdot (e^{ix}-e^{-ix})$$ where $i$ is the imaginary unit.
Maybe another formula is better motivated, but it's really complicated to make it rigorous. The main idea is that you can write a polynomial as a product of its zeros and a number -- for example $$3(x-1)(x+2)=3x^2 +3x-6$$ Now you can try the same with the $\sin$ function, but as $$x \cdot (x-\pi)(x+\pi) \cdots$$ is $\infty$ for nearly all $x$, you write it a bit differently. In fact $$\sin(x)=x\cdot \prod_{k=1}^\infty \left(1-\frac{x^2}{k^2\pi^2}\right)$$ (note I used $(a+b)\cdot(a-b)=a^2-b^2$). This one is better motivated by the geometric interpretation, via the zeros of the $\sin$ function.
As the comments rightly point out, there is a connection. Euler's formula is $$\exp(ix)=\cos(x)+i \sin(x)$$ and a number in the complex plane can be written as $$z=|z|\cdot e^{i \varphi}$$ where $$\varphi= \arctan\left(\frac{\Im(z)}{\Re(z)}\right)$$ and $\Im(z)$ is the imaginary part of $z$ and $\Re(z)$ is the real part of $z$.
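A minimal sketch of evaluating the series above numerically (the truncation at a fixed number of terms is my own choice):

```python
from math import factorial, pi

def sin_taylor(x: float, terms: int = 12) -> float:
    # Partial sum of sum_{k>=0} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(terms))

print(sin_taylor(pi / 6))  # ~0.5        = sin 30 degrees
print(sin_taylor(pi / 4))  # ~0.70710... = sin 45 degrees
```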
-
It's impossible to obtain the sin values with the geometric motivation then? – Gustavo Bandeira Feb 17 at 17:06
i am trying it's not purely geometric – Dominic Michaelis Feb 17 at 17:10
The factorial should be outside the parentheses in your summation formula. – Clayton Feb 17 at 17:10
I have no idea how you crafted "In university the sin function is introduced without geometric motivation" – I admit, that there is no word "triangle" or "hypotenuse" in those formulae, but I cannot agree with you. Surely $e^{it}$ is a circle and $\sin t = \Im(e^{it})$ is exactly the same definition that children learn at school (just using different names). – dtldarek Feb 17 at 17:22
@dtldarek: Had this been in an answer I would have upvoted it. The connection with the complex unit circle is surely the perspective that unifies this topic. – DWin Feb 17 at 21:27
This question is really a classic question for me.
The first thing you want to know is how do you measure the angles. I will talk in terms of a "clock" to ease textual explanation, with 1pm - 12pm denoting the position of, let's say a second hand.
Consider that, at 3pm, the angle is $0^\circ$. To move up a few degrees, go anti-clockwise. The first angle you will see is $30^\circ$. $\sin30^\circ = \frac 1 2.$ Since the hypotenuse is also the radius of the circle, it takes the value of $1$. Since $\sin = \frac{\text{opposite}}{\text{hypotenuse}}$, the blue dot's vertical distance from the $x$-axis is definitely 0.5
Continue this in an anti-clockwise walk and you can see, at 12pm, you would have moved $90^\circ$. Continue down this track: at 9pm, it would be a half-turn, which explains the $180^\circ$... and so on. We know that the $\sin$ graph is cyclical... so if you have an angle greater than a full revolution, i.e. $360^\circ$, just subtract revolutions until the range $0^\circ$ to $360^\circ$ is reached.
That is how you read this circle.
-
What's with the "Haha..." at the beginning? Is the question funny? – Peter Tamaroff Feb 17 at 23:13
The question isn't funny, but the good old days when I was learning this 10 years ago were. – bryansis2010 Feb 18 at 1:38
If we go into the minor detail of how we get the values of sin, cos, tan, that is the domain of numerical analysis, in which we have different methods to get these values; one important method I remember is Newton-Raphson.
I studied numerical analysis two years back. However, if you really need to find out, you can search in this area, because this is the origin.
See Newton's method (Wikipedia)
-
$\cos(\phi)$ is the $x$ value of a point at angle $\phi$, and $\sin(\phi)$ is the $y$ value of a point at angle $\phi$, on the unit circle (at radius 1).
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 90, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391369223594666, "perplexity_flag": "head"}
|
http://headinside.blogspot.com/2012/11/bayes-theorem.html
|
## Bayes' Theorem
Published on Sunday, November 11, 2012 in controversy, innumeracy, math, self improvement, videos
Ever wonder what happens to those amazing breakthroughs you hear about on the news, but never hear about again? Somehow, when they're finally released, the amazing qualities of, say, that new wonder drug, never seem to reduce the suffering the way most people hoped.
Look through the reports on the test results of those breakthroughs, and you'll frequently find one line that says p < 0.05. In other words, the tests indicate that the reported results had only a 5% chance of happening randomly.
If I flip a coin 20 times, and heads shows up 15 or more times (in other words, greater than 14 times), we can work out that there is roughly a 2.07% chance of that happening at random. Reporting on this, we'd note that p < 0.05, and use this to justify examining whether the coin is really fair.
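That 2.07% figure is just a binomial tail; here is a quick check (my own verification, not from the original post):

```python
from math import comb

# P(15 or more heads in 20 fair flips) = sum of binomial tail probabilities.
p = sum(comb(20, k) for k in range(15, 21)) / 2**20
print(p)  # ~0.0207, i.e. roughly 2.07%
```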
That works great for events dealing with pure randomness, such as coins, but how do you update the probabilities for non-random factors? In other words, how do you take new knowledge into account as you go? This is where Bayes' theorem comes in. It's named after Thomas Bayes, who developed it in the mid-1700s, but the basic idea has been around for some time.
You should be familiar, of course, with the basic formula for determining the probability of a targeted outcome:
$Probability=\frac{targeted \ outcome(s)}{total \ possibilities}$
The following video describes the process of Bayes' theorem without going into any more mathematics than the above formula, using the example of an e-mail spam filter:
To get into the mathematical theorem itself, it's important to understand a few things. First, Bayes' theorem pays close attention to the differences between the event (an e-mail actually being spam or not, in the above video) and the test for that event (whether a given e-mail passes the spam test or not). It doesn't assume that the test is 100% reliable for the event.
BetterExplained.com's post An Intuitive (and Short) Explanation of Bayes’ Theorem takes you from this premise and a similar example, all the way up to the formula for Bayes' theorem. It's interesting to note that it's effectively the same as the classic probability formula above, but modified to account for new knowledge.
The following video uses another example, and is also simple to follow, but delves into the math as well as the process. Understanding the process first, and then seeing how the math falls into place helps make it clear:
The tree structure used in this video helps dramatize one clear point. Bayes' theorem allows you to see a particular result, and make an educated guess as to what chain of events led to that result.
The p < 0.05 approach simply says “We're at least 95% certain that these results didn't happen randomly.” The Bayes' theorem approach, on the other hand, says “Given these results, here are the possible causes in order of their likelihood.”
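To make the contrast concrete, here is a minimal sketch of the Bayesian update for the spam-filter example; the three rates are made-up illustrative numbers, not figures from the videos:

```python
# Prior and test characteristics (all made-up illustrative numbers).
p_spam = 0.20        # P(spam): prior fraction of mail that is spam
p_flag_spam = 0.95   # P(flagged | spam): the test's sensitivity
p_flag_ham = 0.02    # P(flagged | not spam): false-positive rate

# Bayes' theorem: P(spam | flagged) = P(flagged | spam) P(spam) / P(flagged)
p_flag = p_flag_spam * p_spam + p_flag_ham * (1 - p_spam)
p_spam_given_flag = p_flag_spam * p_spam / p_flag
print(p_spam_given_flag)  # ~0.922: given the flag, spam is the likeliest cause
```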
If I shuffle a standard 52-card deck, probability tells us that the odds of the top card being an Ace of Spades is 1/52. If I turn up the top card and show you that it's actually the 4 of Clubs, our knowledge not only changes the odds of the top card being the Ace of Spades to 0/52, but gives us enough certainty that we can switch to employing logic. Having seen the 4 of Clubs on top and knowing that all the cards in the deck are different, I can logically conclude that the 26th card in the deck is NOT the 4 of Clubs.
We can switch from probability to logic in this manner because we've gone from randomness to certainty. What if I don't introduce certainty, however? What if I look at the top card without showing it to you, and only state that it's an Ace?
This is the strength of Bayes' theorem. It bridges the gap between probability and logic, by allowing you to update probabilities based on your current state of knowledge, not just randomness. That's really the most important point about Bayes' theorem.
There's much more to Bayes' theorem than I could convey in a short blog post. If you're interested in a more in-depth look, I suggest the YouTube video series Bayes' Theorem for Everyone. I think you'll find it surprisingly fascinating.
### Post Details
Posted by Pi Guy on Nov 11, 2012
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629607200622559, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/138331/tensor-product-definition-in-wikipedia?answertab=votes
|
Tensor product definition in Wikipedia
In Wikipedia - http://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_vector_spaces, tensor product space is explained as the following:
$$F(V\times W) = \left\{\sum_{i=1}^n \alpha_i e_{(v_i, w_i)} \ \Bigg| \ n\in\mathbb{N}, \alpha_i\in K, (v_i, w_i)\in V\times W \right\}$$
equivalence relation: \begin{align} e_{(v_1 + v_2, w)} &\sim e_{(v_1, w)} + e_{(v_2, w)}\\ e_{(v, w_1 + w_2)} &\sim e_{(v, w_1)} + e_{(v, w_2)}\\ ce_{(v, w)} &\sim e_{(cv, w)} \sim e_{(v, cw)} \end{align}
and $V \otimes W = F(V \times W) / R$
where $R$ is the space created by equivalence relations.
So, the first question is: why is the free vector space defined as a sum? I cannot see how a vector space can be a sum. The second question is: how can we take a quotient space when there is only $e_{(v_i,w_i)}$ and not $e_{(v_i,w_j)}$?
If there is anything I was mistaken, please inform me.
Thanks.
-
If you don't like "formal sums", let $\mathcal{F}$ denote the set of all functions $V \times W \to K$. This is a vector space in an obvious way. Think of $e_{(v,w)}$ as just another name for the function $V \times W \to K$ that maps $(v,w)$ to $1$ and everything else to $0$. The addition and scalar multiplication appearing in the definition of $F(V \times W)$ are now those of $\mathcal{F}$ and $F(V \times W)$ is thus defined as a certain subspace of $\mathcal{F}$. FWIW, I recommend learning this material from a single textbook (any textbook) and not Wikipedia. – leslie townes Apr 29 '12 at 6:56
^ With the caveat that the functions $V\times W\to K$ are zero at all but finitely many elements of $V\times W$. To go between this functional interpretation and the constructive one, I find it helpful to think of it as a function that sends elements $(v,w)$ to their coefficient in the free vector space. (This is all prior to the quotient process to create the tensor product of course.) – anon Apr 29 '12 at 7:42
3 Answers
The free vector space construction is not just a single sum. Here's an intuitive picture of how I often think of the free vector space construction. We have a set $X$ - note that if this set has any additional structure defined on it, like a binary operation and some axioms, we decide to forget about it for the purpose of our construction. So, either naturally or artificially, we will assume that the elements of $X$ are not things that can truly be added together, like $X=\{\rm \square,~dog,~2\}$. Furthermore, we have a field $K$. We then pretend we can add things in $X$ together, and multiply them by scalars from $K$, and we end up with formal $K$-linear combinations of the form
$$\sum_{x\in X}\alpha_xx= \alpha_{\square} \square+\alpha_{\rm dog}\mathrm{dog}+\alpha_{2} 2.$$
There is a problem with this though. That numeral "$2$" there is supposed to have no structure to it, as part of our construction process, yet it also designates an element of $K$ too! We don't want to confuse ourselves, so let's put the elements of $X$ as subscripts of a generic "$e$." We then have linear combinations of the form
$$\sum_{x\in X}\alpha_x e_x=\alpha_\square e_\square +\alpha_{\rm dog} e_{\rm dog}+\alpha_2e_2.$$
Keep in mind the $\alpha$ coefficients are scalars from $K$. On top of this, we're well within our rights to now "pretend" that the set of all of these formal sums satisfies every single vector space axiom. We have by virtue of imagination created a new vector space out of $X$. (You will notice it is isomorphic to any other vector space of dimension three, or any other free vector space generated by a set of three elements. As I suggested earlier, the actual contents of $X$ are moot, they ultimately play the part of indexing a basis for our space.)
Remark. A vector space does not necessarily allow infinite sums. (Infinite sums can be defined in finite-dimensional vector spaces, or on finite-dimensional subspaces of vector spaces, over a field that is also a complete metric space, so that infinite sums make sense in the scalar field; look up Hilbert or Banach spaces.) Thus in our formal sums, when we write $x\in X$ in the subscript, we run the risk of having an infinite sum if $X$ is itself infinite! To prevent this occurrence, we may instead write
$$\sum_{i=1}^n \alpha_i e_{x_{\Large i}} :~~ x_i\in X.$$
These are the actual forms our desired formal combinations will now have.
At this point it gets tricky, because we are going to create the free vector space out of another vector space (in fact, out of a Cartesian product of vector spaces). At this point in our discussion, we must completely forget about the fact that $V$ and $W$ are vector spaces and have algebraic structure; temporarily to us they are just sets with no further facts about them available for use. As before, we have our set of formal linear combinations:
$$\left\{\sum_{i=1}^n \alpha_i e_{(v_i,w_i)}: n\in\Bbb N, (v_i,w_i)\in V\times W \right\}$$
Note: These $(v_i,w_i)$ are not the 2-tuples of basis vectors from $V$ and $W$. They are just a set of $n$ arbitrary vectors from $V\times W$. In fact we have designated no basis for $V$ or $W$. The $v_i$ and $w_i$, for each $i$, can be any two elements from $V$ and $W$ respectively.
Now, finally, after all of this, we can remember the vector space structure of $V\times W$ and create the relations given on Wikipedia, and quotient out by them. After the quotient, we can rename the $e_{(v,w)}$ objects (these were the basis vectors when we were still at the stage of a free vector space) as $v\otimes w$. (It gets tedious writing everything in subscripts!)
There are a few very important differences between $V\times W$ and $V\otimes W$ that need to be identified: even though the pure tensors $v\otimes w$ have two components, each taken from $V$ and $W$ resp., our tensor product will have sums of pure tensors (so, e.g., $u\otimes v+x\otimes y$) that cannot always be written as pure tensors, because the two components of pure tensors are independently linear. For example, in $V\times W$, we can split $(v+x,w+y)$ into $(v,w)+(x,y)$ (you cannot split the first component without also having to split the second), whereas in the tensor product we have to split each individually:
$$\begin{array}{c c c} (v+x)\otimes(w+y) && =v\otimes(w+y) & +x\otimes(w+y) \\ && =v\otimes w +v\otimes y & +x\otimes w+x\otimes y. \end{array}$$
Finally, scalar multiplication of $u\otimes v$ does not affect both components at once; only one or the other, resulting in $c(u\otimes v)=(cu)\otimes v=u\otimes(cv)$. In the Cartesian product, scalar multiplication affects both components simultaneously, where $c(u,v)=(cu,cv)$. Hope this helps!
-
+1 for the "dog" thing. XD I like your answer. – Patrick Da Silva Apr 29 '12 at 7:43
What they mean is that you take the free vector space on $V\times W$. Which means you take an indexed set $\{e_{(v,w)}\}_{(v,w)\in V\times W}$ and you consider all formal (finite) sums $\displaystyle \sum_{(v,w)\in V\times W}\alpha_{(v,w)}e_{(v,w)}$--these are the ELEMENTS of the free vector space, not the vector space itself.
So, if you had the vector spaces $\mathbb{Z}_3$ and $\mathbb{Z}_3^2$, a typical element of $\mathbb{Z}_3(\mathbb{Z}_3\times\mathbb{Z}_3^2)$ is $2e_{(1,(0,1))}+1e_{(2,(1,2))}$.
As for your second question, I could see how that could be slightly confusing. Ignore their notation for a second and look at mine. There is no "dependence" on the indices in the sum, as it sort of appears in their sum (even though really there isn't)--just as there is no "dependence" in the indices of the relation you are quotienting by.
-
The free vector space is not defined as a sum but rather as a set of sums. Namely, all linear combinations of elements of the basis $\{e_{(v,w)}\}$ with coefficients in $K$.
This is not much different from what you call a linear span, i.e. the vector space you get if you start with a set of vectors. For example, if we are in $\mathbb R^3$, the vector space spanned by $B = \{\left ( \begin{array}{c} 1 \\ 0 \\ 0\end{array} \right ), \left ( \begin{array}{c} 0 \\ 1 \\ 0\end{array} \right ) \}$ is $\mathbb R^2$ and you can denote it by $\{ \sum_{i=1}^2 r_i e_i \mid e_i \in B, r_i \in \mathbb R \}$.
As for your second question: the quotient relation is defined in this way because we want the tensor product to be bilinear. Note that we can define any equivalence relation on any set we like. The set we are taking the quotient over in this case is all linear combinations of elements in $\{(v,w) \mid v \in V, w \in W \}$.
Please let me know if this answers your questions, I am not sure I understand your second question and I'm happy to elaborate.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458563923835754, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/134408/irreducibility-of-polynomials?answertab=oldest
|
# Irreducibility of polynomials
This is a very basic question, but one that has frustrated me somewhat. I'm dealing with polynomials and trying to see if they are irreducible or not. Now, I can apply Eisenstein's Criterion and deduce, for some prime $p$, whether a polynomial over $\mathbb{Z}$ is irreducible over $\mathbb{Q}$, and I can sort of deal with basic polynomials that we can factorise easily.
However I am looking at the polynomial $t^3 - 2$. I cannot seem to factor this down, but a review book is asking for us to factorise into irreducibles over a) $\mathbb{Z}$, b) $\mathbb{Q}$, c) $\mathbb{R}$, d) $\mathbb{C}$, e) $\mathbb{Z}_3$, f) $\mathbb{Z}_5$, so obviously it must be reducible in one of these.
Am I wrong in thinking that this is irreducible over all? (I tried many times to factorise it into any sort of irreducibles but the coefficients never match up so I don't know what I am doing wrong).
I would really appreciate if someone could explain this to me, in a very simple way. Thank you.
-
## 4 Answers
Over $\mathbb{Z}$, since the polynomial is primitive (no common factors of the coefficients) and of degree $3$, it either has a root or is irreducible. Since the polynomial has no rational roots, it is irreducible over $\mathbb{Z}$.
Over $\mathbb{Q}$, we just need to check for roots. There aren't any, so the polynomial is irreducible over $\mathbb{Q}$ as well.
Over $\mathbb{R}$, the polynomial has at least one real root, $\sqrt[3]{2}$. This gives $$f(x) = x^3-2 = (x-\sqrt[3]{2})(x^2 + \sqrt[3]{2}\,x + \sqrt[3]{4}).$$ Now we need to check whether the quadratic is reducible or irreducible over $\mathbb{R}$. The discriminant is $$\left(\sqrt[3]{2}\right)^2 - 4\sqrt[3]{4} = \sqrt[3]{4}-4\sqrt[3]{4}=-3\sqrt[3]{4}\lt 0.$$ Since the discriminant is negative, the quadratic is irreducible over $\mathbb{R}$. So this gives you the factorization into irreducibles in $\mathbb{R}$.
To get the factorization in $\mathbb{C}$, just factor the quadratic: $$f(x) = (x-\sqrt[3]{2})(x-\omega\sqrt[3]{2})(x-\omega^2\sqrt[3]{2})$$ where $\omega=\frac{-1+i\sqrt{3}}{2}$ is a primitive cubic root of unity. You can get this either by using the quadratic formula on $x^2+\sqrt[3]{2}x+\sqrt[3]{4}$, or by noting that the three roots of $x^3-2$ are the three complex cubic roots of $2$. If $\alpha$ and $\beta$ are two cubic roots of $2$, then $\beta/\alpha$ is a cubic root of $1$ (just cube it to see it equals $1$); so if $\alpha\neq \beta$, then $\beta=\alpha\omega$ or $\beta=\alpha\omega^2$.
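Numerically, one can check that these three roots really do multiply back to $x^3-2$; a sketch with NumPy:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                    # primitive cube root of unity
roots = [2 ** (1 / 3) * w ** k for k in range(3)]

# Each root cubes to 2 (up to floating-point error):
assert all(np.isclose(r ** 3, 2) for r in roots)

# Multiplying out the linear factors recovers the coefficients of x^3 - 2:
poly = np.array([1.0 + 0j])
for r in roots:
    poly = np.polymul(poly, np.array([1.0, -r]))
print(np.round(poly, 10))                     # [1, 0, 0, -2] up to rounding
```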
Over $\mathbb{Z}_3$, we have the "freshman's dream": $(a+b)^3 = a^3+b^3$, because the characteristic is $3$. Since $2^3\equiv 2\pmod{3}$, we get $$x^3-2 = x^3-2^3 = (x-2)^3$$ so the factorization into irreducibles is $x^3-2 = (x-2)(x-2)(x-2)$.
Over $\mathbb{Z}_5$, we have $3^3\equiv 2\pmod{5}$, so $x-3$ divides $x^3-2$. We have $$x^3-2 = (x-3)(x^2+3x+4).$$ Now we check the quadratic. The discriminant is $9-16 = -7 \equiv 3\pmod{5}$. Since $3$ is not a square modulo $5$, the discriminant is not a square in $\mathbb{Z}_5$, so the quadratic is irreducible. This gives you the factorization in $\mathbb{Z}_5$.
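Both finite-field factorizations are easy to double-check by multiplying out coefficient lists mod $p$; a small self-contained sketch:

```python
def polymul_mod(f, g, p):
    """Multiply coefficient lists (lowest degree first), reducing mod p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

# Z_3: (x - 2)^3. Note -2 = 1 mod 3, so the factor is [1, 1], i.e. x + 1.
lin = [1, 1]
print(polymul_mod(polymul_mod(lin, lin, 3), lin, 3))
# -> [1, 0, 0, 1], i.e. x^3 + 1 = x^3 - 2 over Z_3

# Z_5: (x - 3)(x^2 + 3x + 4). Note -3 = 2 mod 5.
print(polymul_mod([2, 1], [4, 3, 1], 5))
# -> [3, 0, 0, 1], i.e. x^3 + 3 = x^3 - 2 over Z_5
```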
-
Hi there, thank you for such a wonderfully detailed explanation, it was extremely helpful! I was wondering, should the quadratic in Z5 read (x^2+3x+4) rather than (x^2+2x+4)? And hence the discriminant be 9-16=-7 congruent to 3 mod 5? Also, as for the complex factorisation, could you please explain to me why we are going about it by introducing the primitive root of unity (I understand what the primitive root of unity is, etc.)? Thank you. – user29553 Apr 20 '12 at 15:51
@user29553: Yes, you are right about the error in $\mathbb{Z}_5$. As for the factorization in $\mathbb{C}$, it just so happens that the two complex roots of $x^2+\sqrt[3]{2}x+\sqrt[3]{4}$ are given by $$\frac{-\sqrt[3]{2}\pm\sqrt{\sqrt[3]{4}-4\sqrt[3]{4}}}{2} = \frac{-\sqrt[3]{2}\pm\sqrt[3]{2}\sqrt{-3}}{2} = \sqrt[3]{2}\left(\frac{-1\pm\sqrt{-3}}{2}\right)$$ – Arturo Magidin Apr 20 '12 at 16:18
Thanks for the clarification, I understand the use of the primitive cube root of unity now :) – user29553 Apr 20 '12 at 16:59
Note that since your polynomial $f(t)=t^3-2$ is a cubic, it's either irreducible or has a linear factor, and hence a root. This should simplify things somewhat.
Over $\mathbb{Z}$, $f$ is irreducible by Eisenstein's criterion with $p=2$. Thus $f$ is irreducible over $\mathbb{Q}$ by Gauss's lemma.
Over $\mathbb{R}$, $f$ has a root (namely $\sqrt[3]{2}$) and so is reducible. Similarly over $\mathbb{C}$.
Over $\mathbb{Z}_3$ and $\mathbb{Z}_5$ (which I assume mean $\mathbb{Z}$ modulo $3$ and $5$, not the $3$- and $5$-adics), there are only a few possible roots to check. In $\mathbb{Z}_3$: $0^3=0$, $1^3=1$, $2^3=8=2$, so $t=2$ is a root. In $\mathbb{Z}_5$: $0^3=0$, $1^3=1$, $2^3=8=3$, $3^3=27=2$, so $t=3$ is a root.
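Because $\mathbb{Z}_p$ is finite, this root check is trivial to automate; a sketch:

```python
# Brute-force root search for t^3 - 2 over Z_p: only p candidates to try.
for p in (3, 5):
    roots = [t for t in range(p) if (t**3 - 2) % p == 0]
    print(f"roots of t^3 - 2 in Z_{p}: {roots}")
# -> [2] in Z_3 and [3] in Z_5, matching the computations above
```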
-
Eisenstein's criterion is all very nice, but I think it should be noted that you don't even need it here. You only need to check for roots in $\mathbb{Z}$ to decide irreducibility over $\mathbb{Z}$ and $\mathbb{Q}$, but a root has to be a divisor of the absolute term 2. Also, make sure to use Gauss's lemma only on primitive polynomials (obviously $f$ is primitive, since it's monic). – m_l Apr 20 '12 at 14:52
Hi there, thank you for such a prompt reply! I'm terribly sorry but I have edited my question, what I meant to ask is how do we factorise f(t) into irreducibles over the above given list. – user29553 Apr 20 '12 at 14:52
$t^3-2=(t-\sqrt[3]{2})(t^2+(\sqrt[3]2)t+\sqrt[3]4)$
-
2
...over $\mathbb{R}$, yes. – m_l Apr 20 '12 at 14:47
@m_l Thanks for your supplement – Yangzhe Lau Apr 20 '12 at 15:30
If you find a root $t=a$ then $t-a$ is a factor of the original polynomial. This means (equating coefficients of $t^3$):
$$t^3-2 = (t-a)(t^2+bt+c)$$
Equating coefficients we get that $ac=2$ (constant term) and $-a+b=0$ (quadratic term), so you can compute $b$ and $c$ and complete the factorisation. You know the linear term will work out because you have checked that $a$ is a root, but this can be used to check your arithmetic.
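Worked through in $\mathbb{Z}_5$ with the root $a=3$: $b=a$ from the vanishing $t^2$ coefficient and $c=2a^{-1}$ from the constant term. A quick sketch (`pow(a, -1, p)` computes the modular inverse in Python 3.8+):

```python
# Complete t^3 - 2 = (t - a)(t^2 + bt + c) over Z_5, given the root a = 3.
p, a = 5, 3
b = a % p                     # t^2 coefficient: -a + b = 0, so b = a
c = (2 * pow(a, -1, p)) % p   # constant term: a * c = 2, so c = 2 / a
print(b, c)                   # 3 4 -> t^2 + 3t + 4, matching the first answer
```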
-
Thank you for explaining this! – user29553 Apr 20 '12 at 15:08
Btw, I now understand how we can factorise this into irreducibles over R, but what about C? If there is no imaginary part, is it still a valid factorisation in C? Sorry if that is too basic a question. – user29553 Apr 20 '12 at 15:11
Is an Integer also a Rational Number? Is a Real Number also a Complex Number? Since $\mathbb{C}$ strictly contains $\mathbb{R}$, a quadratic which is irreducible over $\mathbb{R}$ might factorise over $\mathbb{C}$, but you will know how to find the roots of a quadratic over $\mathbb{C}$. – Mark Bennet Apr 20 '12 at 15:13
It can be expressed as such, yes. I think I understand your meaning, thank you :) – user29553 Apr 20 '12 at 15:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420247673988342, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-geometry/164670-functional-series-uniform-convergence-question.html
|
# Thread:
1. ## Functional Series and Uniform Convergence Question
Let $a \in (0,1)$. Show that the functional series
$\displaystyle\sum_{j=0}^{\infty}(-t^2)^j$ where $t \in [-a,a]$
is uniformly convergent with the limit function
$f(t) = \frac{1}{1+t^2}$
I have absolutely no clue where to go from here.
Any help???
2. Originally Posted by garunas
Let $a \in (0,1)$. Show that the functional series
$\displaystyle\sum_{j=0}^{\infty}(-t^2)^j$ where $t \in [-a,a]$
is uniformly convergent with the limit function
$f(t) = \frac{1}{1+t^2}$
I have absolutely no clue where to go from here.
Any help???
Merely note that $|f_j(t)|=|t|^{2j}\leqslant a^{2j}$ for $t\in[-a,a]$, and since $\displaystyle \sum_{j=0}^{\infty}a^{2j}$ converges (a geometric series with ratio $a^2<1$), it follows by the Weierstrass M-test that $\displaystyle \sum_{j=0}^{\infty}f_j(t)$ converges uniformly on $[-a,a]$. To prove what it sums to, merely note that $\displaystyle \frac{1}{1-\left(-t^2\right)}=\sum_{j=0}^{\infty}\left(-t^2\right)^j$, the geometric series, valid since $\left|-t^2\right|\leqslant a^2<1$.
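A quick numerical illustration of the uniform bound: the sup-error of the $N$-th partial sum on $[-a,a]$ is dominated by the geometric tail $\sum_{j>N}a^{2j}=a^{2(N+1)}/(1-a^2)$. A sketch:

```python
import numpy as np

a = 0.9
t = np.linspace(-a, a, 2001)
f = 1 / (1 + t**2)

for N in range(0, 50, 10):
    # Partial sum S_N(t) = sum_{j=0}^{N} (-t^2)^j
    S = sum((-t**2) ** j for j in range(N + 1))
    sup_err = np.max(np.abs(f - S))
    tail = a ** (2 * (N + 1)) / (1 - a**2)  # Weierstrass tail bound
    print(f"N={N:2d}  sup error={sup_err:.3e}  <=  bound={tail:.3e}")
```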
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.860271155834198, "perplexity_flag": "head"}
|