url stringlengths 17-172 | text stringlengths 44-1.14M | metadata stringlengths 820-832 |
---|---|---|
http://mathoverflow.net/questions/31714/is-tor-always-torsion
|
Is Tor always torsion?
Question: Is the following statement true?
Let $R$ be an associative, commutative, unital ring. Let $M$ and $N$ be $R$-modules. Let $n\geq 1$. Then $Tor_n^R(M,N)$ is torsion.
By " $Tor_n^R(M,N)$ is torsion" I mean that every of its elements is a torsion element. Maybe I want to assume that $R$ is an integral domain.
Remark: The above statement is true if $R$ is a principal ideal domain (then $Tor_n^R$ vanishes for $n\geq 2$) and $M$ and $N$ are finitely generated (then we can apply the structure theorem).
-
By the way, is my question in some sense ill-posed, if $R$ has zero divisors? $R$'s being an integral domain seems crucial at least for Sasha's and David's approaches. – Rasmus Jul 13 2010 at 18:20
1
Rasmus, I think you are right. If R is not a domain, sometimes "torsion" means being killed by a nonzero divisor. If that's the case, then Tor(M,N) may not be torsion. As an example, take $R=k[x,y]/(x^2)$ and $M=R/(x)$. Then $Tor_i(M,M)$ is either $0$ or $M$, which is not torsion in that sense. – Hailong Dao Jul 13 2010 at 18:46
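(An added check of this example, not from the original thread: over $R=k[x,y]/(x^2)$ the module $M=R/(x)$ has the periodic free resolution $$\cdots \xrightarrow{\;x\;} R \xrightarrow{\;x\;} R \longrightarrow R/(x) \longrightarrow 0,$$ which is exact because the kernel and the image of multiplication by $x$ are both the ideal $(x)$. Tensoring with $M$ kills multiplication by $x$, so every differential becomes zero and $Tor_i^R(M,M)=M$ for all $i\geq 1$. The non-zero-divisors of $R$ are exactly the elements $a(y)+b(y)x$ with $a\neq 0$, and no such element kills a nonzero element of $M\cong k[y]$, so $M$ is indeed not torsion in that sense.)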
3 Answers
$Tor$ commutes with extension of scalars, hence (if $R$ is an integral domain and $K$ is its field of fractions), we have $$Tor_n^R(M,N) \otimes_R K = Tor_n^K(M\otimes_R K,N\otimes_R K).$$ The right-hand-side vanishes for $n\ge 1$, because $K$ is a field. Hence $Tor$ vanishes after tensoring with $K$, which means that $Tor$ is torsion.
-
Do you have a reference or explanation for "Tor commutes with extension of scalars"? – Rasmus Jul 13 2010 at 16:26
11
Tor does not commute with extension of scalars, unless the extension of scalars is flat. The localization which you need in this case is flat, though. – Mariano Suárez-Alvarez Jul 13 2010 at 16:47
1
In the derived sense, it commutes always. Just because the tensor product commutes with extension of scalars: $(M\otimes_R R')\otimes_{{R'}} (N\otimes_R R') \cong (M\otimes_R N)\otimes_R R'$. – Sasha Jul 13 2010 at 20:17
1
But your formulas are not about derived tensor products but their homologies... – Mariano Suárez-Alvarez Jul 14 2010 at 7:19
2
Tensor product commutes with extension of scalars (which in fact is a tensor product itself), hence their derived functors also commute --- $(M\otimes_R^L R')\otimes_{{R'}}^L (N\otimes_R^L R') \cong (M\otimes_R^L N)\otimes_R^L R'$. In the case of $R' = K$, as it was mentioned by Mariano, $R' = K$ is flat over $R$, so the derived product can be replaced with the usual one in the corresponding places. This gives $(M\otimes_R K)\otimes_K^L (N\otimes_R K) \cong (M\otimes_R^L N)\otimes_R K$. Taking the cohomology, we get $Tor_n^K(M\otimes_R K,N\otimes_R K) \cong Tor_n^R(M,N)\otimes_R K$. – Sasha Jul 14 2010 at 14:55
Some thoughts:
1. Since Tor commutes with colimits, one can reduce to the finitely generated case.
2. By choosing projective resolution, we can reduce to $\mathrm{Tor}_1^R$.
3. If $M = R/r$ is cyclic, we have $\mathrm{Tor}_1^R(R/r, N) = {}_rN$ the $r$-torsion in $N$.
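(An added check of point 3, not from the original thread, assuming $r$ is a non-zero-divisor so that $0 \to R \xrightarrow{\;r\;} R \to R/r \to 0$ is a free resolution of $R/r$: applying $(-)\otimes_R N$ and taking homology gives $$\mathrm{Tor}_1^R(R/r,N)=\ker\bigl(N \xrightarrow{\;r\;} N\bigr)={}_rN, \qquad \mathrm{Tor}_0^R(R/r,N)=N/rN,$$ so every element of $\mathrm{Tor}_1^R(R/r,N)$ is killed by $r$ and is in particular torsion.)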
-
Could you please elaborate on 2.? – Rasmus Jul 13 2010 at 15:54
Isn't 2 just connected with Tor and Ext dropping in dimension if you replace the right one of its argument modules with a projective cover of it? So you could do things like $Tor_n(M,N) = Tor_{n-1}(M,\Sigma N)$ (likely to be wrong in details - I have no books around me now!) – Mikael Vejdemo-Johansson Jul 13 2010 at 15:59
Yes. Set $K = \mathrm{Frac} \ R$.
Lemma: Let $\ldots \to C_2 \to C_1 \to C_0$ be a complex of $R$-modules. Suppose that $C_{\bullet} \otimes_R K$ is exact (but not necessarily surjective at $C_0$). Then $H_k(C_{\bullet})$ is $R$-torsion for $k>0$.
Proof: Let $v \in C_k$ with $dv=0$. So $d(v \otimes 1)=0$. By the exactness of $C_{\bullet} \otimes_R K$, there is $u \in C_{k+1} \otimes_R K$ with $du=v \otimes 1$. Write $u=\sum w_i \otimes (f_i/g_i)$, with $f_i/g_i \in K$ and $w_i \in C_{k+1}$. Set $g=\prod g_i$ and $w=\sum \left( \prod_{j \neq i} g_j \right) f_i w_i$. Then $dw$ and $gv$ have the same image in $C_k \otimes_R K$, so $dw-gv$ is torsion, say $h(dw-gv)=0$ with $h \neq 0$; then $d(hw)=hgv$, so $[v]$ is $hg$-torsion in $H_k(C_{\bullet})$. (When $C_k$ is torsion-free, as in the application below, one simply has $dw=gv$.) QED
Take resolutions $A_{\bullet} \to M$ and $B_{\bullet} \to N$ by free $R$-modules. Then $\mathrm{Tor}_{\bullet}(M,N)$ is the homology of the complex formed by collapsing the double complex $A_{\bullet} \otimes_R B_{\bullet}$. Note that $\left( A_{\bullet} \otimes_R B_{\bullet} \right) \otimes_R K \cong (A_{\bullet} \otimes_R K) \otimes_K (B_{\bullet} \otimes_R K)$.
Since $A_{\bullet}$ is exact (in positive degrees), so is $A_{\bullet} \otimes_R K$. Thus $A_{\bullet} \otimes_R K$ breaks up as a direct sum of complexes of the form $\ldots \to 0 \to K \to K \to 0 \to \ldots$, and the complex $\ldots \to 0 \to K$, with the $K$ in the last position. (This uses the Axiom of Choice; I suspect you should be able to avoid it, but I don't see how right now.) The complex $B_{\bullet} \otimes_R K$ breaks up into pieces of the same kind. So the double complex breaks up into squares $\begin{smallmatrix} K & \to & K \\ \downarrow & & \downarrow \\ K & \to & K \end{smallmatrix}$, vertical strips $\begin{smallmatrix} K \\ \downarrow \\ K \end{smallmatrix}$, horizontal strips $\begin{smallmatrix} K & \to & K \end{smallmatrix}$ and, in position $(0,0)$, some isolated copies of $K$.
Only summands of the last type contribute to the homology of the totalized double complex, and they sit in degree zero; so the total complex satisfies the hypotheses of the lemma and we are done.
-
$A_\bullet\otimes K$ is a resolution of $M\otimes K$ as a $K$-module, and the same with $B_\bullet\otimes K$, so the complex $(A\otimes B)\otimes K$ computes $\mathrm{Tor}^K(M\otimes K,N\otimes K)$. The vanishing in positive degrees follows from $K$'s being a field. – Mariano Suárez-Alvarez Jul 13 2010 at 16:13
«computes $\mathrm{Tor}^K(M\otimes K,N\otimes K)$» – Mariano Suárez-Alvarez Jul 13 2010 at 16:14
You're right; that's a cleaner way to finish. – David Speyer Jul 13 2010 at 16:19
(Not that it avoids AoC, I guess) In fact, the cleanest would be to remark that $\mathrm{Tor}$ commutes with localization (which is, essentially, what you did), so that $\mathrm{Tor}^R(M,N)_{(0)}=\mathrm{Tor}^K(M\otimes K,N\otimes K)$, which is zero in positive degrees. – Mariano Suárez-Alvarez Jul 13 2010 at 16:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 92, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222463369369507, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/5382/is-it-safer-to-encrypt-twice-with-rsa
|
# Is it safer to encrypt twice with RSA?
I wonder if it's safer to encrypt a plain text with RSA twice than it is to encrypt it just once. It should make a big difference if you assume that the two private keys are different, and that the only way used to crack it is brute force. I submitted these theories to my teacher, but he claims that a double encryption doesn't make it any safer. I didn't follow his arguments entirely, so I decided to ask here.
So, if I encrypt a message with one key once, and then encrypt the resulting ciphertext once more with a different key, does this make the encryption any safer?
EDIT: My teacher said that "it doesn't get safer with a double encryption, at least not if n is the same and e is different." This is the part I don't follow, since you'd still need both p and q to derive the two different private keys that this would produce. I have made a few calculations and I still do not quite understand. Why would the particular statement my teacher sent me mean that it doesn't get safer?
-
1
One should note the second public key would have to be larger than the first public key to guarantee correct operation. – Thomas Nov 15 '12 at 13:27
1
For that matter, why not just encrypt with a longer key to begin with? – Stephen Touset Nov 15 '12 at 19:38
@StephenTouset: This is a theoretical problem, not a practical. Therefore the option to choose a longer key is irrelevant, but thanks for the input. – Psyberion Nov 16 '12 at 15:33
4
– CodesInChaos Nov 16 '12 at 16:04
@CodesInChaos: Thank you, I'll take a look at that! – Psyberion Nov 16 '12 at 19:43
## 3 Answers
Well, think about it this way. If breaking one encryption with brute force will take longer than the lifetime of the universe, are you any safer with an encryption scheme that will take twice the lifetime of the universe? No. The first encryption cannot be broken. Adding a second encryption just adds computation overhead with no real benefit.
Think about it this way: if it is estimated to take 500 years for a prisoner to chew through the bars on his prison cell to escape, is the public any safer if we add a second set of bars so that it will take 1000 years to chew through the two sets before the prisoner can escape? Not really.
UPDATE
Given the update in the question, I thought I'd update.
So, you fix an $n$ and choose $e_1$ and $e_2$ as public exponents and compute $d_1$ and $d_2$ as the private exponents.
To encrypt, you are proposing $(m^{e_1})^{e_2}\bmod{n}$ and wondering why this is not stronger than just $m^{e_1}\bmod{n}$ in a brute-force attack[*].
So, you haven't given detail as to what the "brute-force" attack is, so let's look at a few options.
1. Factoring $n$. If I factor $n$ using a brute-force attack, I then use the factorization to compute $d_1$ and $d_2$. Computing both $d_1$ and $d_2$ is not much more work than just computing $d_1$, since you already have the factorization.
2. Instead of factoring $n$, what if you try to brute force $d_1$ and $d_2$? Recall that $d_i$ is chosen such that $e_i d_i\equiv 1\pmod{\varphi(n)}$. Furthermore, $(m^{e_1})^{e_2}=m^{e_1e_2}$. Raise that to $d_1d_2$ and you get $m$ back. Therefore, what you really need to brute force is the single effective exponent $d_1d_2\bmod{\varphi(n)}$, which is no larger than an ordinary private exponent. So the search space is essentially the same as for a single encryption, and asymptotically nothing is gained.
3. Brute force only $d_1$ then factor. It turns out if you know $d_1$ you can easily factor $n$ then use the factorization to compute $d_2$. (This comes from @CodesInChaos comment).
Any other brute force options you had in mind?
[*] My description of double encrypted RSA here is assuming textbook RSA. For padded RSA (which is what you find in the real world), points 1 and 3 are still valid, 2 however is not.
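As a small illustration of the textbook-RSA situation described above (an added sketch, not from the original answer; the primes, exponents, and variable names are toy assumptions), double encryption with the same modulus collapses to a single exponentiation, and the product of the two private exponents reduced mod $\varphi(n)$ behaves like one ordinary private exponent:

```python
# Toy textbook-RSA demo: encrypting twice with the same modulus n
# is the same as encrypting once with exponent e1*e2, and
# d_eff = (d1*d2) mod phi(n) -- an ordinary-sized exponent -- decrypts it.
# Illustrative only: tiny primes, no padding, not secure. Requires Python 3.8+ for pow(x, -1, m).

p, q = 1009, 1013                 # toy primes (an attacker wouldn't know these)
n = p * q
phi = (p - 1) * (q - 1)

e1, e2 = 17, 19                   # two distinct public exponents, coprime to phi
d1 = pow(e1, -1, phi)             # corresponding private exponents
d2 = pow(e2, -1, phi)

m = 123456
c = pow(pow(m, e1, n), e2, n)     # "double" encryption

assert c == pow(m, e1 * e2, n)    # same as single encryption with exponent e1*e2
assert pow(c, d1 * d2, n) == m    # decrypting with both private exponents works
d_eff = (d1 * d2) % phi           # effective single private exponent
assert pow(c, d_eff, n) == m      # ...and it is no bigger than a normal d
print("effective exponent bit length:", d_eff.bit_length())
```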
-
Furthermore, if the prisoner is given a hacksaw, he can cut through the two sets of bars pretty quickly anyway. Even if brute force is the only way to break algorithm X today, it may not be in ten years from now. This is why we don't see people boasting RSA-65536 keys - it's pointless and just serves to inconvenience the user. – Thomas Nov 15 '12 at 14:02
I do follow your arguments, and I agree with them. Although, I was looking for a more mathematical explanation. I've added some info to the question to make it clearer. – Psyberion Nov 16 '12 at 15:32
@Psyberion, I've added some mathematical arguments given your update. – mikeazo♦ Nov 16 '12 at 16:20
3
Not sure if you can call $(m^{e_1})^{e_2}$ encrypting twice since that only works with textbook RSA. With real RSA, you'd add padding before each encryption, making the intermediate ciphertext too large for a single exponentiation of the second encryption. – CodesInChaos Nov 16 '12 at 16:27
@CodesInChaos, good point. I always seem to assume textbook RSA. Need to get all the intricacies of real RSA more firmly ingrained in my brain :) – mikeazo♦ Nov 16 '12 at 16:31
Double encryption/decryption with RSA is equivalent to a single encryption/decryption with the two public/private exponents multiplied together (squared, if the same key is used twice). It doesn't make brute-forcing the private exponent harder. Moreover, it doesn't complicate the factorization of $N$.
So, it is not more secure.
-
Not true if you include padding. Also, the OP specifically says that $e$ is different. – mikeazo♦ Nov 19 '12 at 11:55
It will be impossible with padding due to encrypted message size. – Pavel Ognev Nov 19 '12 at 17:28
Assume that you use an IND-CPA/CCA-secure asymmetric encryption scheme that leaks some kind of information from the ciphertext. By re-encrypting the message, you encode it into a different form and you achieve all-or-nothing security. That means that in order to reveal a single block, the attacker has to break the message in its entire form. A second point is that you actually re-encrypt something if you have an intermediate node that transforms messages encrypted under key $k_1$ into messages encrypted under key $k_2$. This is so-called proxy re-encryption and is done for delegation of operations and transitivity purposes.
-
If it is IND-CPA secure, the ciphertext can't leak information -- that is self-contradictory. I'm not entirely sure I follow the rest of the answer, but it doesn't look right to me. – D.W. Nov 16 '12 at 5:45
Does the fact that a tiny amount of information about the plaintext is discovered from the ciphertext violate IND-CPA security? I am wondering, as I thought IND-CPA referred to fully recovering the plaintext – curious Nov 16 '12 at 12:47
4
Breaking IND-CPA security does not require fully recovering the plaintext -- it just requires you to learn enough about it to have a better than random chance of telling the encryptions of two plaintexts (of your choosing) apart. – Ilmari Karonen Nov 16 '12 at 15:07
1
"Does the fact that a tiny amount of information is discovered for the plaintext from the ciphertext violates the IND-CPA security?" - In short: yes, it does. – D.W. Nov 16 '12 at 19:33
@D.W. Is there a quantitative assessment of how large this partial plaintext recovery has to be in order to count as a CPA break? – curious Nov 19 '12 at 10:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282653331756592, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2007/08/20/the-underlying-category/?like=1&source=post_flair&_wpnonce=8cb43e0c56
|
# The Unapologetic Mathematician
## The Underlying Category
In the setup for an enriched category, we have a locally-small monoidal category, which we equip with an “underlying set” functor $V(C)=\hom_{\mathcal{V}_0}(\mathbf{1},C)$. This lets us turn a hom-object into a hom-set, and now we want to extend this “underlying” theme to the entire 2-category $\mathcal{V}\mathbf{-Cat}$.
Okay, we could start by finding “underlying” analogues for each piece of the whole structure, but there’s a better way. We just take the setup of the “underlying set” from our monoidal categories and port it over to our 2-categories of enriched categories.
In particular, there’s a $\mathcal{V}$-category $\mathcal{I}$ that has a single object $I$ and $\hom_\mathcal{I}(I,I)=\mathbf{1}$. This behaves sort of like a “unit $\mathcal{V}$-category”, and we define $(\underline{\hphantom{X}})_0=\hom_{\mathcal{V}\mathbf{-Cat}}(\mathcal{I},\underline{\hphantom{X}})$. This is a 2-functor from $\mathcal{V}\mathbf{-Cat}$ to $\mathbf{Cat}$, and it assigns to an enriched category the “underlying” ordinary category. Let’s look at this a bit more closely.
A $\mathcal{V}$-functor $F:\mathcal{I}\rightarrow\mathcal{C}$ picks out an object $F(I)\in\mathcal{C}$, while a $\mathcal{V}$-natural transformation $\eta:F\rightarrow G$ consists of the single component $\eta_I:\mathbf{1}\rightarrow\hom_\mathcal{C}(F(I),G(I))$ — an element of $V(\hom_\mathcal{C}(F(I),G(I)))$. Thus the underlying category $\mathcal{C}_0$ has the same objects as $\mathcal{C}$, while $\hom_{\mathcal{C}_0}(A,B)$ is the “underlying set” of $\hom_\mathcal{C}(A,B)$.
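(An added illustration, not in the original post: take $\mathcal{V}=\mathbf{Ab}$ with its usual tensor product, so the unit object is $\mathbb{Z}$ and $V(A)=\hom_{\mathbf{Ab}}(\mathbb{Z},A)\cong A$ as a set. A $\mathcal{V}$-category is then a preadditive category, and passing to the underlying category $\mathcal{C}_0$ keeps the same objects and the same hom-sets while simply forgetting the abelian-group structure on each $\hom_\mathcal{C}(A,B)$.)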
Given a $\mathcal{V}$-functor $T:\mathcal{C}\rightarrow\mathcal{D}$ we get a regular functor $T_0:\mathcal{C}_0\rightarrow\mathcal{D}_0$. It sends the object $F:\mathcal{I}\rightarrow\mathcal{C}$ of $\mathcal{C}_0$ to the object $T\circ F:\mathcal{I}\rightarrow\mathcal{D}$ of $\mathcal{D}_0$. Its action on arrows of $\mathcal{C}_0$ (natural transformations of functors from $\mathcal{I}$ to $\mathcal{C}$) shouldn’t be too hard to work out.
Given a $\mathcal{V}$-natural transformation $\eta:S\rightarrow T$ of $\mathcal{V}$-functors we get a natural transformation $\eta_0:S_0\rightarrow T_0$. Its component $\eta_{0A}:S(A)\rightarrow T(A)$ in $\mathcal{D}_0$ is an element of an “underlying hom-set” — an arrow from $\mathbf{1}$ to the appropriate hom-object. But this is just the same as the component $\eta_A$ of the $\mathcal{V}$-natural transformation we started with, so we don’t really need to distinguish them.
At this point, some of these conditions tend to diverge. The ordinary naturality condition for a transformation between functors acting on the underlying categories turns out to be weaker than the $\mathcal{V}$-naturality condition for a transformation between $\mathcal{V}$-functors, for example. In general if I start talking about $\mathcal{V}$-categories then everything associated to them will be similarly enriched. If I mean a regular functor between the underlying categories I’ll try to say so. That is, once I lay out $\mathcal{V}$-categories $\mathcal{C}$ and $\mathcal{D}$, then if I talk about a functor $F:\mathcal{C}\rightarrow\mathcal{D}$ I automatically mean a $\mathcal{V}$-functor. If I mean to talk about a regular functor $F:\mathcal{C}_0\rightarrow\mathcal{D}_0$ I’ll say as much. Similarly, if I assert a natural transformation $\eta:S\rightarrow T$ I must mean $\mathcal{V}$-natural, or I would have said $\eta:S_0\rightarrow T_0$.
Posted by John Armstrong | Category theory
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 52, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8938553333282471, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/42472/projective-plane-vs-reference-plane
|
# Projective Plane vs. Reference Plane
I was told that the Projective Plane was also known as the Reference Plane in Projective geometry, but when I told my professor this, he freaked and told me I was completely wrong. He said that the Projective plane is the lines that go through the origin that intersect the Reference plane at a point. He said "a 'point' in the Projective plane is a line", and "even though they look like lines, they are called 'points'"...
This is word for word what he said, and he is the one grading my presentation. So I am going to believe what he says, but I still don't understand this idea and the fact that the Projective plane is NOT the Reference plane in Projective geometry.
Thanks in advance!
-
Dear Ellette, Could you give some information on the course you're taking? It might help people in formulating their answers. Regards, – Matt E Jun 1 '11 at 1:33
## 1 Answer
I am not sure that this is what you are asking, but it seems that you are talking about two different models for the projective plane.
One can develop projective geometry from a completely synthetic viewpoint, which means that you prove theorems using basic axioms without ever referring to what the points actually are.
Now, projective geometry can be seen in (at least) two ways. The first way is imagining the regular two dimensional euclidean plane, and adding a "point at infinity" for each set of parallel lines. This means that parallel lines will all meet at the same point at infinity (but a different point for each parallelism class). This seems to be what your teacher called the "Reference Plane" (although I have never heard that terminology, and google doesn't seem to give many results).
The second way is to call the lines through the origin of $\mathbb{R}^3$ "points".
The link between the two models is easy to see: given the model of lines through the origin, you can map each line to the $\mathbb{R}^2$ plane by taking its intersection with the plane $z=1$. The only lines that don't intersect the plane $z=1$ are those contained in the $xy$-plane, and they correspond to the points at infinity that we added in the previous model.
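To make the correspondence concrete (an added aside, not part of the original answer): write the line through the origin spanned by a nonzero vector $(x,y,z)$ as $[x:y:z]$, with $[x:y:z]=[\lambda x:\lambda y:\lambda z]$ for every $\lambda\neq 0$. If $z\neq 0$ the line meets the plane $z=1$ in exactly one point, $$[x:y:z]\longmapsto\left(\frac{x}{z},\frac{y}{z},1\right),$$ while the lines with $z=0$ are precisely the ones lying in the $xy$-plane, i.e. the points at infinity.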
-
Dear Vhailor, I also found it hard to find instructive uses of the term "Reference Plane" via google; the limited results I got suggest that it is more common in applied projective geometry courses (and I was surprised to find that such courses exist!). This is one reason that I asked @Ellette to say a little more about the course she is taking. Regards, – Matt E Jun 1 '11 at 13:14
Vhailor, thank you so much for your response. What you said was very interesting and actually clears a lot of things up for me! I have been researching Projective geometry for 4 days straight now and your answer, I have to say, made more sense than anything I have read about so far! hahah. Thank you for such an extensive and clear response! :) and if I had a higher "reputation" I would "thumbs up" your answer. Thank you! – Ellette Jun 2 '11 at 4:02
@Matt E., I am currently taking a math history course (Math 212), and I am required to teach my class about Projective and Perspective geometry for my final project. It is actually a course required for my education major, but there are entire courses, I believe, that teach just Projective geometry! :) – Ellette Jun 2 '11 at 4:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9763835668563843, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/182588-plane-tangent-3d-surface-mmn14-3a.html
|
# Thread:
1. ## plane tangent to 3d surface mmn14 3a
I have a surface $\sqrt{x}+\sqrt{y}+\sqrt{z}=\sqrt{a}$.
Find the formula of the plane which is tangent to it at $(x_0,y_0,z_0)$.
I need to find the vector which is normal to this surface; its formula is $\langle f_x,f_y,-1\rangle$.
But I don't know what the formula for $f(x,y)$ is; I have the surface in another form.
How do I write it in the $f(x,y)$ form?
2. If $S\equiv f(x,y,z)=0$ then, the tangent plane to $S$ at $P_0(x_0,y_0,z_0)$ is
$\dfrac{\partial f}{\partial x}(P_0)(x-x_0)+\dfrac{\partial f}{\partial y}(P_0)(y-y_0)+\dfrac{\partial f}{\partial z}(P_0)(z-z_0)=0$
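For concreteness (an added worked example, not part of the original thread), applying this to the surface in post 1 with $f(x,y,z)=\sqrt{x}+\sqrt{y}+\sqrt{z}-\sqrt{a}$ and $x_0,y_0,z_0>0$ gives $$\nabla f=\left(\frac{1}{2\sqrt{x}},\frac{1}{2\sqrt{y}},\frac{1}{2\sqrt{z}}\right),\qquad \frac{x-x_0}{2\sqrt{x_0}}+\frac{y-y_0}{2\sqrt{y_0}}+\frac{z-z_0}{2\sqrt{z_0}}=0,$$ which, using $\sqrt{x_0}+\sqrt{y_0}+\sqrt{z_0}=\sqrt{a}$, simplifies to $$\frac{x}{\sqrt{x_0}}+\frac{y}{\sqrt{y_0}}+\frac{z}{\sqrt{z_0}}=\sqrt{a}.$$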
3. I need to find the vector which is normal to this surface; its formula is $\langle f_x,f_y,-1\rangle$.
But I don't know what the formula for $f(x,y)$ is; I have the surface in another form.
How do I write it in the $f(x,y)$ form?
4. Originally Posted by transgalactic
I need to find the vector which is normal to this surface; its formula is $\langle f_x,f_y,-1\rangle$. But I don't know what the formula for $f(x,y)$ is; I have the surface in another form. How do I write it in the $f(x,y)$ form?
Sincerely, it is rather difficult to understand your question. Perhaps you mean that if the surface is written in explicit form $z=f(x,y)$ then the equation of the tangent plane at $P_0(x_0,y_0,z_0)$ is $\pi\equiv \langle(f_x(P_0),f_y(P_0),-1),(x-x_0,y-y_0,z-z_0)\rangle=0$. For the general case, see my previous post.
5. Originally Posted by FernandoRevilla
If $S\equiv f(x,y,z)=0$ then, the tangent plane to $S$ at $P_0(x_0,y_0,z_0)$ is
$\dfrac{\partial f}{\partial x}(P_0)(x-x_0)+\dfrac{\partial f}{\partial y}(P_0)(y-y_0)+\dfrac{\partial f}{\partial z}(P_0)(z-z_0)=0$
I was told that the normal vector is $\langle \partial f/\partial x, \partial f/\partial y, -1\rangle$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499549865722656, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/tagged/public-key+zero-knowledge-proofs
|
# Tagged Questions
1 answer, 137 views
### How can I prove in zero knowldege that an ElGamal shuffle is correct for a special setting? [closed]
In a special ElGamal encryption scheme, every user has an ElGamal encryption key-pair using the same cyclic group $G$ and generator $g$. The system has a special function : ...
3 answers, 208 views
### Is there a public key semantically secure cryptosystem for which one can prove in zero knowledge the equivalence of two plaintexts?
If Alice encrypts two messages $a$ and $b$, such that $x=E(a)$, $y=E(b)$. Can Alice prove (without revealing $a$, $b$ or the private key) that $a = b$? Obviously the proof must not be too long and it ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8874303102493286, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/3344/definition-for-the-viscosity-in-financial-market-data-series
|
# definition for “the viscosity” in financial market data series
I would like to calculate and monitor the evolution of extreme viscosity in financial market data series.
Wikipedia says "Put simply, the less viscous the fluid is, the greater its ease of movement". So rather than looking for the mighty viscosity, should I simply focus on ease of movement? Well, "ease of movement (EOM)" is a catchy phrase, since there is a well-known indicator with the exact same name. That EOM indicator is defined in Investopedia as: "A technical momentum indicator that is used to illustrate the relationship between the rate of an asset's price change and its volume. This indicator attempts to identify the amount of volume required to move prices."
In elementary school mathematics it is as simple as: EOM = (Close of today - Close of yesterday) / Volume
Do think "extreme viscosity can be monitored by the extremes in EOM"... Or would you suggest something else to calculate viscosity?
-
Did you make up this concept yourself? Is there any source for the use of the term "viscosity" in a financial context? – Tal Fishman Apr 25 '12 at 18:17
Dear Tal, There are lots of things that I made up but this is not one of them. When asked about viscosity in financial data series the first book that comes into my mind is "An introduction to econophysics : Correlations and complexity in finance - Rosario Mantegna & Eugene Stanley". – Sts Apr 26 '12 at 8:39
I see where they use it as an analogy, but it still looks like they do not directly apply the terms "viscosity" or "ease-of-movement" to financial time series. Also, your definition of ease of movement seems arbitrary. I do not understand why you would look at the absolute price change divided by volume. Also, since no trading takes place overnight, perhaps you should look at close - open? Perhaps look at log price changes? Perhaps log volume, too? – Tal Fishman Apr 26 '12 at 14:53
1
You can measure whatever you want...there is no standard definition of viscosity in financial mathematics. Most stochastic modelers would take it to mean terms involving $\nabla^2 V$ in the associated Feynman-Kac PDE, which is clearly very different from the simple heuristics you have in mind here. – Brian B Apr 26 '12 at 17:35
## 2 Answers
Using intra-day data, the concept of viscosity is easier to define. At the microstructure scale, you can see the price moves as a diffusion constrained by the quantities in the order books. Viscosity is a mix of pressure of volumes, rounding by the tick size, and bid-ask bounce.
See for instance A New Approach for the Dynamics of Ultra-High-Frequency Data: The Model with Uncertainty Zones, by M. Rosenbaum and C. Y. Robert, in the Journal of Financial Econometrics, Volume 9, Issue 2, pp. 344-366. In this paper, the authors present a way to estimate simultaneously the volatility and a rounding adjustment level $\eta$ (eta). This parameter can be seen as the viscosity of the studied stock.
-
A standard definition of "viscosity" is "stickiness". The MORE viscous something is, the LESS ease of movement it has.
So "viscosity" in this context would refer to the "stickiness" of one price compared to another in a time series.
-
Dear Tom, Thanks for the "English" definition. I will use the term stickiness since it is more understandable. But I would be glad if you can suggest a "mathematical" definition too. – Sts Apr 26 '12 at 8:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932697594165802, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/15370/tools-for-the-langlands-program/33142
|
## Tools for the Langlands Program?
Hi,
I know this might be a bit vague, but I was wondering what are the hypothetical tools necessary to solve the Langlands conjectures (the original statements or the "geometric" analogue). What I mean by this is the following: for the Weil Conjectures it became clear that, in order to prove them, one needed to develop a marvelous cohomology theory that would explain Weil's observations. Of course, we all know that etale cohomology is that marvelous tool. By analogy, what "black box" tools are necessary for the Langlands program? Broadly speaking, what tools do we need for the Langlands program?
Curious grad student, Ben
-
2
This question is too broad in my opinion. I'm sure there are good papers giving an overview of the Langlands program. I know that Mitya Boyarchenko and David Ben Zvi have some stuff about geometric Langlands that might help you get up to speed. – Harry Gindi Feb 15 2010 at 21:37
2
math.uchicago.edu/~mitya/langlands.html and math.utexas.edu/users/benzvi/Langlands.html – Harry Gindi Feb 15 2010 at 21:39
2
No, I mean, it's a really really huge program. I think that any satisfying answer to this question either doesn't exist or could take tens if not hundreds of pages. – Harry Gindi Feb 15 2010 at 21:55
2
If you check out either of the links I gave you, you'll see just how much stuff is actually being used. – Harry Gindi Feb 15 2010 at 21:56
8
Here is my vague, limited and non-geometric understanding. There is not a black box theory whose existence would prove the conjectures of Langlands as there was with the Weil conjectures. However, one general strategy is to use the Arthur-Selberg trace formula on two different reductive groups, match up the geometric sides of the formula (as much as possible), and then use the spectral sides to relate the automorphic forms of the groups. There are several technical difficulties in getting this to work in general, with a major problem having been the Fundamental Lemma, finally resolved by Ngo. – Zavosh Feb 15 2010 at 22:53
## 4 Answers
There are all sorts of problems with the Langlands conjectures that we (as far as I know) have no idea at all how to approach. As a very simple example of an issue for $GL(2)$ over $\mathbf{Q}$ that we cannot do, consider this: there should be a canonical bijection between continuous even (i.e. det(complex conj)=+1) irreducible 2-dimensional representations $Gal(\overline{\mathbf{Q}}/\mathbf{Q})\to GL(2,\mathbf{C})$ and normalised algebraic cuspidal Maass new eigenforms on the upper half plane. This is a sort of non-holomorphic analogue of the Deligne-Serre theorem which relates the odd irreducible Galois representations to holomorphic weight 1 newforms. One way of nailing this bijection is that given a Maass newform, then for all primes $p$ not dividing the level, the eigenvalue of $T_p$ (suitably normalised) should be the trace of the representation evaluated at the Frobenius element in the Galois group.
You want a black box which will solve all of Langlands---then you need a black box which will solve this. Unfortunately it seems to me that firstly you'll need several good new ideas to resolve even this simple case, and secondly there is more than one strategy and it's not clear what will work first. As examples of the problems one faces: given the Galois representation, that's just a lump of algebra---a finite amount of data. How is one going to construct a bunch of analysis from it? One way might be via the theory of base change, which works a treat for cyclic extensions, and just enough has been developed in order to resolve the problem for Galois representations with solvable image (one uses a lot more than the statement that the group is solvable---one uses that it is also "small"---this is not just a formal consequence of cyclic base change). This is the Langlands-Tunnell theorem, which gives the Maass form from the Galois representation if it has solvable image. In the non-solvable case one can dream of non-solvable base change, but non-solvable base change is really nothing but a dream at this point. So there's one big black box but that will only resolve one direction of one small fragment of the Langlands conjectures.
Now what about the other way? Well here we're even more in the dark. Given an algebraic Maass form, we can't even prove that its Hecke eigenvalues are algebraic numbers, let alone the sum of two roots of unity. In the holomorphic modular form case we can get bases of the spaces of forms using e.g. coherent cohomology of the modular curve considered as an algebraic curve over $\mathbf{Q}$, or (in weights 2 or more) singular cohomology of a (typically non-trivial) local system on the curve. Both these machines produce $\mathbf{Q}$-vector spaces with Hecke actions, and hence char polys are in $\mathbf{Q}[x]$ and so eigenvalues are algebraic. But with algebraic Maass forms we have no such luxury. They are not cohomological, so we can't expect to see them in singular cohomology of a local system, and they are not holomorphic, so we can't expect to see them in coherent cohomology either. So we, vaguely speaking, need a black box which, given certain finite-dimensional complex vector spaces with Hecke actions, produces finite-dimensional $\mathbf{Q}$-vector spaces out of thin air, which when tensored up to the complexes give us back our groups. People have tried using base change to do this, or other known instances of functoriality, but everything so far has failed and it's not clear to me that one even has a conjectural approach for doing this direction. And I'm only talking about proving that the eigenvalues are algebraic---not even coming close to attaching the Galois representation!
So one vague black box "non-abelian base change", and one hard problem that as far as I know no-one has ideas about, and, if you put these together, you would solve one teeny tiny insy winsy little part of the Langlands programme. Makes the Weil conjectures look like a walk in the park!
-
"Makes the Weil conjectures look like a walk in the park!" - This was exactly my point in my comment. That's why it's called the Langlands program rather than the Langlands conjecture(s). – Harry Gindi Feb 16 2010 at 0:10
6
I thought that the reason it's called the Langlands Programme rather than the Langlands conjectures was that actually many of the statements are quite vague, or come in several forms, so it's difficult to say really what is conjectured and what is just a good motivating idea. For example transfer of automorphic reps via a morphism of L-groups should obey local Langlands everywhere, but local Langlands is a bit vague: "there should be a canonical bijection..." and there are issues of strong mult 1 and so on. The true force is in the most powerful statements but these are typically ill-defined. – Kevin Buzzard Feb 16 2010 at 9:03
4
[grr I want to make longer comments!]. For example the existence of the global Langlands group is a conjecture that, it seems to me, is almost unfalsifiable. Langlands makes some conjecture in Corvallis of the form "this set (iso classes of reps of GL_n(adeles) for all n at once) should have the structure of a Tannakian category in some natural way" for example. Is that really a conjecture or just a really good idea? – Kevin Buzzard Feb 16 2010 at 9:05
This answer deals with the classical Langlands program (if you like, the Langlands program for number fields).
There are (at least) two aspects to this program:
(a) functoriality: this is Langlands' original conjecture, explained in the letter to Weil, and further developed in "Problems in the theory of automorphic forms" and later writing. It is a conjecture purely about automorphic forms. Langlands has outlined an approach to proving it in general in his papers on the topic of "Beyond endoscopy" (available online at his collected works).
A proof of functoriality would imply, among other things, the non-solvable base-change discussed in Kevin's answer.
It seems that for the "beyond endoscopy" program to work as Langlands envisages it, one would need unknown (and seemingly out of reach) results in the analytic number theory of $L$-functions.
(b) reciprocity: this is the conjectured relationship between automorphic forms and Galois representations/motives. It has two steps: attaching Galois representations, or even motives, to (certain) automorphic forms, and, conversely, showing that all Galois representations of motives arise in this way. (This converse direction typically incorporates the Fontaine--Mazur conjecture as well, which posits a purely Galois-theoretic criterion for when a Galois representation should arise from a motive.)
If one is given the direction automorphic to Galois, then there are some techniques for deducing the converse direction, namely the Taylor--Wiles method. However this method is not a machine that automatically applies whenever one has the automorphic to Galois direction available; in particular, it doesn't seem to apply in any straightforward way to Galois representations/motives for which some $h^{p,q}$ is greater than 1 (in more Galois-theoretic terms, which have irregular Hodge--Tate weights). Thus in particular, even if one could attach Galois representations to (certain) Maass forms, one would still have the problem of proving that every even 2-dimensional Artin representation of $G_{\mathbb Q}$ arose in this way.
As to constructing Galois representations attached to automorphic forms, here the idea is to use Shimura varieties, and one can hope that, with the fundamental lemma now proved, one will be able to get a pretty comprehensive description of the Galois representations that appear in the cohomology of Shimura varieties. (Here one will also be able to take advantage of recent progress in the understanding of integral models of Shimura varieties, due to people like Harris and Taylor, Mantovan, Shin, Morel, and Kisin, in various different contexts.)
The overarching problem here is that, not only do not all automorphic forms contribute to cohomology (e.g. Maass forms, as discussed in Kevin's answer), but also, not all automorphic forms appear in any Shimura variety context at all. Since Shimura varieties are currently the only game in town for passing from automorphic forms to Galois representations, people are thinking a lot about how to move from any given context to a Shimura variety context, by applying functoriality (e.g. Taylor's construction of Galois reps. attached to certain cuspforms on $GL_2$ of a quadratic imaginary field), or trying to develop new ideas such as $p$-adic functoriality. While there are certainly ideas here, and one can hope for some progress, the questions seem to be hard, and there is no one black box that will solve everything.
In particular, one could imagine having functoriality as a black box, and asking if one can then derive reciprocity. (Think of the way that Langlands--Tunnell played a crucial role in the proof of modularity of elliptic curves.) Langlands has asked this on various occasions. The answer doesn't seem to be any kind of easy yes.
-
I happened to come across this paper yesterday, but haven't been able to read it because of the prohibitive price. You may access this article for 1 day for US\$12.00. Ash, Avner; Gross, Robert Generalized non-abelian reciprocity laws: a context for Wiles' proof. Bull. London Math. Soc. 32 (2000), no. 4, 385--397. – Chandan Singh Dalawat Feb 16 2010 at 4:02
1
if you google for the title, the second link gives you the pdf file. The authors expanded on this in their book "fearless symmetry", btw. – Franz Lemmermeyer Feb 16 2010 at 8:33
For me it was the first search result. Vielen Dank. – Chandan Singh Dalawat Feb 16 2010 at 14:11
Hey! We're making progress. It used to be called the Langlands philosophy. [Oops, this was meant to be a comment on fpqc's comment.]
-
1
+1 Just because why not. Wasn't it called the "philosophy of cusp forms" even before that? – Harry Gindi Feb 16 2010 at 0:52
9
The philosophy of cusp forms refers to earlier ideas, due to Gelfand and Harish-Chandra, roughly related to the decomposition of the automorphic spectrum into discrete and continuous parts. The general theory of Eisenstein series was established by Langlands, but this was before he enunciated his functoriality conjecture, and they are not the same thing. (The philosophy of cuspforms/theory of Eisenstein series is about moving automorphic forms from a Levi to the group. The functoriality conjecture is a much more general statement about moving automorphic forms from one group to another.) – Emerton Feb 16 2010 at 4:00
I don't know a whole lot about the Langlands program, but if there is one tool that seems to come up a lot in geometric Langlands, it's perverse sheaves. You see a lot of singular algebraic varieties in geometric Langlands, and perverse sheaves are meant as a singular generalization of a vector bundle with a flat connection. Ordinary sheaves are already a singular generalization of vector bundles, but not the relevant one. Perverse sheaves (which are made from sheaves but not sheaves themselves) are a more apropos generalization that incorporates and sort-of just is intersection (co)homology.
I can also say that I wasn't going to learn about perverse sheaves until I had to. However, I have now seen several important papers, in the related categorification program, that read this way: "Perverse sheaves + necessary restrictions = a good solution". So now I might be slowly getting used to them. I can also see that even the formalism of perverse sheaves or intersection homology is sort-of inevitable. In some of the simpler constructions, the varieties (over $\mathbb{C}$, say) are non-singular and certain answers arise as ordinary cohomology products or intersection products. For instance, the Schubert calculus in a Grassmannian manifold. What choice do you have if the Grassmannian is replaced by a singular variety $X$? For some of these categorification/Langlands questions, you can either propose wrong answers, or ad hoc answers, or you can automatically get the right answer by using intersection homology on $X$. (With middle perversity, as they say.)
-
Intersection cohomology (either l-adic or Betti or Hodge) also plays a central role in the cohomological study of non-compact Shimura variety and the application of trace formula methods, see e.g Zucker's conjecture or Sophie Morel's PhD thesis. But I don't really see why this is a feature of the Langlands program, rather than a by-product of the fact that many interestingly singular varieties pop up in compactifications of moduli problems. – Simon Pepin Lehalleur Aug 8 2010 at 19:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942956805229187, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/calculus/5252-simple-de-some-print.html
|
# A simple DE...for some.
Printable View
• August 31st 2006, 02:52 AM
a4swe
A simple DE...for some.
If $y=e^{ax}\cos(bx)$ and $y'''+2y'+3y=0$, what are $a$ and $b$?
That is the problem to solve and I don't even know what to do.
Can anyone help me out a little and give me a suggestion or two for the solution of this one?
• August 31st 2006, 03:05 AM
CaptainBlack
Quote:
Originally Posted by a4swe
If $y=e^{ax}$ and $y'''+2y'+3y=0$ what is a and b?
That is the problem to solve and I don't even know what to do.
Can anyone help me out a little and give me a suggestion or two for the solution of this one?
You don't appear to have a "b".
But never mind. What you do is calculate the derivatives from your given form for $y(x)$, and substitute them into the DE. That will give you an algebraic equation (after dividing out the common factor $e^{ax}$ which will occur on the left hand side of the equation); this will allow you to determine what $a$ (and $b$?) have to be.
RonL
• August 31st 2006, 03:13 AM
a4swe
Excuse me, that should be
$y=e^{ax}\cos(bx)$
• August 31st 2006, 04:28 AM
CaptainBlack
Quote:
Originally Posted by a4swe
Excuse me, that should be
$y=e^{ax}\cos(bx)$
The basic idea is to keep differentiating, and at each stage substitute simpler expressions for complicated ones based on what you already know.
Finally you should arrive at an equation of the required form; then you can equate coefficients between the given DE and what you have found from repeated differentiation to get some algebraic equations for $a$ and $b$.
(You will need to check the algebra in what follows.)
If:
$y=e^{ax}\cos(bx)$
Then:
$y'=ae^{ax}\cos(bx)- be^{ax}\sin(bx)=a y - be^{ax}\sin(bx)$,
and so:
$y''=a y' - abe^{ax}\sin(bx) -b^2e^{ax}\cos(bx)=ay'-b^2y- abe^{ax}\sin(bx)=ay'-b^2y+ay'-a^2y$,
hence:
$y''=2ay'-(a^2+b^2)y$
so:
$y'''=2ay''-(a^2+b^2)y'=4a^2y'-2a(a^2+b^2)y-(a^2+b^2)y'$
which you should now rearrange into the form $y'''+Ay'+By=0$ then equate
coefficients with the given form to get the algebraic equations for $a$ and
$b$.
RonL
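To finish the coefficient comparison (an added completion, not in the original post): collecting terms above gives $y'''=(3a^2-b^2)y'-2a(a^2+b^2)y$, so matching $y'''+2y'+3y=0$ requires $$3a^2-b^2=-2,\qquad 2a(a^2+b^2)=3.$$ Substituting $b^2=3a^2+2$ into the second equation gives $8a^3+4a-3=0=(2a-1)(4a^2+2a+3)$, whose only real root is $a=\tfrac{1}{2}$; then $b^2=\tfrac{11}{4}$, i.e. $a=\tfrac{1}{2}$ and $b=\tfrac{\sqrt{11}}{2}$ (up to the sign of $b$), agreeing with the answer found below via the characteristic polynomial.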
• August 31st 2006, 04:46 AM
a4swe
Thank you very much, this helps a lot.
• August 31st 2006, 04:47 AM
ThePerfectHacker
Quote:
Originally Posted by a4swe
If $y=e^{ax}\cos(bx)$ and $y'''+2y'+3y=0$, what are $a$ and $b$?
That is the problem to solve and I don't even know what to do.
Can anyone help me out a little and give me a suggestion or two for the solution of this one?
Here is another way.
Since this is a homogeneous linear differential equation of order three with constant coefficients, look at the characteristic polynomial.
---
$k^3+2k+3=0$
Trivially, $k=-1$ is a solution.
Synthetic division:
$(k^3+2k+3)\div(k+1)=k^2-k+3$
The solutions of $k^2-k+3=0$ are
$k=\frac{1}{2}\pm i\frac{\sqrt{11}}{2}$
Therefore the general solution of this differential equation is,
$C_1e^{-x}+C_2e^{1/2 x}\sin \left( \frac{\sqrt{11}}{2} x\right)+C_3e^{1/2 x}\cos \left( \frac{\sqrt{11}}{2} x\right)$
The particular solution you have is the one with $C_1=C_2=0,\ C_3=1$
Thus,
$e^{1/2 x}\cos \left( \frac{\sqrt{11}}{2} x \right)=e^{ax} \cos (bx)$
Thus,
$a=1/2 \mbox{ and }b=\frac{\sqrt{11}}{2}$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446085691452026, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/150810-dense.html
|
# Thread:
1. ## dense
Hi--
is the set $\{1/n \mid n \in \mathbb{N}\}$ dense in $[0,1]$?
2. Originally Posted by Chandru1
Hi--
is the set $\{1/n \mid n \in \mathbb{N}\}$ dense in $[0,1]$?
A set is dense in $[0, 1]$ if the set unioned with its limit points gives you $[0, 1]$. Now, there is nothing strictly between $1/2$ and $1$ in your set; your set intersects trivially with $(1/2, 1)$. So, well, does there exist a sequence in your set which has limit, say, $3/4$?
Simply ask yourself the following:
is every number in $[0,1]$ either a limit point of the given set or in it?
If yes, then you have a dense set in $[0,1]$;
if not, you don't.
In this case, the example given ($3/4$) satisfies neither of the two conditions. (Why?)
Conclude.
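(An added verification, not in the original thread: the open interval $(1/2,1)$ contains no point of the set, since $1/n\le 1/2$ for $n\ge 2$ and $1/1=1\notin(1/2,1)$. Hence $3/4$ is not in the set, and it is not a limit point either, because $(1/2,1)$ is a neighbourhood of $3/4$ that meets the set in no points at all. So the set is not dense in $[0,1]$; its closure is $\{0\}\cup\{1/n \mid n\in\mathbb{N}\}$.)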
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9119635820388794, "perplexity_flag": "middle"}
|
http://wiki.panotools.org/index.php?title=Partial_Panoramas_using_ROI_in_PTViewer&diff=11679&oldid=4395
|
# Partial Panoramas using ROI in PTViewer
From PanoTools.org Wiki
(Difference between revisions)
| | | | |
|---|---|---|---|
| | | Stuuf (Talk | contribs) (fix degree/apostrophe symbols, latex formulas) | |
| (8 intermediate revisions by 4 users not shown) | | | |
| Line 1: | | Line 1: | |
| − | If you have a panorama that is not fully 360�x180�, and you still want to use PTViewer to immerse your audience into your panorama, there are a few methods to do that. | + | If you have a panorama that is not fully 360°x180°, and you still want to use PTViewer to immerse your audience into your panorama, there are a few methods to do that. |
| | | | |
| | You can expand your Panorama with blank space around, and use the normal way of displaying a panorama in PTViewer. The disadvantage of this is that if you put this picture online, the download times can be significantly longer because of all the blank space. | | You can expand your Panorama with blank space around, and use the normal way of displaying a panorama in PTViewer. The disadvantage of this is that if you put this picture online, the download times can be significantly longer because of all the blank space. |
| Line 22: | | Line 22: | |
| | The picture must have [[Equirectangular|equirectangular]] projection | | The picture must have [[Equirectangular|equirectangular]] projection |
| | | | |
| − | From the stitcher we should be able to get the Horizontal Field of View (HFOV). In this case 160� | + | From the stitcher we should be able to get the Horizontal Field of View (HFOV). In this case 160° |
| | | | |
| | From these 4 numbers we should be able to calculate the parameters necessary for PTViewer to display a partial panorama. | | From these 4 numbers we should be able to calculate the parameters necessary for PTViewer to display a partial panorama. |
| Line 34: | | Line 34: | |
| | Since we know the ROI Width of the picture as well as the Horizontal Field of View (HFOV), we can calculate the field of view for 1 pixel. | | Since we know the ROI Width of the picture as well as the Horizontal Field of View (HFOV), we can calculate the field of view for 1 pixel. |
| | In this case | | In this case |
| − | <pre> | + | |
| − | 160� / 800 pixels = 0.2�/px | + | <math> |
| − | </pre> | + | \frac{160^\circ}{800\text{px}} = \frac{0.2^\circ}{\text{px}} |
| | | + | </math> |
| | | | |
| | We need this number to convert from pixels to degrees and vice versa. You need an accuracy of a couple of decimals otherwise it won't work. | | We need this number to convert from pixels to degrees and vice versa. You need an accuracy of a couple of decimals otherwise it won't work. |
| | | | |
| − | The objective is to place the ROI picture inside the 360�x180� panorama with the horizon in the ROI image over the horizontal 0� line of the pano, and the middle of the ROI image in the middle of the panorama. | + | The objective is to place the ROI picture inside the 360°x180° panorama with the horizon in the ROI image over the horizontal 0° line of the pano, and the middle of the ROI image in the middle of the panorama. |
| | | | |
| | == pwidth and pheight == | | == pwidth and pheight == |
| Line 46: | | Line 47: | |
| | The calculation is done by using the number of degrees per pixel. | | The calculation is done by using the number of degrees per pixel. |
| | Since we know the degrees, we can calculate the number of pixels. | | Since we know the degrees, we can calculate the number of pixels. |
| − | <pre> | + | |
| − | Panorama Width (Pwidth) = 360� / 0.2�/px = 1800 px | + | <math> |
| − | Panorama Height (Pheigth) = 180� / 0.2/px = 900 px | + | \begin{align} |
| − | </pre> | + | \text{Panorama Width}\, pwidth & = \frac{360^\circ}{\frac{0.2^\circ}{\text{px}}} = 1800\text{px} \\ |
| | | + | \text{Panorama Height}\, pheight & = \frac{180^\circ}{\frac{0.2^\circ}{\text{px}}} = 900\text{px} |
| | | + | \end{align} |
| | | + | </math> |
| | | | |
| | == x and y insertion point == | | == x and y insertion point == |
| | To calculate the x and y position of the insertion point (the point where the picture needs to be placed) we can take half of the panorama height and subtracting the horizon position in the ROI. | | To calculate the x and y position of the insertion point (the point where the picture needs to be placed) we can take half of the panorama height and subtracting the horizon position in the ROI. |
| − | <pre> | + | |
| − | Y position of the insertion point = 900px/2 � 227px = 223px | + | <math> |
| − | </pre> | + | Y \text{position of the insertion point} = \frac{900\text{px}}{2} - 227\text{px} = 223\text{px} |
| | | + | </math> |
| | | | |
| | Similarly we can calculate the x offset. | | Similarly we can calculate the x offset. |
| − | In most circumstances, you either don�t know, or don�t care about the direction the picture was taken. In that case it is good practice to place the ROI in the middle of the large pano where 0� is the middle of the picture. You can do this by taking half of the total panorama width and subtracting half the size of the picture | + | In most circumstances, you either don't know, or don't care about the direction the picture was taken. In that case it is good practice to place the ROI in the middle of the large pano where 0° is the middle of the picture. You can do this by taking half of the total panorama width and subtracting half the size of the picture |
| − | <pre> | + | |
| − | X position of the insertion point = 1800px/2 � 800px/2 = 500 px | + | <math> |
| − | </pre> | + | X \text{position of the insertion point} = \frac{1800\text{px}}{2} - \frac{800\text{px}}{2} = 500\text{px} |
| | | + | </math> |
| | | | |
| | == panmin, panmax, tiltmin and tiltmax == | | == panmin, panmax, tiltmin and tiltmax == |
| Line 68: | | Line 74: | |
| | | | |
| | Because the ROI is horizontally in the middle, you may pan half the width of the image to the left and right, converted to degrees. | | Because the ROI is horizontally in the middle, you may pan half the width of the image to the left and right, converted to degrees. |
| − | <pre> | + | |
| − | Minimum pan = -800px/2 * 0.2�/px = -80� | + | <math> |
| − | Maximum pan = 800px/2 * 0.2�/px = 80� | + | \begin{align} |
| − | </pre> | + | \text{Minimum pan} & = \frac{-800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = -80^\circ\\ |
| | | + | \text{Maximum pan} & = \frac{800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = 80^\circ |
| | | + | \end{align} |
| | | + | </math> |
| | | | |
| | The minimum tilt is calculated as the height of the ROI minus the position of the horizon, converted to degrees. | | The minimum tilt is calculated as the height of the ROI minus the position of the horizon, converted to degrees. |
| − | <pre> | + | |
| − | Minimum tilt = -(541px � 227px) * 0.2�/px = -62.8� => -62� | + | <math> |
| − | </pre> | + | \text{Minumum tilt} = -(541\text{px} - 227\text{px}) \cdot \frac{0.2^\circ}{\text{px}} = -62.8^\circ \approx -62^\circ |
| | | + | </math> |
| | | | |
| | The maximum tilt is calculated as the position of the horizon, converted to degrees. | | The maximum tilt is calculated as the position of the horizon, converted to degrees. |
| − | <pre> | + | |
| − | Maximum tilt = 227px * 0.2�/px = 45.4� => 45� | + | <math> |
| − | </pre> | + | \text{Maximum tilt} = 227\text{px} \cdot \frac{0.2^\circ}{\text{px}} = 45.4^\circ \approx 45^\circ |
| | | + | </math> |
| | | | |
| | Because ptviewer does not take fractions of degrees, you throw away the fraction. | | Because ptviewer does not take fractions of degrees, you throw away the fraction. |
| | | | |
| | Using these numbers in PTViewer should give you a good partial panorama. If you see blank space at the edges of the panorama, you may want to make a 1 degree change to the minimum and maximum pan and tilt untill it does not show up anymore. | | Using these numbers in PTViewer should give you a good partial panorama. If you see blank space at the edges of the panorama, you may want to make a 1 degree change to the minimum and maximum pan and tilt untill it does not show up anymore. |
| | | + | |
| | | + | == HTML code == |
| | | + | |
| | | + | To see how these calculations translate in HTML code, the above sample could result in the following HTML code : |
| | | + | |
| | | + | <source lang="html"> |
| | | + | <applet archive="ptviewer27L2.jar" code="ptviewer.class" WIDTH="300" HEIGHT="200" mayscript=true> |
| | | + | <param name=pwidth value=1800> |
| | | + | <param name=pheight value=900> |
| | | + | <param name=roi0 value="i'sample.jpg' x500 y223"> |
| | | + | <param name=panmin value=-80> |
| | | + | <param name=panmax value=80> |
| | | + | <param name=tiltmax value=45> |
| | | + | <param name=tiltmin value=-62> |
| | | + | </applet> |
| | | + | </source> |
| | | + | |
| | | | |
| | Have fun. | | Have fun. |
| | | + | |
| | | + | == Helpful form == |
| | | + | |
| | | + | If all these computations give you a headache, you can use [http://yhargla.free.fr/ptviewer_roi.php the following form]. Just input your data and it will generate the applet HTML code with all the correct parameters. Enjoy ! |
| | | + | |
| | | + | [[Category:Tutorial:Nice to know]] |
## Latest revision as of 19:17, 3 August 2009
If you have a panorama that is not fully 360°x180°, and you still want to use PTViewer to immerse your audience into your panorama, there are a few methods to do that.
You can expand your Panorama with blank space around, and use the normal way of displaying a panorama in PTViewer. The disadvantage of this is that if you put this picture online, the download times can be significantly longer because of all the blank space.
To avoid this, it is possible to use a Region Of Interest picture (ROI) to display the panorama. This will only download the partial panorama. We will have to tell PTViewer where to place the picture, and how far the user may pan left and right, and how much they can tilt up and down.
Note this is not an explanation of the syntax of PTViewer, but rather a tutorial on how to calculate the different parameters. For the syntax of PTViewer you can visit: PTViewer Documentation
Good luck.
Richard Korff
## Gathering Information
From the ROI picture we need to get some basic information:
• Width in pixels (ROI Width) 800 px
• Height in pixels (ROI Height) 541 px
• Position of the horizon from the top of the picture (Horizon pos) 227 px
The picture must have equirectangular projection
From the stitcher we should be able to get the Horizontal Field of View (HFOV). In this case 160°
From these 4 numbers we should be able to calculate the parameters necessary for PTViewer to display a partial panorama.
## Calculating the parameters for PTViewer
Since we know the ROI Width of the picture as well as the Horizontal Field of View (HFOV), we can calculate the field of view for 1 pixel. In this case
$\frac{160^\circ}{800\text{px}} = \frac{0.2^\circ}{\text{px}}$
We need this number to convert from pixels to degrees and vice versa. You need an accuracy of a couple of decimals otherwise it won't work.
The objective is to place the ROI picture inside the 360°x180° panorama with the horizon in the ROI image over the horizontal 0° line of the pano, and the middle of the ROI image in the middle of the panorama.
## pwidth and pheight
To do that we first need to calculate the total size of the panorama image, of which the ROI image is a part. The calculation is done by using the number of degrees per pixel. Since we know the degrees, we can calculate the number of pixels.
$\begin{align} \text{Panorama Width}\, pwidth & = \frac{360^\circ}{\frac{0.2^\circ}{\text{px}}} = 1800\text{px} \\ \text{Panorama Height}\, pheight & = \frac{180^\circ}{\frac{0.2^\circ}{\text{px}}} = 900\text{px} \end{align}$
## x and y insertion point
To calculate the x and y position of the insertion point (the point where the picture needs to be placed) we take half of the panorama height and subtract the horizon position in the ROI.
$Y \text{position of the insertion point} = \frac{900\text{px}}{2} - 227\text{px} = 223\text{px}$
Similarly we can calculate the x offset. In most circumstances, you either don't know or don't care about the direction the picture was taken in. In that case it is good practice to place the ROI in the middle of the large pano, where 0° is the middle of the picture. You can do this by taking half of the total panorama width and subtracting half the width of the picture.
$X \text{position of the insertion point} = \frac{1800\text{px}}{2} - \frac{800\text{px}}{2} = 500\text{px}$
## panmin, panmax, tiltmin and tiltmax
To limit the freedom the user has in moving around your pano, you want to restrict the pan and tilt angles. Calculating these is relatively easy with the information we have gathered above. The pan and tilt angles are given in degrees.
Because the ROI is horizontally in the middle, you may pan half the width of the image to the left and right, converted to degrees.
$\begin{align} \text{Minimum pan} & = \frac{-800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = -80^\circ\\ \text{Maximum pan} & = \frac{800\text{px}}{2} \cdot \frac{0.2^\circ}{\text{px}} = 80^\circ \end{align}$
The minimum tilt is calculated as the height of the ROI minus the position of the horizon, converted to degrees.
$\text{Minimum tilt} = -(541\text{px} - 227\text{px}) \cdot \frac{0.2^\circ}{\text{px}} = -62.8^\circ \approx -62^\circ$
The maximum tilt is calculated as the position of the horizon, converted to degrees.
$\text{Maximum tilt} = 227\text{px} \cdot \frac{0.2^\circ}{\text{px}} = 45.4^\circ \approx 45^\circ$
Because PTViewer does not take fractions of degrees, you throw away the fraction.
Using these numbers in PTViewer should give you a good partial panorama. If you see blank space at the edges of the panorama, you may want to make a 1 degree change to the minimum and maximum pan and tilt until it does not show up anymore.
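The whole calculation above can be collected into a short script. Below is a minimal sketch in Python (not part of the original tutorial); the function name `ptviewer_roi_params` is my own, and the example call uses the sample values from this page (800x541 px ROI, horizon at 227 px, HFOV 160°).

```python
def ptviewer_roi_params(roi_width, roi_height, horizon, hfov):
    """Compute PTViewer parameters for a partial (ROI) panorama.

    roi_width, roi_height -- size of the ROI image in pixels
    horizon               -- horizon position from the top of the ROI, in pixels
    hfov                  -- horizontal field of view of the ROI, in degrees
    """
    deg_per_px = hfov / roi_width                       # e.g. 160 / 800 = 0.2 deg/px
    pwidth = round(360.0 / deg_per_px)                  # width of the full 360 deg panorama
    pheight = round(180.0 / deg_per_px)                 # height of the full 180 deg panorama
    x = round(pwidth / 2 - roi_width / 2)               # centre the ROI horizontally
    y = round(pheight / 2 - horizon)                    # put the ROI horizon on the 0 deg line
    panmin = int(-roi_width / 2 * deg_per_px)           # fractions of degrees are thrown away
    panmax = int(roi_width / 2 * deg_per_px)
    tiltmin = int(-(roi_height - horizon) * deg_per_px)
    tiltmax = int(horizon * deg_per_px)
    return dict(pwidth=pwidth, pheight=pheight, x=x, y=y,
                panmin=panmin, panmax=panmax, tiltmin=tiltmin, tiltmax=tiltmax)

print(ptviewer_roi_params(800, 541, 227, 160))
# {'pwidth': 1800, 'pheight': 900, 'x': 500, 'y': 223,
#  'panmin': -80, 'panmax': 80, 'tiltmin': -62, 'tiltmax': 45}
```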
## HTML code
To see how these calculations translate into HTML code, the above sample could result in the following HTML code:
```html
<applet archive="ptviewer27L2.jar" code="ptviewer.class" WIDTH="300" HEIGHT="200" mayscript=true>
  <param name=pwidth value=1800>
  <param name=pheight value=900>
  <param name=roi0 value="i'sample.jpg' x500 y223">
  <param name=panmin value=-80>
  <param name=panmax value=80>
  <param name=tiltmax value=45>
  <param name=tiltmin value=-62>
</applet>
```
Have fun.
## Helpful form
If all these computations give you a headache, you can use the following form: http://yhargla.free.fr/ptviewer_roi.php. Just input your data and it will generate the applet HTML code with all the correct parameters. Enjoy!
http://mathoverflow.net/questions/120420?sort=oldest
## Integral representation of the modified Bessel functions of the second kind and asymptotic expansion
The modified Bessel function (Macdonald function) $K_\alpha(z)$ is known to have the following asymptotic expansion for large positive $z$: $$K_\alpha(z)=\sqrt{\frac{\pi}{2z}}e^{-z}\sum_{k=0}^\infty \frac{b_k(\alpha)}{z^k}$$ where $b_0(\alpha)=1$, $b_1(\alpha)=\frac{4\alpha^2-1^2}{1!\,8}$, $b_2(\alpha)=\frac{(4\alpha^2-1^2)(4\alpha^2-3^2)}{2!\,(8)^2}$ and so on. Is there any simple integral representation for which it would be a perturbative expansion, i.e. such that $$K_\alpha(z)=h(z) \int_C \exp\left(\frac{f(y)}{z}\right) g(y)^\alpha \, d\mu(y)$$ where $f(x)$, $g(x)$, $h(x)$ and $d\mu(x)$ are $\alpha$-independent?
-
I think you want the integrand to have the factor $\exp(z f(y))$, since you are taking the $z\to\infty$ limit and want the argument of the exponential to vary quickly with $y$. – Igor Khavkine Jan 31 at 15:38
## 1 Answer
The DLMF lists multiple integral representations of $K_\nu(z)$. Here's one that fits your bill:
$$K_{\nu}(z)=\frac{\pi^{1/2}\left(\tfrac{1}{2}z\right)^{\nu}}{\Gamma\!\left(\nu+\tfrac{1}{2}\right)}\int_{0}^{\infty}e^{-z\cosh t}\,(\sinh t)^{2\nu}\,dt .$$
For integer $\nu$, the contour could be extended by symmetry to all of $\mathbb{R}$.
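As a quick sanity check (my addition, not part of the original answer), this representation is easy to verify numerically for a few values of $\nu$ and $z$; the sketch below assumes SciPy is available and that $\operatorname{Re}\nu > -1/2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

def k_nu_integral(nu, z):
    """Evaluate the integral representation of K_nu(z) quoted above (Re(nu) > -1/2).

    The upper limit is truncated at t = 50, where exp(-z*cosh(t)) is already far
    below machine precision for the z values used here.
    """
    integrand = lambda t: np.exp(-z * np.cosh(t)) * np.sinh(t) ** (2 * nu)
    integral, _ = quad(integrand, 0.0, 50.0)
    return np.sqrt(np.pi) * (0.5 * z) ** nu / gamma(nu + 0.5) * integral

for nu, z in [(0.3, 2.0), (1.0, 5.0), (2.5, 10.0)]:
    print(nu, z, k_nu_integral(nu, z), kv(nu, z))  # the two columns should agree
```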
-
Thank you, I know this nice integral representation, but I want something (maybe) more complicated - with $\frac{1}{z}$ as I wrote. – Sasha Jan 31 at 15:54
Sorry, I don't follow your logic about $1/z$ in $g(f(y)/z)$. What kind of expansion would you expect to perform? The integral formula I suggested should be amenable to a Laplace-type (or steepest descent) expansion for large $z$. In any case, most known integral representations are in fact listed on the DLMF page. – Igor Khavkine Jan 31 at 20:53
I just want to have simple $1/z$ expansion for large $z$ (and not a usual steepest descent expansion). – Sasha Feb 1 at 13:06
http://dsp.stackexchange.com/questions/7599/the-meaning-of-k-delta-t
# The meaning of $k \delta (t)$
Reading the signal literature I often come across the expression $k \delta (t)$ where $k$ is a constant. I presume this is a notation to suggest we are referring to the area or strength of the Dirac function since the multiplication of a constant, $k$, by $\delta (t)$ seems to make no sense.
My question is what does $k \delta(t)$ mean? Am I correct in assuming that on its own it doesn't represent $k$ multiplied by $\delta(t)$?
-
## 1 Answer
You are correct. In your example, $k$ would refer to the "area underneath" the impulse. Mathematically speaking, the Dirac delta isn't a function in the typical sense of the term. Instead, it is more of a distribution, characterized by the fact that when integrated across any interval that contains $t=0$, the result is unity. That is:
$$\int_{-\epsilon}^{\epsilon} \delta(t) dt = 1 \ \forall\ \epsilon > 0$$
The distribution is typically defined as follows:
$$\delta(t) = \begin{cases} \infty,\ t = 0 \\ 0, \text{ otherwise}\end{cases}$$
Multiplication by a constant $k$ obviously wouldn't change this definition at all. So instead, the scaling merely changes the area underneath the distribution:
$$\int_{-\epsilon}^{\epsilon} k\delta(t) dt = k \ \forall\ \epsilon > 0$$
Extending this concept of multiplication by a constant $k$ to multiplication by a function $f(t)$ yields the following:
$$\int_{-\epsilon}^{\epsilon} f(t) \delta(t) dt = f(0) \ \forall\ \epsilon > 0$$
or more generally:
$$\int_{T-\epsilon}^{T+\epsilon} f(t) \delta(t-T) dt = f(T) \ \forall\ \epsilon > 0$$
which is known as the sifting property of the Dirac delta and is used extensively in linear systems theory.
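As a small numerical illustration (my own sketch, not part of the original answer), one can approximate $k\,\delta(t-T)$ by a narrow Gaussian of area $k$ and check that the integral against a test function tends to $k\,f(T)$:

```python
import numpy as np

def narrow_gaussian(t, T, eps):
    """Unit-area Gaussian centred at T; tends to delta(t - T) as eps -> 0."""
    return np.exp(-((t - T) ** 2) / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos            # any smooth test function
k, T = 2.5, 0.7       # scaling constant and shift (arbitrary example values)
t = np.linspace(-10, 10, 200001)
dt = t[1] - t[0]

for eps in [1e-1, 1e-2, 1e-3]:
    approx = np.sum(f(t) * k * narrow_gaussian(t, T, eps)) * dt
    print(eps, approx, k * f(T))   # the integral approaches k * f(T) = 2.5 * cos(0.7)
```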
-
Remark that the Dirac delta is typically defined as the limit of a sequence of functions with unit area and support that shrinks to 0. – thang Jan 22 at 19:42
http://www.abstractmath.org/Word%20Press/?tag=expression
# Gyre&Gimble: posts about math, language and other things that may appear in the wabe
## Algebra is a difficult foreign language
2012/08/15 — SixWingedSeraph
Note: This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.
## Algebra
In a previous post, I said that the symbolic language of mathematics is difficult to learn and that we don't teach it well. (The symbolic language includes as a subset the notation used in high school algebra, precalculus, and calculus.) I gave some examples in that post but now I want to go into more detail. This discussion is an incomplete sketch of some aspects of the syntax of the symbolic language. I will write one or more posts about the semantics later.
### The languages of math
First, let's distinguish between mathematical English and the symbolic language of math.
• Mathematical English is a special register or jargon of English. It has not only its special vocabulary, like any jargon, but also uses ordinary English words such as "If…then", "definition" and "let" in special ways.
• The symbolic language of math is a distinct, special-purpose written language which is not a dialect of the English language and can in fact be read by mathematicians with little knowledge of English.
• It has its own symbols and rules that are quite different from spoken languages.
• Simple expressions can be pronounced, but complicated expressions may only be pointed to or referred to.
• A mathematical article or book is typically written using mathematical English interspersed with expressions in the symbolic language of math.
### Symbolic expressions
A symbolic noun (logicians call it a term) is an expression in the symbolic language that names a number or other mathematical object, and may carry other information as well.
• "3" is a noun denoting the number 3.
• "$\text{Sym}_3$" is a noun denoting the symmetric group on 3 letters.
• "$2+1$" is a noun denoting the number 3. But it contains more information than that: it describes a way of calculating 3 as a sum.
• "$\sin^2\frac{\pi}{4}$" is a noun denoting the number $\frac{1}{2}$, and it also describes a computation that yields the number $\frac{1}{2}$. If you understand the symbolic language and know that $\sin$ is a numerical function, you can recognize "$\sin^2\frac{\pi}{4}$" as a symbolic noun representing a number even if you don't know how to calculate it.
• "$2+1$" and "$\sin^2\frac{\pi}{4}$" are said to be encapsulated computations.
• The word "encapsulated" refers to the fact that to understand what the expressions mean, you must think of the computation not as a process but as an object.
• Note that a computer program is also an object, not a process.
• "$a+1$" and "$\sin^2\frac{\pi x}{4}$" are encapsulated computations containing variables that represent numbers. In these cases you can calculate the value of these computations if you give values to the variables.
A symbolic statement is a symbolic expression that represents a statement that is either true or false or free, meaning that it contains variables and is true or false depending on the values assigned to the variables.
• $\pi\gt0$ is a symbolic assertion that is true.
• $\pi\lt0$ is a symbolic assertion that it is false. The fact that it is false does not stop it from being a symbolic assertion.
• $x^2-5x+4\gt0$ is an assertion that is true for $x=5$ and false for $x=1$.
• $x^2-5x+4=0$ is an assertion that is true for $x=1$ and $x=4$ and false for all other numbers $x$.
• $x^2+2x+1=(x+1)^2$ is an assertion that is true for all numbers $x$.
### Properties of the symbolic language
The constituents of a symbolic expression are symbols for numbers, variables and other mathematical objects. In a particular expression, the symbols are arranged according to conventions that must be understood by the reader. These conventions form the syntax or grammar of symbolic expressions.
The symbolic language has been invented piecemeal by mathematicians over the past several centuries. It is thus a natural language and like all natural languages it has irregularities and often results in ambiguous expressions. It is therefore difficult to learn and requires much practice to learn to use it well. Students learn the grammar in school and are often expected to understand it by osmosis instead of by being taught specifically. However, it is not as difficult to learn well as a foreign language is.
In the basic symbolic language, expressions are written as strings of symbols.
• The symbolic language gives (sometimes ambiguous) meaning to symbols placed above or below the line of symbols, so the strings are in some sense more than one dimensional but less than two-dimensional.
• Integral notation, limit notation, and others, are two-dimensional enough to have two or three levels of symbols.
• Matrices are fully two-dimensional symbols, and so are commutative diagrams.
• I will not consider graphs (in both senses) and geometric drawings in this post because I am not sure what I want to write about them.
## Syntax of the language
One of the basic methods of the symbolic language is the use of constructors. These can usually be analyzed as functions or operators, but I am thinking of "constructor" as a linguistic device for producing an expression denoting a mathematical object or assertion. Ordinary languages have constructors, too; for example "-ness" makes a noun out of a verb ("good" to "goodness") and "and" forms a grouping ("men and women").
### Special symbols
The language uses special symbols both as names of specific objects and as constructors.
• The digits "0", "1", "2" are named by special symbols. So are some other objects: "$\emptyset$", "$\infty$".
• Certain verbs are represented by special symbols: "$=$", "$\lt$", "$\in$", "$\subseteq$".
• Some constructors are infixes: "$2+3$" denotes the sum of 2 and 3 and "$2-3$" denotes the difference between them.
• Others are placed before, after, above or even below the name of an object. Examples: $a'$, which can mean the derivative of $a$ or the name of another variable; $n!$ denotes $n$ factorial; $a^\star$ is the dual of $a$ in some contexts; $\vec{v}$ constructs a vector whose name is "$v$".
• Letters from other alphabets may be used as names of objects, either defined in the context of a particular article, or with more nearly global meaning such as "$\pi$" (but "$\pi$" can denote a projection, too).
This is a lot of stuff for students to learn. Each symbol has its own rules of use (where you put it, which sort of expression you may use it with, etc.). And the meaning is often determined by context. For example $\pi x$ usually means $\pi$ multiplied by $x$, but in some books it can mean the function $\pi$ evaluated at $x$. (But this is a remark about semantics — more in another post.)
### "Systematic" notation
• The form "$f(x)$" is systematically used to denote the value of a function $f$ at the input $x$. But this usage has variations that confuse beginning students:
• "$\sin\,x$" is more common than "$\sin(x)$".
• When the function has just been named as a letter, "$f(x)$" is more common than "$fx$", but many authors do use the latter.
• Raising a symbol after another symbol commonly denotes exponentiation: "$x^2$" denotes $x$ times $x$. But it is used in a different meaning in the case of tensors (and elsewhere).
• Lowering a symbol after another symbol, as in "$x_i$" may denote an item in a sequence. But "$f_x$" is more likely to denote a partial derivative.
• The integral notation is quite complicated. The expression \[\int_a^b f(x)\,dx\] has three parameters, $a$, $b$ and $f$, and a bound variable $x$ that specifies the variable used in the formula for $f$. Students gradually learn the significance of these facts as they work with integrals.
### Variables
Variables have deep problems concerned with their meaning (semantics). But substitution for variables causes syntactic problems that students have difficulty with as well.
• Substituting $4$ for $x$ in the expression $3+x$ results in $3+4$.
• Substituting $4$ for $x$ in the expression $3x$ results in $12$, not $34$.
• Substituting "$y+z$" for $x$ in the expression $3x$ results in $3(y+z)$, not $3y+z$. Some of my calculus students, in performing this substitution, would write $3\,\,y+z$, using a space to separate. The rules don't allow that, but I think it is a perfectly natural mistake.
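To make the substitution rules concrete, here is a minimal sketch (my own, not the author's) using the Python library SymPy, which substitutes into the expression tree rather than into the string of symbols:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

expr = 3 * x                  # the expression tree Mul(3, x), not the string "3x"
print(expr.subs(x, 4))        # 12 -- not the string "34"
print(expr.subs(x, y + z))    # mathematically 3*(y + z), i.e. 3*y + 3*z -- not 3*y + z
print((3 + x).subs(x, 4))     # 7, the evaluated form of 3 + 4
```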
### Using expressions and writing about them
• If I write "If $x$ is an odd integer, then $3+x$ is odd", then I am using $3+x$ in a sentence. It is a noun denoting an unspecified number which can be constructed in a specified way.
• When I mention substituting $4$ for $x$ in "$3+x$", I am talking about the expression $3+x$. I am not writing about a number, I am writing about a string of symbols. This distinction causes students major difficulties and teachers hardly ever talk about it.
• In the section on variables, I wrote "the expression $3+x$", which shows more explicitly that I am talking about it as an expression.
• Note that quotes in novels don't mean you are talking about the expression inside the quotes, it means you are describing the act of a person saying something.
• It is very common to write something like, "If I substitute $4$ for $x$ in $3x$ I get $3 \times 4=12$". This is called a parenthetic assertion, and it is literally nonsense (it says I get an equation).
• If I pronounce the sentence "We know that $x\gt0$", I pronounce "$x\gt0$" as "$x$ is greater than zero". If I pronounce the sentence "For any $x\gt0$ there is $y\gt0$ for which $x\gt y$", then I pronounce the expression "$x\gt0$" as "$x$ greater than zero". This is an example of context-sensitive pronunciation.
• There is a lot more about parenthetic assertions and context-sensitive pronunciation in More about the languages of math.
## Conclusion
I have described some aspects of the syntax of the symbolic language of math. Learning that syntax is difficult and requires a lot of practice. Students who manage to learn the syntax and semantics can go on to learn further math, but students who don't are forever blocked from many rewarding careers. I heard someone say at the MathFest in Madison that about 25% of all high school students never really understand algebra. I have only taught college students, but some students (maybe 5%) who get into freshman calculus in college are weak enough in algebra that they cannot continue.
I am not proposing that all aspects of the syntax (or semantics) be taught explicitly. A lot must be learned by doing algebra, where they pick up the syntax subconsciously just as they pick up lots of other behavior-information in and out of school. But teachers should explicitly understand the structure of algebra at least in some basic way so that they can be aware of the source of many of the students' problems.
It is likely that the widespread use of computers will allow some parts of the symbolic language of math to be replaced by other methods such as using Excel or some visual manipulation of operations as suggested in my post Mathematical and linguistic ability. It is also likely that the symbolic language will gradually be improved to get rid of ambiguities and irregularities. But a deliberate top-down effort to simplify notation will not succeed. Such things rarely succeed.
## References
• Communicating in the language of mathematics, in IAE-Pedia.
• Handbook of mathematical discourse, by Charles Wells. (Also available online).
• The language of mathematics, by Warren Esty.
• Mathematical discourse: Language, Symbolism and Visual Images, by Kay O'Halloran.
• Mathematical and linguistic ability (previous post)
• Pages from abstractmath.org
## A visualization of a computation in tree form
2012/06/28 — SixWingedSeraph
To manipulate the demo below, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website.
This demonstration shows the step by step computation of the value of the expression $3x^2+2(1+y)$ shown as a tree. By moving the first slider from right to left, you go through the six steps of the computation. You may select the values of $x$ and $y$ with the second and third sliders. If you click on the plus sign next to a slider, a menu opens up that allows you to make the slider move automatically, shows the values, and other things.
Note that subtrees on the same level are evaluated left to right. Parallel processing would save two steps.
The code for this demo is in the file Live evaluation of expressions in TreeForm 3. The code is ad-hoc. It might be worthwhile for someone to design a package that produces this sort of tree for any expression.
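For readers without Mathematica, a rough equivalent of the tree view can be had with Python's standard `ast` module (my own sketch, not the demo's code); it prints the expression tree of $3x^2+2(1+y)$ and then evaluates it for chosen values of $x$ and $y$:

```python
import ast

src = "3*x**2 + 2*(1 + y)"
tree = ast.parse(src, mode="eval")

# The nested structure mirrors the tree in the demo: a top-level sum whose
# children are the products 3*x**2 and 2*(1 + y).
print(ast.dump(tree.body, indent=2))

# Evaluating the tree for particular values of the variables:
print(eval(compile(tree, "<expr>", "eval"), {"x": 2, "y": 3}))   # 3*4 + 2*4 = 20
```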
A previous post related to this post is Making visible the abstraction in algebraic notation.
http://mathematica.stackexchange.com/questions/14610/filtering-beat-to-beat-heart-rate-data?answertab=active
# Filtering beat-to-beat heart rate data
From an experiment, I have a dataset of beat-to-beat heart rate data: a list of the time between each heart beat in [ms]. The data is measured using an infrared optic sensor at the finger tip. The sensor frequently misinterprets a slight movement of the finger as an heart beat. Data therefore often looks somewhat like this:
````{1000, 1000, 1000, 1000, 500, 500, 1000, 1000, 1000, 600, 400, 1000}
````
In this example, one can easily see that the 5th and 6th elements should be combined into one; same for the 10th and 11th. However, in real life the data looks more like this:
````data = {981, 870, 1099, 1105, 650, 397, 920, 917, 1015, 1085, 210, 344, 457,
950}
````
where the 5-6 (`650, 397`) and 11-12-13 (`210, 344, 457`) should be taken together. It is easy to just delete the incorrect data by using something like:
````DeleteCases[data,
x_ /; x < Mean[data] - StandardDeviation[data] ||
x > Mean[data] + StandardDeviation[data]]
````
...but I want to make a function that recognizes when multiple elements should be added together into one.
One could just add runs of two, three or four consecutive elements (`length`+1 elements for a given `length`) and select the `Cases` where the result lies (for example) in the range `Mean[data]±StandardDeviation[data]`:
````length = 1;
Position[Total[data[[# ;; # + length]]] & /@
Range[Length[data] - length],
x_ /; x > Mean[data] - StandardDeviation[data] &&
x < Mean[data] + StandardDeviation[data]]
````
Result:
````{{5}, {11}, {12}}
````
This gives me an idea of where the incorrect data is. Unfortunately, after having this result, I don't know what to do with it... For example, I get confused by the fact that elements 11-12-13 return 2 cases of incorrect data when I use `length=1`. And maybe there are more (simple) ways to filter this data.
Question: can anyone give me a kick-start?
Edit: You can download an example of actual data here. Just `Flatten[Import[filename,"Table"]]`
-
Perhaps fortuitously `Split` works on your small example: `Split[data, Abs[#2 - #1] < 300 &]`. However, more data is required to test how robust a simple, single threshold to `Split` is... – KennyColnago Nov 14 '12 at 21:58
Any chance this experiment attempts to measure heart rate variability? Just looking into something along these lines myself. – Jagra Nov 14 '12 at 22:09
@KennyColnago: I don't quite understand what your code does, but it works for the small example. When the mean of the data shifts to (say) 1500, I have to change the `300` to something else. I think it's difficult to implement it in a function that handles lots of different data. And I added a link to some actual data. – A. Goossens Nov 14 '12 at 22:14
@Jagra: indeed, doing a little research on heart rate variability and training load! I learned that almost everyone in this field just deletes the incorrect data, but to me that seems not to be the way to go... – A. Goossens Nov 14 '12 at 22:17
What if one of the subjects has arrhythmia? (Speaking from experience, the beat can occasionally get weirdly irregular.) – Szabolcs Nov 14 '12 at 22:54
## 4 Answers
You are looking for ideas, so I will venture a partial solution in the hope it might inspire something useful. The idea presented here is to exploit a statistical model of the heartbeat intervals as a way to test the goodness of any attempted clustering of the data.
The approach is general, because it is based on maximum likelihood methods, but I will illustrate it for a particularly simple, tractable, and (possibly) applicable case. (I hold out no hope that the following model is particularly correct; I only maintain it may be a good enough approximation to afford some insight and perhaps improve solutions to the problem.)
Let us suppose that the heartbeat intervals are independent and identically distributed according to some member $f_\theta$ of a parametric family, such as a Normal distribution (where $\theta=(\mu, \sigma)$ gives the mean and standard deviation). Let us also suppose that, independent of that process, there is another parameterized discrete random variable with values in the nonnegative integers giving the number of places at which each interval is broken. Let its probability function be $p_\lambda$: presumably, its expectation is close to $0$. Finally, let us suppose that conditional on these two processes, a third distribution governs the proportions in which the intervals are broken. Let its probability function be $u_\phi$.
When $f$ is Normal, $p$ is Poisson, and $u$ is uniform, the problem becomes analytically tractable. (Such tractability is not necessary, but it speeds the execution of the program and can provide insight into what's going on.) To write the log likelihood--which we intend to maximize--suppose that the data have already been grouped into sequences corresponding to a single beat. The contribution to the log likelihood from such a group $(x_i)$, $i=1,2,\ldots,k$, is
$$\Lambda = \Bigl[-\frac{(\sum x_i - \mu)^2}{2 \sigma^2} - \frac{1}{2} \log(2 \pi \sigma^2)\Bigr] + [-\lambda + k \log(\lambda) - \log(k!)] + [\log(k!)].$$
From left to right, the terms are grouped according to the contributions from the Normal distribution, the Poisson, and the uniform. Contingent on this grouping, closed formulas for values of $\mu, \sigma$, and $\lambda$ which maximize the log likelihood can be derived (and are simple): $\hat{\mu}$ is the mean of the group sums, $\hat{\lambda}$ is $1$ less than the ratio of the number of observations to the number of groups, and $\hat{\sigma}$ is the standard deviation of the group sums about $\hat{\mu}$.
To enable automatic identification of heartbeats, it may be best to represent the groups of data by means of an indicator (0-1) vector $y$: the cumulative sums of $y$ designate the groups. Thus, $y$ is $1$ at the start of each group of readings and is $0$ for intermediate readings. Equivalently, $y$ is the indicator of the actual beats. Let $\Lambda^*(y)$ designate the maximized log likelihood for the grouping given by $y$.
What we need is a way to find such a $y$ for which $\Lambda^*(y)$ is as large as possible. This is a nonlinear binary optimization program. It seems a little tricky, but there's a natural underlying structure to the problem making it feasible. Generally, many heartbeats can be clearly identified using heuristics such as those suggested in other answers to the question. All we have to do is vary $y$ slightly around the endpoints of some of the more problematic groupings.
Let's see how this might play out in practice. Begin with the sequence of $215$ observations:
````data = Flatten[Import["https://dl.dropbox.com/u/1989758/beattobeatdata.txt", "Table"]];
ListPlot[data, PlotRange -> {Full, Full}, AxesOrigin -> {0, 0},
PlotStyle -> PointSize[0.0125], AxesLabel -> {"Index", "Interval"}]
````
A break in its histogram suggests a starting threshold for identifying overly short intervals:
````Histogram[data]
````
Somewhere around $550$ will do. We take this as an initial indicator and, preparatory to the next step, save it:
````y = ConstantArray[1, Length[data]];
y[[ Flatten[Position[data, a_ /; a < 550]]]] = 0;
x = y;
````
Before going on, we need to be able to maximize $\Lambda(y)$. This function returns an array giving the best value of the log likelihood and the associated parameter values, $\{\Lambda^*(y), \{\{\hat{\mu}, \hat{\sigma}^2\}, \hat{\lambda}\}\}$.
````maxLogL[data_, x_] := Module[{logL, k = Total[x], n = Length[x], \[Lambda], \[Mu], \[Sigma]2},
\[Lambda] = n/k - 1;
\[Mu] = Total[data]/k;
\[Sigma]2 = Total[(Total /@ (First /@ # & /@
GatherBy[{data, Accumulate[x]}\[Transpose], Last]) - \[Mu])^2] / k;
{f[data, {x, {\[Mu], \[Sigma]2}, \[Lambda]}], {{\[Mu], \[Sigma]2}, \[Lambda]}}
];
````
It calls `f` to compute $\Lambda^*(y)$:
````f[data_, {x_, {\[Mu]_, \[Sigma]2_}, \[Lambda]_}] := Module[{logL},
logL = Function[{z}, -(1/2.) (Total[z] - \[Mu])^2 / \[Sigma]2 - (1/2.) Log[2. \[Pi] \[Sigma]2]
- \[Lambda] + (Length[z] - 1) Log[\[Lambda]]];
Total[logL /@ (First /@ # & /@ GatherBy[{data, Accumulate[x]}\[Transpose], Last])]
];
````
The maximum log likelihood associated with the initial value of $y$ equals $-1293.79$:
````maxLogL[data, y] // N
````
{-1293.79, {{898.461, 12047.1}, 0.0539216}}
Let's try to improve this by changing just one entry in $y$ at a time:
````For[y = x; lMax = maxLogL[data, y] // First; i = 1, i <= Length[x], i++,
y[[i]] = 1 - x[[i]];
lMax0 = First[maxLogL[data, y]];
{lMax, y[[i]]} = If[ lMax0 < lMax, {lMax, x[[i]]}, {lMax0, y[[i]]}]
];
lMax
````
-1172.63
That's a tremendous improvement in $\Lambda^*$ from $-1293.79$ to $-1172.63$. (Increases of $2$ or so are often considered "significant." Remember, these numbers are natural logarithms.) By inspection of the data (using plots and tables), we can do a little better still:
````y[[147]] = 1; y[[148]] = 0; maxLogL[data, y] // N
````
{-1142.2, {{889.738, 2671.79}, 0.0436893}}
The large increase of $30.4$ in $\Lambda^*$ demonstrates this change in the clustering was worthwhile.
Let's take a look at the results by plotting the current estimate of the heartbeats represented by $y$:
````ListPlot[Total /@ (First /@ # & /@
GatherBy[{data, Accumulate[y]}\[Transpose], Last]),
PlotRange -> {Full, Full}, AxesOrigin -> {0, 0},
PlotStyle -> PointSize[0.0125], AxesLabel -> {"Beat", "Interval"}]
````
It is wise to explore the log likelihood function. The mean of $889.738$ looks solid, so let's draw contours of the log likelihood by varying the other two parameters, $\sigma$ and $\lambda$:
````ContourPlot[f[data, {y, {889.738, \[Sigma]^2}, \[Lambda]}],
 {\[Lambda], 0.02, 0.08}, {\[Sigma], 46, 58},
 Epilog -> {Red, PointSize[0.015], Point[{0.0436893, Sqrt[2671.79]}]}]
````
The contour interval is $1$. We can therefore expect that this plot shows, approximately, a $95$% contour ellipse for $(\lambda, \sigma)$. The conclusion is that:
• The mean heartbeat interval is $889.7$. It is likely between $882$ and $898$ (as indicated by similar contour plots of $\mu$ versus $\sigma$ and $\mu$ versus $\lambda$, not shown here).
• The standard deviation of the intervals is $51.7 = \sqrt{2671.79}$ and is likely between $46$ and $58$ or so.
• The rate at which beats are interrupted is $\hat{\lambda} = 0.044$ and is likely between $0.02$ and $0.08$ or so.
There are some outlying data around indexes $50$ to $70$ in the raw data, as the previous list plot shows: this is a clear violation of the parametric assumptions of these calculations. Nevertheless, it appears they have done a good job of characterizing the data overall and of guiding us to a good solution of the original problem of clustering the data and classifying them into heartbeats and non-heartbeats.
There are obvious improvements to be made. The first is that this probability model is obviously deficient: any anomalies in the data are likely to be positively correlated; the variation of heartbeat intervals is not Normal; the mean interval might change over time. Introducing parameters to handle these complications is a matter of computational complexity but creates no new conceptual or computational problems.
The second is that I have not proposed an automated way to find $y$. As far as I can tell, Mathematica has no built-in procedure to handle this kind of optimization. It should succumb easily to a genetic algorithm, simulated annealing, or perhaps even a dynamic program, but I haven't the time to do that coding.
(My code is awfully quick and dirty too, but at this stage that is of little import.)
I have offered this account because the idea of using a likelihood maximizer to help evaluate possible solutions ought to be applicable no matter what solution method is ultimately adopted. By using a probability model it will be less ad hoc than most attempts and can even provide some insight into the heartbeat process itself.
-
Wow! That's a bit more than I expected, and more than my mind can grasp... I don't understand most of your statistics, but I think I get the basic idea. Thanks! I will probably reread this a couple of times in the next days... – A. Goossens Nov 15 '12 at 10:21
The simplest thing would probably be to put all heart beats in a list and perform a Fourier transform:
````beats = ConstantArray[0., Total[data]];
beats[[Accumulate[data]]] = 1;
beats = GaussianFilter[beats, 100, Padding -> "Cyclic"];
````
Which gives you an array like this (I've picked a section of your data with a few extra pulses):
````ft = Fourier[GaussianFilter[beats, 100, Padding -> "Cyclic"]];
ft[[1]] = 0;
maxFrequency = First[Flatten[Position[Abs[ft], Max[Abs[ft]]]]]-1;
ListPlot[Abs[ft[[;; 300]]], PlotRange -> All, Filling -> 0]
````
The Fourier transform looks like this, with a clear peak at 38, so there are probably 38 "true" heart beats:
You can visualize the FT result:
````phase = ft[[maxFrequency + 1]]/Norm[ft[[maxFrequency + 1]]];
Show[
Plot[Im[phase*Exp[I*(k - 1)*(maxFrequency)/Length[ft]*2 \[Pi]]], {k, 1, Length[beats]}],
Graphics[{Red, Point[{#, 1} & /@ Accumulate[data]]}],
AspectRatio -> 1/10, ImageSize -> 800]
````
The phase is not perfect, probably because the heart rate is not perfectly constant over the whole time.
The advantage of this method is that it is very simple and very reliable if there are extra beats or missing (undetected) beats. Maybe you can use it as a first step to get a reliable estimate of the heart rate, then use that to filter extra/missing pulses.
-
Fourier transformations are a good way to grasp the main properties of a heart beat signal (a good part of the heart rate variability analysis in science is done using Fourier transforms), but I think using it to filter or estimate the data removes too many of the characteristics of the signal. Can you confirm this? – A. Goossens Nov 15 '12 at 11:21
I was trying to do something similar but you beat me. However, you can save a line by using a `SparseArray` to make the beats array: `beats = SparseArray[Thread[Accumulate@data -> 1]]` – s0rce Nov 16 '12 at 14:21
Using your values for `data`, this seems to work:
````error = StandardDeviation[data];
data //. {a___, b_, c__, d___} /; Abs[b + c - Median[data]] < error :> {a, b + c, d}
````
I used `StandardDeviation[data]` because that's what you used, but you can put in whatever error bound you think is best there. Also note that I replaced `Mean[data]` with `Median[data]`, upon reviewing Rahul N.'s comment because I remembered that `Median` is more representative of the center of a skewed distribution than the `Mean`.
Additional edit
Rahul N. also suggested an "error" value which is the following:
````error[b_, c_] := Min[Abs[b - Median[data]], Abs[c - Median[data]]]
data //. {a___, b_, c__, d___} /; Abs[b + c - Median[data]] < error[b,c] :> {a, b + c, d}
````
This minimizes deviation from the median by combining terms.
I'd like to note that using a constant error value speeds up the whole process twofold (when using your linked data set). However, if speed was a concern (which you didn't mention it was), we'd have to get less elegant and avoid pattern matching.
-
Replace the condition with `Abs[b + c - Median[data]] < Min[Abs[b - Median[data]], Abs[c - Median[data]]]` and you don't even need to choose an error bound. – Rahul Narain Nov 15 '12 at 1:48
Thanks a lot! Simple method and very good results. Could you explain why for this data the Median is better than the Mean? I understand that the data is skewed, but that's due to the errors, am I right? 'Perfect' data (without errors) will have a normal distribution. – A. Goossens Nov 15 '12 at 10:24
My guess is Mean will average in lowliers whereas, assuming they are not terribly frequent, Median is likely to skip over them. – Daniel Lichtblau Nov 15 '12 at 16:07
@A.Goossens I just remembered that from stats class :). Median is approx. equal to mean for normal distributions anyway, so it's safe to go with the median. – VF1 Nov 15 '12 at 18:03
A bit crude, but it works. Idea: Find lowliers, then merge those that are contiguous.
````lowposns = Flatten[Position[data, aa_ /; aa < 4/5*Max[data]]];
groups = Split[lowposns, #2 - #1 == 1 &];
regroup =
SortBy[Join[
Transpose[{Complement[Range[Length[data]], Flatten[groups]]}],
groups], #[[1]] &]
(* {{1}, {2}, {3}, {4}, {5, 6}, {7}, {8}, {9}, {10}, {11, 12, 13}, {14}} *)
newdata = Map[Total, Map[data[[#]] &, regroup]]
(* {981, 870, 1099, 1105, 1047, 920, 917, 1015, 1085, 1011, 950} *)
````
I'm sure there are better ways to code this.
-
Good approach, now I understand how to implement my original idea. Any idea why it doesn't give good results on my actual data? Could it be the definition of the 'lowliers'? – A. Goossens Nov 15 '12 at 10:31
http://www.reference.com/browse/kronecker-delta
# Kronecker delta
In mathematics, the Kronecker delta or Kronecker's delta, named after Leopold Kronecker (1823-1891), is a function of two variables, usually integers, which is 1 if they are equal, and 0 otherwise. So, for example, $\delta_{12} = 0$, but $\delta_{33} = 1$. It is written as the symbol $\delta_{ij}$, and treated as a notational shorthand rather than as a function.
$$\delta_{ij} = \begin{cases} 1, & \mbox{if } i=j \\ 0, & \mbox{if } i \ne j \end{cases}$$
## Alternate notation
Using the Iverson bracket:
$$\delta_{ij} = [i=j].$$
Often, the notation $\delta_i$ is used.
$$\delta_{i} = \begin{cases} 1, & \mbox{if } i=0 \\ 0, & \mbox{if } i \ne 0 \end{cases}$$
In linear algebra, it can be thought of as a tensor, and is written $\delta^i_j$.
## Digital signal processing
Similarly, in digital signal processing, the same concept is represented as a function on $\mathbb{Z}$ (the integers):
$$\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0.\end{cases}$$
The function is referred to as an impulse, or unit impulse. And when it stimulates a signal processing element, the output is called the impulse response of the element.
## Properties of the delta function
The Kronecker delta has the so-called sifting property that for $j\in\mathbb{Z}$:
$$\sum_{i=-\infty}^\infty a_i \delta_{ij} = a_j,$$
and if the integers are viewed as a measure space, endowed with the counting measure, then this property coincides with the defining property of the Dirac delta function
$$\int_{-\infty}^\infty \delta(x-y)f(x)\, dx = f(y),$$
and in fact Dirac's delta was named after the Kronecker delta because of this analogous property. In signal processing it is usually the context (discrete or continuous time) that distinguishes the Kronecker and Dirac "functions". And by convention, $\delta(t)$ generally indicates continuous time (Dirac), whereas arguments like $i$, $j$, $k$, $l$, $m$, and $n$ are usually reserved for discrete time (Kronecker). Another common practice is to represent discrete sequences with square brackets; thus: $\delta[n]$. It is important to note that the Kronecker delta is not the result of sampling the Dirac delta function.
The Kronecker delta is used in many areas of mathematics.
### Linear algebra
In linear algebra, the identity matrix can be written as $(\delta_{ij})_{i,j=1}^n$.
If it is considered as a tensor, the Kronecker tensor, it can be written $\delta^i_j$ with a covariant index $j$ and contravariant index $i$.
This (1,1) tensor represents:
• the identity matrix, considered as a linear mapping
• the trace
• the inner product $V^* \otimes V \to K$
• the map $K \to V^* \otimes V$, representing scalar multiplication as a sum of outer products
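A small numeric illustration of these roles (my own sketch, not part of the original entry), with NumPy index notation standing in for the tensor indices:

```python
import numpy as np

delta = np.eye(3)                          # (delta_ij) as the 3x3 identity matrix
v = np.array([2.0, -1.0, 5.0])

print(np.einsum("ij,j->i", delta, v))      # identity as a linear map: delta^i_j v^j = v^i
print(np.einsum("ii", delta))              # contraction delta^i_i = trace = 3 (the dimension)
print(np.einsum("ij,i,j->", delta, v, v))  # inner product: delta_ij v^i v^j = v . v = 30
```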
## Extensions of the delta function
In the same fashion, we may define an analogous, multi-dimensional function of many variables
$$\delta^{j_1 j_2 \dots j_n}_{i_1 i_2 \dots i_n} = \prod_{k=1}^n \delta_{i_k j_k}.$$
This function takes the value 1 if and only if all the upper indices match the corresponding lower ones, and the value zero otherwise.
## Integral representations
For any integer $n$, using a standard residue calculation we can write an integral representation for the Kronecker delta as
$$\delta_{x,n} = \frac{1}{2\pi i} \oint z^{x-n-1}\, dz,$$
where the contour of the integral goes counterclockwise around zero. This representation is also equivalent to
$$\delta_{x,n} = \frac{1}{2\pi} \int_0^{2\pi} e^{i(x-n)\varphi}\, d\varphi,$$
by a rotation in the complex plane.
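As a quick numerical check (my addition, not part of the original entry), the second representation can be verified by discretizing the integral over $\varphi$; for integer $x$ and $n$ the grid average below reproduces the Kronecker delta up to floating-point rounding.

```python
import numpy as np

def kronecker_via_integral(x, n, samples=20000):
    """Approximate (1/(2*pi)) * integral over [0, 2*pi) of exp(i*(x - n)*phi) dphi
    by averaging the integrand over an equally spaced grid of angles."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return np.exp(1j * (x - n) * phi).mean().real

for x, n in [(3, 3), (3, 5), (-2, -2), (0, 7)]:
    print(x, n, round(kronecker_via_integral(x, n), 10))  # approximately 1 when x == n, else 0
```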
http://physics.stackexchange.com/questions/32427/how-convincing-is-the-evidence-for-dark-matter-annihilation-at-130-gev-in-the-ga?answertab=active
# How convincing is the evidence for dark matter annihilation at 130 GeV in the galactic center from the Fermi Satellite data?
I listened to Christoph Weniger present his results at SLAC today. His paper is here: http://arxiv.org/abs/1204.2797, and a different analysis is here: http://arxiv.org/abs/1205.1045. The data seem convincing to me! Is this result consistent with theoretical expectations for DM candidates? In particular, is the reported estimate of the cross section for annihilation into photons consistent with estimated cross sections for the various WIMP dark matter candidate particles (like LSP dark matter candidates)? Are there any other reasonable astrophysical mechanisms that would produce this 130 GeV photon line?
The summary for the talk claims: Using 43 months of public gamma-ray data from the Fermi Large Area Telescope, we find in regions close to the galactic center at energies of 130 GeV a 4.6 sigma excess that is not inconsistent with a gamma-ray line from dark matter annihilation. When taking into account the look-elsewhere effect, the significance of the observed signature is 3.3 sigma. If interpreted in terms of dark matter particles annihilating into a photon pair, the observations imply a partial annihilation cross-section of about $10^{-27} cm^3 s^{-1}$ and a dark matter mass around 130 GeV.
-
1
You kinda answer your own question. One observation at better than 3 sigma. OK, so that's good for a first report (and very exciting), but nothing is nailed down. – dmckee♦ Jul 20 '12 at 3:10
2
I'm a little concerned that "How convincing is this breaking news" is a bit of a subjective discussion rather than a real question, but I'll let it go a bit to see if we get non-subjective answers and to allow input from interested users. – dmckee♦ Jul 20 '12 at 3:11
2
@dmckee The one particular piece of concrete information I don't know is whether the reported cross section is reasonable when compared to proposed DM candidates. I was hoping someone here could nail down that one concrete fact. If this reported cross section is many orders of magnitude too large, I would question it's validity, whereas if it is a reasonable cross section then I would think this evidence is more convincing. By the way, this is not brand new information - it has been reported since April/May 2012. – FrankH Jul 20 '12 at 3:18
2
Well, that would certainly be a nice piece of concrete, non-subjective, non-discussion. – dmckee♦ Jul 20 '12 at 3:20
2 Answers
Another very fresh paper presented at Dark Attack yesterday, one by Hektor et al.,
http://arxiv.org/abs/1207.4466
also claims that the signal is there – not only in the center of the Milky Way but also in other galactic clusters, at the same 130 GeV energy. This 3+ sigma evidence from clusters is arguably very independent. All these hints and several additional papers of the sort look very intriguing.
There is negative news, too. Fermi hasn't confirmed the "discovery status" of the line yet. Puzzles appear in detailed theoretical investigations, too. Cohen et al.,
http://arxiv.org/abs/1207.0800
claim that they have excluded the neutralino – the most widely believed identity of a WIMP – as the source, because the neutralino would produce additional traces in the data through processes involving other Standard Model particles, and these traces seem to be absent. The WIMP could be a different particle than the supersymmetric neutralino, of course.
Another paper also disfavors the neutralino, because the signal is claimed to require much higher cross sections than SUSY models predict:
http://arxiv.org/abs/1207.4434
But one must be careful and realize that the status of the "5 sigma discovery" here isn't analogous to the Higgs because in the case of the Higgs, the "canonical" null hypothesis without the Higgs is well-defined and well-tested. In this case, the 130-GeV-line-free hypothesis is much more murky. There may still exist astrophysical processes that tend to produce rather sharp peaks around 130 GeV even though there are no particle species of this mass. I think and hope it is unlikely but it hasn't really been excluded.
Everyone who studies these things in detail may want to look at the list (or contents) of all papers referring to Weniger's original observation – it's currently 33 papers:
http://inspirehep.net/search?ln=en&p=refersto%3Arecid%3A1110710
-
I'm reluctant to say much about this right now, right here, but: the cross section is a reasonable one for a loop annihilation to $\gamma\gamma$. From the model-building point of view the thing to worry about is that many DM models that predict such a loop would also predict much more frequent tree-level annihilation processes that are ruled out by the lack of continuum gamma rays from the same spot. From the astrophysical point of view the puzzle is why the gamma rays seem to come not quite from the galactic center but from a couple hundred parsecs away. And then there may be some other puzzling features in the data, in terms of similar lines being seen when looking in places where you don't expect to see dark matter.
Things will become much clearer over the next few months, if not faster, at which point more questions here might deserve longer answers.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943315327167511, "perplexity_flag": "middle"}
|
http://en.wikipedia.org/wiki/Best-first_search
|
# Best-first search
Graph and tree
search algorithms
Listings
Related topics
Best-first search is a search algorithm which explores a graph by expanding the most promising node chosen according to a specified rule.
Judea Pearl described best-first search as estimating the promise of node n by a "heuristic evaluation function $f(n)$ which, in general, may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and most important, on any extra knowledge about the problem domain."[1][2]
Some authors have used "best-first search" to refer specifically to a search with a heuristic that attempts to predict how close the end of a path is to a solution, so that paths which are judged to be closer to a solution are extended first. This specific type of search is called greedy best-first search.[2]
Efficient selection of the current best candidate for extension is typically implemented using a priority queue.
The A* search algorithm is an example of best-first search, as is B*. Best-first algorithms are often used for path finding in combinatorial search.
## Algorithm
```OPEN = [initial state]
while OPEN is not empty or until a goal is found
do
1. Remove the best node from OPEN, call it n.
2. If n is the goal state, backtrace path to n (through recorded parents) and return path.
3. Create n's successors.
4. Evaluate each successor, add it to OPEN, and record its parent.
done
```
Note that this version of the algorithm is not complete, i.e. it does not always find a possible path between two nodes even if there is one. For example, it gets stuck in a loop if it arrives at a dead end, that is a node with the only successor being its parent. It would then go back to its parent, add the dead-end successor to the `OPEN` list again, and so on.
The following version extends the algorithm to use an additional `CLOSED` list, containing all nodes that have been evaluated and will not be looked at again. As this will avoid any node being evaluated twice, it is not subject to infinite loops.
```OPEN = [initial state]
CLOSED = []
while OPEN is not empty
do
1. Remove the best node from OPEN, call it n, add it to CLOSED.
2. If n is the goal state, backtrace path to n (through recorded parents) and return path.
3. Create n's successors.
4. For each successor do:
a. If it is not in CLOSED: evaluate it, add it to OPEN, and record its parent.
b. Otherwise: change recorded parent if this new path is better than previous one.
done
```
Also note that the given pseudo code of both versions just terminates when no path is found. An actual implementation would of course require special handling of this case.
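For concreteness, here is a minimal Python sketch of the CLOSED-list version above, using a heapq-based priority queue keyed by the heuristic (so it behaves as a greedy best-first search). The caller supplies `successors` and `h`, and for brevity the re-parenting in step 4b is omitted:
```
import heapq
from itertools import count

def best_first_search(start, goal, successors, h):
    tie = count()                               # tie-breaker so nodes themselves are never compared
    open_heap = [(h(start), next(tie), start)]  # OPEN, ordered by heuristic value
    parents = {start: None}                     # records parents; also serves as CLOSED
    while open_heap:
        _, _, n = heapq.heappop(open_heap)      # 1. remove the best node from OPEN
        if n == goal:                           # 2. backtrace path through recorded parents
            path = []
            while n is not None:
                path.append(n)
                n = parents[n]
            return path[::-1]
        for s in successors(n):                 # 3./4. create and evaluate successors
            if s not in parents:                # skip nodes already evaluated
                parents[s] = n
                heapq.heappush(open_heap, (h(s), next(tie), s))
    return None                                 # OPEN exhausted: no path found
```
On a grid graph, for example, one might take `successors` to return the four neighbouring cells and `h` to be the Manhattan distance to the goal.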
## Greedy BFS
Using a greedy algorithm, expand the first successor of the parent. After a successor is generated:[4]
1. If the successor's heuristic is better than its parent, the successor is set at the front of the queue (with the parent reinserted directly behind it), and the loop restarts.
2. Else, the successor is inserted into the queue (in a location determined by its heuristic value). The procedure will evaluate the remaining successors (if any) of the parent.
## References
1. Pearl, J. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984. p. 48.
2. ^ a b Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2 . pp. 94 and 95 (note 3).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8982940912246704, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/129377-proving-convergent-seq-cauchy-seq.html
|
# Thread:
1. ## Proving convergent seq. is a cauchy seq.
I proved convergent sequences and Cauchy sequences.
How would I prove that every convergent sequence is a Cauchy sequence?
2. Originally Posted by summerset353
I proved convergent sequences and Cauchy sequences.
How would I prove that every convergent sequence is a Cauchy sequence?
Let $\varepsilon>0$ be given. There exists some $N\in\mathbb{N}$ such that $N\leqslant n\implies d(x_n,x)<\frac{\varepsilon}{2}$ and so $N\leqslant m,n\implies d(x_n,x_m)\leqslant d(x_n,x)+d(x_m,x)<\varepsilon$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364091157913208, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/103751/about-the-intrinsic-definition-of-the-weyl-group-of-complex-semisimple-lie-algebr/103770
|
## About the intrinsic definition of the Weyl group of complex semisimple Lie algebras
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
It may be an easy question for experts.
The definition of the Weyl group of a complex semisimple Lie algebra $\mathfrak{g}$ is well-known: we first $\textbf{choose}$ a Cartan subalgebra $\mathfrak{h}$ and obtain the root space decomposition. The Weyl group is then the group generated by the reflections corresponding to the roots.
Naively this definition depends on the choices of the Cartan subalgebra $\mathfrak{h}$. Of course we can prove that for different choices the resulting Weyl groups are isomorphic.
My question is: can we define the Weyl group intrinsically, so that we don't need to check that the definition is unambiguous?
One thought is: we have the abstract Cartan subalgebra $\mathfrak{H}:=\mathfrak{b}/[\mathfrak{b},\mathfrak{b}]$ of $\mathfrak{g}$ (which is in fact not a subalgebra of $\mathfrak{g}$). Can we define the Weyl group along these lines? Again, are there any references for this?
-
5
Don't you end up proving the result does not depend on the Borel you picked? – Mariano Suárez-Alvarez Aug 2 at 3:00
@ Mariano You are right. It can be proved that for two Borel subalgebras $\mathfrak{b}$ and $\mathfrak{b}'$, the resulting quotients $\mathfrak{b}/[\mathfrak{b}, \mathfrak{b}]$ and $\mathfrak{b}'/[\mathfrak{b}', \mathfrak{b}']$ are canonically isomorphic. It's in Representation Theory and Complex Geometry, Chapter 3, by Chriss/Ginzburg. Oops, there are still choices. – Zhaoting Wei Aug 3 at 6:12
1
@Zhaoting: I think all the approaches suggested in the answers tend to show the impossibility of giving the desired intrinsic definition "such that we don't need to check the unambiguity". Under the surface the conjugacy theorems and other scaffolding are concealed. – Jim Humphreys Aug 3 at 14:18
@Jim: I see. Thank you very much! – Zhaoting Wei Aug 4 at 5:25
## 4 Answers
Probably the earliest intrinsic definition of Weyl group occurs in section 1.2 of the groundbreaking paper "Representations of Reductive Groups Over Finite Fields" by Deligne and Lusztig (Ann. of Math. 103, 1976, available at JSTOR). This is done elegantly in the closely related but more general setting of a reductive algebraic group `$G$` over an arbitrary algebraically closed field (though their interest is mainly in prime characteristic). Letting `$X$` denote the set of all Borel subgroups of `$G$`, the set of `$G$`-orbits on `$X \times X$` provides a natural model for a universal Weyl group of `$G$` (or its Lie algebra).
[ADDED] In the algebraic group setting, this intrinsic definition depends just on knowing what a connected reductive (or semisimple) group is and what a Borel subgroup is (maximal closed connected solvable subgroup). But obviously one can't exploit the "Weyl group" without knowing more of the structure theory: conjugacy theorems, Bruhat decomposition. (Is it a group? finite?) In the easier characteristic 0 Lie algebra theory, where `$X$` becomes the set of Borel subalgebras (whose definition requires some theory) with conjugation action by the adjoint group, this abstract notion of "Weyl group" similarly needs unpacking. But the Deligne-Lusztig definition is a good conceptual one for their purposes and sneaks in the underlying set `$X$` of the flag variety of `$G$`. Any intrinsic definition of the Weyl group needs serious background in Lie theory.
In the treatment by Chriss and Ginzburg, even when one is primarily interested in the Lie algebra picture, the group in the background tends to play an important role. Indeed, in the early work of Borel and Chevalley on semisimple algebraic groups, the Weyl group appears most naturally in the guise of the finite quotient `$W_G(T) :=N_G(T)/T$` for a fixed maximal torus `$T$`. Then one sees `$W$` as generated by reflections relative to roots, etc. As in the parallel Lie algebra setting in characteristic 0, the maximal tori (or Cartan subalgebras) are all conjugate under the adjoint group action, but this falls short of giving an intrinsic definition of the sort provided by Deligne-Lusztig.
[Weyl himself gave the group an awkward name, but was mainly concerned with its use in the context of a compact Lie group. The notion basically originates earlier in the work of Cartan, but it took a while to see the root system and Weyl group as combinatorial objects including the Coxeter presentation of the group as a reflection group (carried over by Witt to Lie algebras).]
-
Of course, this elegant "intrinsic definition" rests on all of the usual conjugacy results, though it isn't clear what the OP is really seeking by asking for a definition which avoids the need to check "the unambiguity". – quasi-coherent Aug 2 at 12:34
@quasi (if I may call you that): See my added paragraph, where I emphasize that the definition itself uses no more than basic definitions. The price for that is having to figure out what it actually means in concrete terms; then you do need more theory. – Jim Humphreys Aug 2 at 17:36
@Jim, a related approach defining an abstract "Weyl group" for probably the most general category of groups that should have Weyl groups, appeared in the recent work by Bader and Furman related to Margulis' superrigidity, see for example homepages.math.uic.edu/~furman/preprints/…, and after your introduction, I see it inspired from the Deligne-Lusztig definition. – Asaf Aug 2 at 18:14
1
@Jim Maybe we can look at the set of $G$ -orbits of $X \times X$ and say that "this is the Weyl group". But can we define a multiplication just on this set of $G$-orbits? If we can, then this is what I am seeking for: an intrinsic definition of Weyl group. – Zhaoting Wei Aug 2 at 23:10
1
@Zhaoting: This is all worked out carefully by Deligne-Lusztig in their section 1.2. But I'd emphasize that it uses most of the deep structure theory (including conjugation theorems and Bruhat decomposition in the group version) to reach the intrinsic formulation. – Jim Humphreys Aug 3 at 14:14
show 1 more comment
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
Let $g_r$ be the set of regular semi-simple elements of the Lie algebra, and $\tilde g_r$ be the set of such elements together with a choice of Borel subalgebra containing them. The Weyl group is the group of deck transformations of the cover $\tilde {g}_r\to {g}_r$.
-
This is a great point! Maybe the remaining problem for me is whether we can relate this deck transformation group to the set of $G$-orbits on $X \times X$, as Jim Humphreys pointed out in his answer. – Zhaoting Wei Aug 2 at 23:13
1
@Zhaoting: Like the other intrinsic descriptions, this requires a lot of the structure theory to relate it to the concrete Weyl group attached to a Cartan subalgebra (or maximal torus) of a semisimple Lie algebra (or group). Here you also need regular elements (Kostant/Steinberg; cf. Bourbaki Ch. 7-8): regular semisimple elements are dense and each lies in exactly `$|W|$` Borel subalgebras (or subgroups), corresponding to positive systems of roots or Weyl chambers. – Jim Humphreys Aug 3 at 14:07
@Ben: This is a very nice topological viewpoint, which I guess goes back to work on the topology of compact Lie groups (Adams, Bott, Samelson, ...)? Is there a good reference for the translation to semisimple Lie algebras and Borel subalgebras? – Jim Humphreys Aug 3 at 14:11
Yes: this is the approach to defining the 'abstract Weyl group' introduced in "Representation Theory and Complex Geometry" by Chriss/Ginzburg on p. 135 (2nd Edition, Birkhauser).
-
I have heard that, originally, the Weyl group was designed (and worked out, e.g., by Chevalley) as some sort of Galois group, which would then be intrinsic.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9163345098495483, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/64995/list
|
## Return to Answer
1 [made Community Wiki]
I've been avoiding answering this question, as I think axiomatization of time is a rather fruitless activity. But, as no one else has mentioned this, I feel, unfortunately, compelled to post this answer.
The axiomatic approach to topological quantum field theory [Blanchet and Turaev] defines a topological quantum field theory,
Definition (TQFT): An $(n + 1)$-dimensional TQFT $(V,\tau)$ over a scalar field $k$ assigns to every closed oriented $n$-dimensional manifold $X$ a finite dimensional vector space $V(X)$ over $k$ and assigns to every cobordism $(M,X,Y)$ a $k$-linear map $\tau(M) = \tau(M,X,Y):V(X) \rightarrow V(Y)$.
In addition, the axiomatic approach to topological quantum field theory contains the normalization axiom,
Axiom (Normalization Axiom): For any n-dimensional manifold $X$, the linear map $\tau([0, 1] \times X) : V(X) \rightarrow V(X)$ is identity.
This normalization axiom is an axiomatization of time as it occurs in any diffeomorphism invariant theory.
In more detail, any theory that is diffeomorphism invariant is, in particular, invariant with respect to diffeomorphisms in the time direction $t'(t)$. The generator of time evolution is the Hamiltonian $H$. Thus, any state in the Hilbert space is invariant under the action of the Hamiltonian $H$. This is the exact content of the normalization axiom.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8964194655418396, "perplexity_flag": "head"}
|
http://mathhelpforum.com/discrete-math/165899-set-theory-functions-injectivity-surjectivity.html
|
Thread:
1. Set Theory: Functions (injectivity/surjectivity)
Suppose f : A -> B and g : B -> C are functions.
(a) Show that if f is surjective and g is not injective then g o f is not injective.
(Hint: draw pictures to get an idea.)
(b) Show that if f is not surjective and g is injective then g o f is not surjective.
hey, I have a general idea how to solve questions like this, but these ones seem more challenging than any I've done so far, so any help is really appreciated
2. Originally Posted by zukias
Suppose f : A -> B and g : B -> C are functions.
(a) Show that if f is surjective and g is not injective then g o f is not injective.
(Hint: draw pictures to get an idea.)
(b) Show that if f is not surjective and g is injective then g o f is not surjective.
From the given we can get the following:
$\left( {\exists b_1 \in B} \right)\left( {\exists b_2 \in B} \right)\left[ {b_1 \ne b_2 \wedge g(b_1 ) = g(b_2 )} \right]$.
ALSO $\left( {\exists a_1 \in A} \right)\left[ {f(a_1 ) = b_1 } \right] \wedge \left( {\exists a_2 \in A} \right)\left[ {f(a_2 ) = b_2 } \right]$.
There is an easy contradiction to injectivity there for $g\circ f$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080521464347839, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/4400/boy-born-on-a-tuesday-is-it-just-a-language-trick?answertab=active
|
# Boy Born on a Tuesday - is it just a language trick?
The following probability question appeared in an earlier thread:
I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
The claim was that it is not actually a mathematical problem and it is only a language problem.
If one wanted to restate this problem formally the obvious way would be like so:
Definition: Sex is defined as an element of the set $\{\text{boy},\text{girl}\}$.
Definition: Birthday is defined as an element of the set $\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\}$
Definition: A Child is defined to be an ordered pair: (sex $\times$ birthday).
Let $(x,y)$ be a pair of children,
Define an auxiliary predicate $H(s,b) :\!\!\iff s = \text{boy} \text{ and } b = \text{Tuesday}$.
Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$
I don't see any other sensible way to formalize this question.
To actually solve this problem now requires no thought (infact it is thinking which leads us to guess incorrect answers), we just compute
$$\begin{align*} & P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y)) \\ =& \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))} {P(H(x)\text{ or }H(y))} \\ =& \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\ =& \frac{\begin{align*} &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\ + &P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born on Tuesday}) \\ - &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday}) \\ \end{align*}} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\ =& \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7} {1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\ =& 13/27 \end{align*}$$
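As a quick arithmetic check of the last step, one can redo the computation with exact fractions (a minimal Python sketch; the variable names are our own):
```
from fractions import Fraction

half, seventh = Fraction(1, 2), Fraction(1, 7)
p_H = half * seventh                     # P(a given child satisfies H) = P(boy) * P(Tuesday)

numerator   = half*half*seventh + half*half*seventh - half*half*seventh*seventh
denominator = p_H + p_H - p_H * p_H

print(numerator / denominator)           # 13/27
```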
Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the questions which I just missed?
-
– Derek Jennings Sep 11 '10 at 7:20
1
– Derek Jennings Sep 11 '10 at 7:36
I was particularly hoping that people would directly address my derivation as written and the interpretations I used here. (Hence actually writing them out as opposed to just giving the value 13/27). – anon Sep 11 '10 at 7:38
## 9 Answers
There are even trickier aspects to this question. For example, what is the strategy of the guy telling you about his family? If he always mentions a boy first and not a daughter, we get one probability; if he talks about the sex of the first born child, we get a different probability. Your calculation makes a choice in this issue - you choose the version of "if the father has a boy and a girl, he'll mention the boy".
What I'm aiming at is this: the question is not well-defined mathematically. It has several possible interpretations, and as such the "problem" here is indeed one of language; or more correctly, the fact that a simple statement in English does not convey enough information to specify the precise model for the problem.
Let's look at a simplified version without days. The probability space for the make-up of the family is {BB, GB, BG, GG} (GB means "an older girl and a small boy", etc). We want to know what is $P(BB|A)$ where A is determined by the way we interpret the statement about the boys. Now let's look at different possible interpretations.
1) If there is a boy in the family, the statement will mention him. In this case A={BB,BG,GB} and so the probability is $1/3$.
2) If there is a girl in the family, the statement will mention her. In this case, since the statement talked about a boy, there are NO girls in the family. So A={BB} and so the probability is 1.
3) The statement talks about the sex of the firstborn. In this case A={BB,BG} and so the probability is $1/2$.
The bottom line: The statement about the family looks "constant" to us, but it must be looked as a function from the random state of the family - and there are several different possible functions, from which you must choose one otherwise no probabilistic analysis of the situation will make sense.
-
3
Nope. I'm saying that in order to give a probabilistic meaning to this statement, you need to attach probabilistic assumptions to it. In your case it seems you've chosen the assumption of "if there is a boy, a boy will be given in the statement". Other possible interpretations: "If there is a girl, a girl will be said" (so since a boy was said, there are no girls 100%), and "If there is a boy and a girl, one will be said at random" and "The sex of x is said" and so on. Forget about days - try to do the probability for these cases and see what happens. – Gadi A Sep 11 '10 at 6:11
3
This is of course correct, but you must also take into account the question of how the statement "I have a boy" comes to be. I've tried to elaborate in my answer. – Gadi A Sep 11 '10 at 6:40
1
The statement (ignoring the day red herring) is "I have two children. One is a boy." As I said before, this statement should be considered as part of the probabilistic scenario otherwise the problem is simply not well-defined. – Gadi A Sep 11 '10 at 7:37
1
What I meant was that adding the day does not affect the "strangeness" of the situation. However, I'm not sure if it's the strangeness that interests you. The main point is that it's called "a language problem" because different mathematical interpretations of the same English statement yield different mathematical models, and hence different mathematical results - and so for every intuition about the problem there is a interpretation in which the result is counter-intuitive. – Gadi A Sep 11 '10 at 11:52
2
-1 As long as we agree "One is a boy born on a Tuesday" means "At least one is a boy born on a Tuesday", then the answer is 13/27. The statement puts the family in the set { families with 2 children, at least one of whom is a boy born on Tuesday } and then 13/27 is the probability that a family picked at random from this set has 2 boys. There is no ambiguity about it, and if you're unconvinced you can run a simulation as others have done below. – Chris Card Sep 12 '10 at 8:27
show 19 more comments
Something still bothers me.
If someone came to me and said:
1) "I have two children. At least one is a boy born on a Tuesday. What is the probability I have two boys?"
The answer is: 13/27.
2) "I have two children. At least one is a boy. What is the probability I have two boys?"
The answer is: 1/3.
And then I see 8 fathers. The first one says "I have ... born on Sunday ...", the second father says "on Monday", etc.; then for each one the probability of having 2 boys is 13/27. The eighth father says:
3) "I have two children. At least one is a boy born on some week day (but I won't tell you what day). What is the probability I have two boys?"
The answer is: ???
Probably 1/3, because we already know that the boy was born on some day. But what is the difference between him and the other 7 fathers?
Similarly, we can ask what happens if the father wrote the day of birth of that boy on a card and doesn't show it to us. How is that different from a father who shows us the card?
-
– Tangent Bundle May 5 '11 at 7:15
2
– ShreevatsaR May 5 '11 at 7:32
The same question can be asked about Khovanova's version. She says: Now let me give you a variation of the Tuesday-Child problem that is unambiguous and where the answer is 13/27: You pick a random father of two children and ask him, “Yes or no, do you have a son born on a Tuesday?”. If the answer is yes, what is the probability that the father has two sons? The 13/27 argument works perfectly in this case. Why is it unambiguous there? There I can also ask "Do you have a son born on some weekday?" and he answers "Yes". The same problem exists. – Tangent Bundle May 5 '11 at 7:58
If you ask "do you have a son born on some weekday?" (without naming a weekday) and the answer is "yes", the probability of two sons is unambiguously 1/3. The difference is in the questions. :-) It's clear that when a question asks for less information (and thus gets less), the posterior probability can be different. The trouble with the original statement of the question is that it confuses "X is true" with "A father says X is true" — the information in the former is precisely that X is true, whereas the information in the latter depends on what else the father could have said. – ShreevatsaR May 5 '11 at 9:17
1
This is like in the Monty Hall problem, the difference between "a door was opened that contains a goat" (makes no difference to switch) and "the host necessarily opened a door that he knew contains a goat" (it's better to switch). The space of host actions matters (or the space of father-statements), not only the bare fact that a door-with-a-goat was opened (or the bare statement that the father made). – ShreevatsaR May 5 '11 at 9:20
It is actually impossible to have a unique and unambiguous answer to the puzzle without explicitly articulating a probability model for how the information on gender and birthday is generated. The reason is that (1) for the problem to have a unique answer some random process is required, and (2) the answer is a function of which random model is used.
1. The problem assumes that a unique probability can be deduced as the answer. This requires that the set of children described is chosen by a random process, otherwise the number of boys is a deterministic quantity and the probability would be 0 or 1 but with no ability to determine which is the case. More generally one can consider random processes that produce the complete set of information referenced in the problem: choose a parent, then choose what to reveal about the number, gender, and birth days of its children.
2. The answer depends on which random process is used. If the Tuesday birth is disclosed only when there are two boys, the probability of two boys is 1. If the Tuesday birth is disclosed only when there is a sister, the probability of two boys is 0. The answer could be any number between 0 and 1 depending on what process is assumed to produce the data.
There is also a linguistic question of how to interpret "one is a boy born on Tuesday". It could mean that exactly one child, or that at least one child, is a Tuesday-born boy.
-
2
+1. We have a reasonably standard jargon for specifying simple random processes in English. People come up with "puzzles" like this by exploiting the impreciseness of English for describing more complicated situations. If the person posing the question had directly stated the random process, nobody would be confused. The reason this is a "puzzle" is that it's worded too vaguely to convey the intended meaning. So @muad: any time someone asks you a probability "puzzle", you should simply press them about exactly what random process they have in mind. Then it will be easy. – Carl Mummert Sep 12 '10 at 12:07
The Tuesday is a red herring. It's stated as a fact, thus the probability is 1. Also, it doesn't say "only one boy is born on a Tuesday". But indeed, this could be a language thing.
With 2 children you have the following possible combinations:
1. two girls
2. a boy and a girl
3. a girl and a boy
4. two boys
If at least 1 is a boy we only have to consider the last three combinations. That gives us one in three that both are boys.
The error which is often made is to consider 2. and 3. as a single combination.
edit
I find it completely counter-intuitive that the outcome is influenced by the day, and I simulated the problem for one million families with 2 kids. And lo and behold, the outcome is 12.99 in 27. I was wrong.
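A minimal sketch of such a simulation (assuming the "at least one boy born on a Tuesday" reading; the function name and family count are our own choices):
```
import random

def simulate(families=1_000_000, seed=0):
    rng = random.Random(seed)
    conditioned = both_boys = 0
    for _ in range(families):
        kids = [(rng.choice("BG"), rng.randrange(7)) for _ in range(2)]  # (sex, weekday)
        # keep only families with at least one boy born on day 2 ("Tuesday")
        if any(sex == "B" and day == 2 for sex, day in kids):
            conditioned += 1
            both_boys += all(sex == "B" for sex, _ in kids)
    return both_boys / conditioned

print(simulate())   # roughly 13/27 = 0.481...
```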
-
Note that this too may seem counter-intuitive: It means that if someone tells you "I have two children. One of them is a boy" the probability that the other one is a girl is 2/3, but you might expect 1/2 since the sex of the children is independent. – Gadi A Sep 11 '10 at 7:00
What you are saying here contradicts my derivation of 13/27 (which I cannot see any flaw in) - can you tell me if you think there is a mistake in my derivation? – anon Sep 11 '10 at 7:03
@muad: there doesn't seem a mistake in your derivation; the error was in my intuition that the day couldn't possibly have anything to do with it. – stevenvh Sep 12 '10 at 7:04
I also had to write a computer simulation before I could believe the answer! – anon Sep 12 '10 at 7:49
Well, given the unstated assumption that the writer is a mathematician and therefore not using regular English, I agree with the 13/27 answer.
But in everyday English, from "there are two fleems, one is a glarp" we all infer that the other is not a glarp.
From "there are two fleems, one is a glarp, which is snibble" we would still infer that the other is not a glarp. Whereas from "there are two fleems, one is a glarp which is snibble" (absence of comma, or when spoken, difference in intonation) we would infer that the other is not a snibble glarp, but it could still be an unsnibble glarp.
-
I see what you mean: whether one interprets "One is ..." to mean "Exactly one (and no more) is ..." or "At least one ...". This would change the answer from 13/27 to 12/26. I don't think this ambiguity is an intentional part of the problem though - just an unfortunate problem of the language. – anon Sep 12 '10 at 7:57
1
I think "But in everyday english, from "there are two fleems, one is a glarp" we all infer that the other is not a glarp." is debatable. – Chris Card Sep 12 '10 at 8:31
This, in my opinion, is why the intuitive approach fails:
One has a tendency to think that 7·P(b AND d1) = P(b AND d1) + P(b AND d2) + ... + P(b AND d7) = P((b AND d1) OR (b AND d2) OR ... OR (b AND d7)) = P(b AND (d1 OR d2 OR ... OR d7)) = P(b).
However, the flaw here is that, in reality, P(b AND d1) + P(b AND d2) + ... + P(b AND d7) is NOT equal to P((b AND d1) OR (b AND d2) OR ... OR (b AND d7)). This means that mentioning independent (and one might think irrelevant) information alongside relevant information actually changes the resulting probabilities.
One interesting consequence: if I say something like "I have two children. One of them is a boy who was born at 10:24 PM on February 10th," the probability that I have two boys is now almost exactly the same as the probability that I have a girl and a boy. Adding a unique or almost unique piece of information makes the stuff I want to know about the other child independent of the information I have on the first child. If I took this to the extreme and said that I have a firstborn boy, you would know nothing additional about the other child.
-
What is an example of the "... is NOT equal ..."? (A well-specified probability model where the two sides of the equation are not the same.) – T.. Sep 13 '10 at 1:26
This is also discussed at http://johncarlosbaez.wordpress.com/2010/08/24/probability-puzzles-from-egan/#comment-1313
-
I guess the following two versions of the experiment provide two different answers:
1. Dave has two children. Is at least one of them a boy who was born on a Tuesday? Dave answers Yes.
2. Dave has two children. I ask him to first choose and fix one child at random, and tell me if it is a boy who was born on a Tuesday. Dave answers yes, he is a boy born on a Tuesday.
For the first, the answer is 13/27, while the second has answer 1/2.
The way the question is asked is in line with the first, hence the answer should be 13/27.
-
+1 for correct answer. More specifically, your first example is equivalent to the case where Dave knows the sex/birthdate of both children, while the second example is equivalent to the case where Dave doesn't know the sex/birthdate of both children; so what we have here is yet another question about knowledge-of-knowledge. See this answer for more information. – BlueRaja - Danny Pflughoeft Dec 21 '10 at 20:11
Also, since there is - presumably - no ambiguity over whether or not Dave knows the sex of his children, there is indeed no ambiguity in the question; the answer is correctly 13/27 (the accepted answer, by Gadi, is incorrect). – BlueRaja - Danny Pflughoeft Dec 21 '10 at 20:13
There is always room for misinterpreting a question when one does not fully understand the language in which it is written. I think that the way mathematics and mathematicians use conditional probability is clear:
$$P(A|B)=P(A \cap B)/P(B).$$
So I believe that this is the interpretation that one should take, and thus arrive at your answer of 13/27, and not search for further nuances, which are not too difficult to find.
-
3
Conditional probability is not the issue. The problem is in defining what "P" is implied in the problem. Once a probability measure is specified ,of course it is then possible to use the formula for conditional probability. – T.. Sep 11 '10 at 19:08
So A = [Dave has 2 boys] and B = [Dave says "I have 2 children. One is a boy born on a Tuesday."]. The problem is that we don't know P(B). What space does Dave's statement come from? The only way to get the 13/27 answer is to make the unjustified unreasonable assumption that Dave is boy-centric & Tuesday-centric: if he has two sons born on Tue and Sun he will mention Tue; if he has a son & daughter both born on Tue he will mention the son, etc. See [this article](arxiv.org/abs/1102.0173). The information in [Dave says X] is not the same as [X]; it depends on what all Dave could have said. – ShreevatsaR May 5 '11 at 11:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9668358564376831, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/106386/can-the-bockstein-spectral-sequence-be-used-to-compute-cohomology-rings
|
## Can the Bockstein spectral sequence be used to compute cohomology rings ?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
If $G$ is a finite group then there is the so-called Bockstein spectral sequence $$E_2^n = H^n(G,\mathbb{F}_p) \Rightarrow \begin{cases} \mathbb{F}_p & n =0 \newline 0 & n>0\end{cases}$$ that can be used to compute the integral cohomology out of the mod-$p$ cohomology. In more detail, each non-zero element of $d_r(E_r^{n-1})\subseteq E_r^n$ corresponds to a direct $\mathbb{Z}/p^r$-summand of $H^n(G,\mathbb{Z})$ (Corollary 5.9.12 in Weibel's Homological Algebra book).
Now my question is whether it is possible to compute the integral cohomology ring of $G$ if the mod-$p$ cohomology rings for the primes dividing $|G|$ and the differentials in the associated Bockstein spectral sequences are known ?
-
Doesn't the Bockstein end up twisting the cup product, in the sense that $\beta(xy)=\beta(x)y\pm x\beta(y)$? Or something like that. I would guess this makes it difficult to get the integral ring structure. – Chris Gerig Sep 5 at 1:10
Are you asking whether the cohomology ring can be recovered from just the Bockstein data, or whether two finite groups with (multiplicatively) isomorphic Bockstein spectral sequences must have isomorphic integral cohomology rings? – Tyler Lawson Sep 5 at 2:12
1
The BSS is a spectral sequence of algebras (since the Bockstein is a derivation, as mentioned by Chris). So you may be able to glean some information about integral cup products. That said, it may be a better tactic to use the ring homomorphism $H^\ast(G;\mathbb{Z})\to H^\ast(G;\mathbb{F}_p)$ given by reducing coefficients mod $p$. The BSS is, after all, just the SS obtained by wrapping up the long exact coefficent sequence into an exact couple. – Mark Grant Sep 5 at 6:30
@Tyler: My question is about the former. But if it is possible to recover the integral cohomology ring uniquely from the Bockstein data, wouldn't this also answer the 2nd question affirmatively ? – TJ Sep 5 at 6:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9017308354377747, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/58790/a-remark-in-swinnerton-dyers-paper-in-cassels-frohlich
|
## A remark in Swinnerton-Dyer’s paper in Cassels-Frohlich
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
In Swinnerton-Dyer's charming paper "An application of computing to classfield theory", in Cassels-Frohlich, he discusses the genesis of the Birch/Swinnerton-Dyer conjecture and numerical tests of it for the curves $y^2=x^3-dx$. At the end of the paper, he speculates on higher-dimensional analogues of the conjecture, linking Chow groups to L-functions, which in hindsight were essentially correct (Beilinson-Bloch made them precise). Regarding these conjectures, he says that Bombieri and himself had found some evidence for them in the special case of cubic threefolds and the intersection of two quadric hypersurfaces. I know that in the case of cubic threefolds $X$, codimension-two cycles can be related to zero-cycles in the Albanese $A_X$ of the Fano surface of lines in $X$, which (I assume) reduces the conjecture in this case to BSD for $A_X$. But what can be done for the intersection of two quadrics? Does anyone know what he was talking about here? Is this in print in more detail somewhere?
-
## 1 Answer
Maybe you already know this! But Miles Reid and Ron Donagi have shown that the intermediate Jacobian of the intersection of two quadrics is the Jacobian of a hyperelliptic curve. See Donagi's old paper, where there is a beautiful generalization of the group law on an elliptic curve. Recent results on such questions all use Nori's general results or Bloch-Srinivas's paper on the diagonal; see for example Nagel's paper or Voisin's paper.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309141039848328, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/94905-parabola-problem.html
|
# Thread:
1. ## Parabola problem
I'm trying to figure out what the heck I'm supposed to do with this:
x^2=8y.
I'm supposed to solve for vertex, focus, and directrix... how do I put this into the y=ax^2+bx+c form??
2. Originally Posted by Kevin Green
I'm trying to figure out what the heck I'm supposed to do with this:
x^2=8y.
I'm supposed to solve for vertex, focus, and directrix... how do I put this into the y=ax^2+bx+c form?? ... divide both sides by 8 ; a = 1/8 , b = 0, c = 0
.
3. Thank you.... where do I go from here? The math book I have does a horrible job describing this whole matter.
4. Originally Posted by Kevin Green
I'm trying to figure out what the heck I'm supposed to do with this:
x^2=8y.
I'm supposed to solve for vertex, focus, and directrix... how do I put this into the y=ax^2+bx+c form??
Parabolas can also be expressed in this form:
$(x - h)^2 = 4p(y - k)$ or $(y - k)^2 = 4p(x - h)$.
The first is a parabola that opens upward or downward, and the second is a parabola that opens left or right. Sometimes this form is preferred because you can see what the vertex is.
$x^2=8y$
h = k = 0, so the vertex is (0, 0). The x is squared, and the y coefficient is positive, so it opens upward. Solve 4p = 8 to get the focal length, which is 2.
Since the parabola opens upward, the focus would be defined by (h, k + p), or (0, 2). The directrix is an equation of a line, in our case, defined by y = k - p, or y = -2.
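A minimal Python sketch of this recipe, assuming the parabola is given in the form x^2 = (4p)y with its vertex at the origin (the function name is our own):
```
def parabola_features(coeff):       # parabola x**2 = coeff * y, vertex at the origin
    p = coeff / 4                   # 4p = coeff, so p is the focal length
    vertex = (0, 0)
    focus = (0, p)                  # opens upward: focus at (h, k + p)
    directrix = f"y = {-p}"         # directrix: y = k - p
    return vertex, focus, directrix

print(parabola_features(8))         # ((0, 0), (0, 2.0), 'y = -2.0')
```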
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470457434654236, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/125039-matrix-question.html
|
# Thread:
1. ## Matrix question
I am not sure exactly what they are asking me to do. I found a set of numbers that works for this system, but it's not accepting my answer, which makes me think they are asking for variables? Not really sure.
Solve this system:
X1 = | | + | |S
X2 = | | + | |S
X3 = | | + | |S
X4 = | | + | |S
2. Hi
The determinant of the matrix is 0
You can find all variables as functions of only one parameter
3. Sorry, but I don't understand what you mean by that statement, can you clarify?
4. The fact that the determinant, $\left|\begin{array}{cccc}1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1\end{array}\right|$ is 0 means the equations are not independent- either there is no solution or there are an infinite number of solutions. If there are an infinite number of solutions, you cannot solve for all 4 variables but rather can solve for three of them in terms of the fourth. For example, from the first equation, $x_2= 3- x_1$ and, from the fourth, $x_4= 12- x_1$. Now, put those into the second or third equation to find $x_3$ the same way. Try both equations to make sure you get the same result so that there is a solution.
Those are the numbers that go into the squares. Letting the parameter, S, be $x_1$, the first row is $x_1= 0+ 1 S$ and the second row would be $x_2= 3- 1 S$. Now, be aware that you could solve for any three of the variables in terms of the fourth and use that as the parameter, S, so there is NOT one single answer. You appear to be using a computer system to input your answer and I have no idea how it will handle the different possible answers. That's one reason I dislike those things.
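Here is a minimal sympy sketch of that procedure. The coefficient matrix is the one from the determinant above; the right-hand sides of the second and third equations were not shown in the thread, so the values 5 and 14 below are purely hypothetical, chosen only to keep the system consistent:
```
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')

A = Matrix([[1, 1, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 1, 1],
            [1, 0, 0, 1]])
b = Matrix([3, 5, 14, 12])          # 3 and 12 come from the thread; 5 and 14 are hypothetical

print(A.det())                      # 0: the equations are not independent
print(linsolve((A, b), x1, x2, x3, x4))
# one free parameter, e.g. {(12 - x4, x4 - 9, 14 - x4, x4)}
```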
5. Ah, I understand now, thanks a lot
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9530119299888611, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/106124/list
|
## Return to Answer
2 added 2 characters in body
In $\mathbb R^n$, the answer is $n+2$.
You can apply an inversion which sends two of the spheres to two parallel hyperplanes. The rest of the spheres will have the same radii and their centers lie in a hyperplane. Hence everything follows.
1
In $\mathbb R^n$, the answer is $n+2$.
You can apply an inverse which sends two of the spheres in to two parallel hyperplanes. The rest of the spheres will have the same radii and their centers lie in a hyperplane. Hence everything follows.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9170988202095032, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2012/07/19/the-higgs-mechanism-part-4-symmetry-breaking/?like=1&source=post_flair&_wpnonce=78d053c4d8
|
# The Unapologetic Mathematician
## The Higgs Mechanism part 4: Symmetry Breaking
This is part four of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1, Part 2, and Part 3 first.
At last we’re ready to explain the Higgs mechanism. We start where we left off last time: a complex scalar field $\phi$ with a gauged phase symmetry that brings in a (massless) gauge field $A_\mu$. The difference is that now we add a new self-interaction term to the Lagrangian:
$\displaystyle L=-\frac{1}{4}F_{\mu\nu}F_{\mu\nu}+(D_\mu\phi)^*D_\mu\phi-\left[-m^2\phi^*\phi+\lambda(\phi^*\phi)^2\right]$
where $\lambda$ is a constant that determines the strength of the self-interaction. We recall the gauged symmetry transformations:
$\displaystyle\begin{aligned}\phi'(x)&=e^{i\alpha(x)}\phi(x)\\A_\mu'(x)&=A_\mu(x)+\frac{1}{e}\partial_\mu\alpha(x)\end{aligned}$
If we write down an expression for the energy of a field configuration we get a bunch of derivative terms — basically like kinetic energy — that all occur with positive signs and then the potential energy term that comes in the brackets above:
$\displaystyle V(\phi^*\phi)=-m^2\phi^*\phi+\lambda(\phi^*\phi)^2$
Now, the “ground state” of the system should be one that minimizes the total energy, but the usual choice of setting all the fields equal to zero doesn’t do that here. The potential has a “bump” in the center, like the punt in the bottom of a wine bottle, or like a sombrero.
So instead of using that as our ground state, we’ll choose one. It doesn’t matter which, but it will be convenient to pick:
$\displaystyle\begin{aligned}A_\mu^{(v)}&=0\\\phi^{(v)}&=\frac{1}{\sqrt{2}}\phi_0\end{aligned}$
where $\phi_0=\frac{m}{\sqrt{\lambda}}$ is chosen to minimize the potential. We can still use the same field $A_\mu$ as before, but now we will write
$\displaystyle\phi(x)=\frac{1}{\sqrt{2}}\left(\phi_0+\chi(x)+i\theta(x)\right)$
Since the ground state $\phi_0$ is a point along the real axis in the complex plane, vibrations in the field $\chi$ measure movement that changes the length of $\phi$, while vibrations in $\theta$ measure movement that changes the phase.
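As a quick check that $\phi_0=\frac{m}{\sqrt{\lambda}}$ really minimizes the potential, write $u=\phi^*\phi$, so that $V(u)=-m^2u+\lambda u^2$ and
$\displaystyle\frac{dV}{du}=-m^2+2\lambda u=0\quad\Longrightarrow\quad u=\phi^*\phi=\frac{m^2}{2\lambda}$
Since the ground state has $\phi^*\phi=\frac{1}{2}\phi_0^2$, this gives $\phi_0^2=\frac{m^2}{\lambda}$, i.e. $\phi_0=\frac{m}{\sqrt{\lambda}}$ as above.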
We want to consider the case where these vibrations are small — the field $\phi$ basically sticks near its ground state — because when they get big enough we have enough energy flying around in the system that we may as well just work in the more symmetric case anyway. So we are justified in only working out our new Lagrangian in terms up to quadratic order in the fields. This will also make our calculations a lot simpler. Indeed, to quadratic order (and ignoring an irrelevant additive constant) we have
$\displaystyle V(\phi^*\phi)=m^2\chi^2$
so vibrations of the $\theta$ field don’t show up at all in quadratic interactions.
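For completeness, here is the short computation behind that statement. Writing $\phi^*\phi=\frac{\phi_0^2}{2}+\phi_0\chi+\frac{\chi^2+\theta^2}{2}$ and expanding the potential around $\phi_0^2/2$, the terms linear in the fields come with a factor $\lambda\phi_0^2-m^2=0$, and so does the quadratic $\theta^2$ term:

$\displaystyle V=\text{const}+\left(\lambda\phi_0^2-m^2\right)\left(\phi_0\chi+\frac{\chi^2+\theta^2}{2}\right)+\lambda\phi_0^2\chi^2+(\text{cubic and higher})=\text{const}+m^2\chi^2+(\text{cubic and higher})$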
We should also write out our covariant derivative up to linear terms:
$\displaystyle D_\mu\phi=\frac{1}{\sqrt{2}}\left(\partial_\mu\chi+i\partial_\mu\theta-ie\phi_0A_\mu\right)$
so that the quadratic Lagrangian is
$\displaystyle\begin{aligned}L^{(2)}&=-\frac{1}{4}F_{\mu\nu}F_{\mu\nu}+\frac{1}{2}\lvert\partial_\mu\chi+i\partial_\mu\theta-ie\phi_0A_\mu\rvert^2-m^2\chi^2\\&=-\frac{1}{4}F_{\mu\nu}F_{\mu\nu}+\left[\frac{1}{2}\partial_\mu\chi\partial_\mu\chi-m^2\chi^2\right]+\frac{e^2\phi_0^2}{2}\left(A_\mu-\frac{1}{e\phi_0}\partial_\mu\theta\right)^2\end{aligned}$
Now, the term in parentheses on the right looks like the mass term of a vector field $B_\mu$ with mass $e\phi_0$. But what is the kinetic term of this field?
$\displaystyle\begin{aligned}B_{\mu\nu}&=\partial_\mu B_\nu-\partial_\nu B_\mu\\&=\partial_\mu\left(A_\nu-\frac{1}{e\phi_0}\partial_\nu\theta\right)-\partial_\nu\left(A_\mu-\frac{1}{e\phi_0}\partial_\mu\theta\right)\\&=\partial_\mu A_\nu-\partial_\nu A_\mu-\frac{1}{e\phi_0}\left(\partial_\mu\partial_\nu\theta-\partial_\nu\partial_\mu\theta\right)\\&=F_{\mu\nu}-0=F_{\mu\nu}\end{aligned}$
And so we can write down the final form of our quadratic Lagrangian:
$\displaystyle L^{(2)}=\left[-\frac{1}{4}B_{\mu\nu}B_{\mu\nu}+\frac{e^2\phi_0^2}{2}B_\mu B_\mu\right]+\left[\frac{1}{2}\partial_\mu\chi\partial_\mu\chi-m^2\chi^2\right]$
In order to deal with the fact that our normal vacuum was not a minimum for the energy, we picked a new ground state that did minimize energy. But the new ground state doesn’t have the same symmetry the old one did — we have broken the symmetry — and when we write down the Lagrangian in terms of excitations around the new ground state, we find it convenient to change variables. The previously massless gauge field “eats” part of the scalar field and gains a mass, leaving behind the Higgs field.
This is essentially what’s going on in the Standard Model. The biggest difference is that instead of the initial symmetry being a simple phase, which just amounts to rotations around a circle, we have a (slightly) more complicated symmetry to deal with. For those that are familiar with some classical groups, we start with an action of $SU(2)\times U(1)$ on a column vector $\phi$ made of two complex scalar fields with a potential of the form:
$\displaystyle V(\phi)=\lambda\left(\phi^\dagger\phi-\frac{v^2}{2}\right)^2$
which is invariant under the obvious action of $SU(2)$ and a phase action of $U(1)$. Since the group $SU(2)$ is three-dimensional there are three gauge fields to introduce for its symmetry and one more for the $U(1)$ symmetry.
When we pick a ground state that breaks the symmetry it doesn’t completely break; a one-dimensional subgroup $U(1)\subseteq SU(2)\times U(1)$ still leaves the new ground state invariant — though it’s important to notice that this is not just the $U(1)$ factor, but rather a mixture of this factor and a $U(1)$ subgroup of $SU(2)$. Thus only three of these gauge fields gain mass; they become the $W^\pm$ and $Z^0$ bosons that carry the weak force. The other gauge field remains massless, and becomes $\gamma$ — the photon.
At high enough energies — when the fields bounce around enough that the bump doesn’t really affect them — then the symmetry comes back and we see that the electromagnetic and weak interactions are really two different aspects of the same, unified phenomenon, just like electricity and magnetism are really two different aspects of electromagnetism.
## 3 Comments »
I must commend you for a warts-and-all presentation. Here you can see it for what it is, a jimcrack device to force the physics community to swallow the universal term mass, a move which is not licensed under reigning Medieval theology! Decades of ranting against Platonist infiltration by Wittgenstein down the drain…if they can find the missing isospin that is.
Comment by Orwin O'Dowd | October 31, 2012 | Reply
http://cstheory.stackexchange.com/questions/3330/dominations-under-oracles-which-is-closed-under-complement
# Dominations under oracles which is closed under complement?
## Edited at 2010/11/29:
As John Watrous has mentioned, the class $\mathsf{C^O}$ may not be well-defined. After reading some early posts, I try to restate my question in an unambiguous way.
Let $\mathsf{O}$ be a complexity class that is closed under complement, i.e. $\mathsf{O} = \mathsf{coO}$. Also we assume that the logspace, $\mathsf{L}$, is a subset of $\mathsf{O}$.
When does the equality $\mathsf{L^O} = \mathsf{O}$ hold?
We define $\mathsf{L^O}$ as the languages accepted by logspace oracle machines with an $\mathsf{O}$ oracle, where queries are written on a separate oracle tape not restricted to the logspace bound, and after each query the tape is automatically erased.
We know that $\mathsf{NL} = \mathsf{coNL}$ by the Immerman-Szelepcsényi Theorem, and we have $\mathsf{L^{NL}} = \mathsf{NL}$. Before the era of Reingold, when nobody knew whether $\mathsf{SL} = \mathsf{L}$, Nisan and Ta-Shma proved that $\mathsf{SL}$ is closed under complement. They also showed that $\mathsf{L^{SL}} = \mathsf{SL}$ in the same paper.
In the paper "Directed Planar Reachability Is in Unambiguous Log-Space" by Bourke, Tewari and Vinodchandran, they gave a claim in corollary 4.3 that $\mathsf{L^{UL \cap coUL}} = \mathsf{UL \cap coUL}$. Clearly $\mathsf{UL \cap coUL}$ is closed under complement, but is this equality holds so trivially?
Do we have any easy conditions to decide whether $\mathsf{L^O}$ and $\mathsf{O}$ are in fact the same? By easy conditions I mean that we only have to check some properties of $\mathsf{O}$ to decide whether they are equal, without using the definitions of the classes to prove the inclusion $\mathsf{L^O} \subseteq \mathsf{O}$.
Another related question would be:
Do we have any oracle $\mathsf{O}$ such that $\mathsf{L^O} \neq \mathsf{O}$?
-
Do you know anything about the extremal case where $C = O$? – András Salamon Nov 26 '10 at 16:04
We do have $\mathsf{P^P} = \mathsf{P}$ and $\mathsf{EXP^{EXP}} \neq \mathsf{EXP}$; it seems that one needs to be able to simulate an $\mathsf{O}$-oracle call in $\mathsf{O}$ at least. But I do not know any conditions that can guarantee the equality holds i.e. $\mathsf{O^O} = \mathsf{O}$ when $\mathsf{C} = \mathsf{O}$. – Hsien-Chih Chang 張顯之 Nov 26 '10 at 16:46
For the second question, sure, e.g. take $O$ to be the empty set. – Kaveh♦ Nov 28 '10 at 17:16
If $O$ is not closed under log-space Turing reductions then this still holds and there are such $O$, so you may want to state that $O$ is closed under composition at least, in which case I think the question becomes "is log-space Turing reductions the same as log-space many-one reductions?" which is probably open. – Kaveh♦ Nov 28 '10 at 17:26
It is not open for all of them, it is well-known that the two reductions are not equivalent for some large classes. – Kaveh♦ Nov 29 '10 at 6:18
## 2 Answers
Edit: In revision 1, I wrote an embarrassingly complicated answer. The answer below is much simpler and stronger than the older answer.
Edit: Even the “simplified” answer in revision 2 was more complicated than necessary.
Let $\mathsf{O}$ be a complexity class that is closed under complement, i.e. $\mathsf{O} = \mathsf{coO}$. Also we assume that logspace, $\mathsf{L}$, is a subset of $\mathsf{O}$.
[…]
Do we have any oracle $\mathsf{O}$ such that $\mathsf{L^O} \neq \mathsf{O}$?
Yes. Let $\mathsf{O} = \mathsf{RE} \cup \mathsf{coRE}$, where $\mathsf{RE}$ is the class of recursively enumerable languages. Then $\mathsf{O}$ is closed under complement and $\mathsf{O}$ contains $\mathsf{L}$. However, note that $\mathsf{L^O} = \mathsf{L^{RE}}$, and in particular $\mathsf{L^O}$ has a complete language under polynomial-time many-one reducibility. On the other hand, $\mathsf{O} = \mathsf{RE} \cup \mathsf{coRE}$ does not have a complete language under polynomial-time reducibility because $\mathsf{RE} \neq \mathsf{coRE}$. Therefore, $\mathsf{L^O} \neq \mathsf{O}$.
-
Is $\mathsf{L}$ a syntactic class? I mean, does $\mathsf{L^O}$ always have a complete language? If so, haven't your revisions 2 and 3 proved that for any complexity class $\mathsf{K}$ which is not closed under complement and contains $\mathsf{L}$, we have $\mathsf{L^{K \cup coK}} \neq \mathsf{K \cup coK}$? – Hsien-Chih Chang 張顯之 Nov 29 '10 at 6:59
@Hsien-Chih: If K has a complete language X (for example, if K=RE, we can let X=HALT), then L^{K∪coK}=L^K has the following canonical complete language: Given a Turing machine M, input x and a string 1^k, does M^X accept x while using at most lg k work space? Note that we need a K-complete language to do this. (Here completeness means completeness under many-one reducibility.) – Tsuyoshi Ito Nov 29 '10 at 11:41
@Hsien-Chih: I am afraid that you are confusing a class with a language. In your question, O is a class. On the other hand, “every oracle X” in Theorem 8.2 in your link is a language. For example, P^AH (where AH is the arithmetic hierarchy) does not have a complete language (because P^AH=AH), but this does not contradict Theorem 8.2. – Tsuyoshi Ito Nov 29 '10 at 13:33
That is where my problems are!! Thank you! :) – Hsien-Chih Chang 張顯之 Nov 29 '10 at 14:57
For a given choice of complexity classes $\mathsf{C}$ and $\mathsf{O}$, by which we mean sets of languages and nothing else, the class $\mathsf{C}^{\mathsf{O}}$ is not well-defined: we need the definition of $\mathsf{C}$ to determine how oracle queries are defined. Sometimes even then there could be ambiguities that need to be resolved, or the definition might not provide a reasonable notion of an oracle query at all. (Attaching oracles to space-bounded complexity classes is one example in the first category, where the relativized classes we obtain are very sensitive to the exact capabilities of the oracle tape and whether we consider that it contributes to our space bounds.)
So, I don't think you can possibly say anything about $\mathsf{C}^{\mathsf{O}}$ without using the definitions of the classes, because if you don't use the definition of $\mathsf{C}$ you aren't working with a well-defined mathematical object.
If I have misunderstood your question, please clarify what sort of properties of $\mathsf{C}$ and $\mathsf{O}$ you permit to be used.
-
You are correct; I was not aware that $\mathsf{C^O}$ may not be well-defined. I'll try to modify my question, please correct me if it is still problematic. – Hsien-Chih Chang 張顯之 Nov 28 '10 at 16:18
http://math.stackexchange.com/questions/223314/evaluating-a-surface-integral-with-differential-forms
# Evaluating a surface integral with differential forms
Let $\alpha=x dy-\frac{1}{2}(x^2+y^2)dz$ be a differential form in $\mathbb{R}^3=\{(x,y,z)\;|\;x,y,z\in\mathbb{R}\}$and let $Z=\{(\cos\theta,\sin\theta,s)\;|\;0\leq\theta \leq2\pi, 0\leq s\leq 1\}$ be a cylinder. I'd like to compute $\int_Z d\alpha$.
First, I need to find $d\alpha$. This is fairly straightforward: $$d\alpha=dx\wedge dy-\frac{1}{2}(2x dx+2ydy)\wedge dz = dx\wedge dy - x dx\wedge dz - y dy\wedge dz$$ Next, I need to rewrite the differential using the parameterization of $Z$. I think this means that I need to rewrite each $dx, dy,$ and $dz$ as $$dx = \frac{dx}{d\theta}d\theta + \frac{dx}{ds} ds$$ (replacing $x$ with $y$ and $z$ to get the corresponding expressions). Is this correct?
Assuming it is, then I get the straightforwards calculations that $$dx = -\sin\theta d\theta \quad dy=\cos\theta d\theta \quad dz=ds$$ So I can rewrite $d\alpha$ in terms of $d\theta\wedge ds$: $$d\alpha = (-\sin\theta d\theta\wedge \cos\theta d\theta)-(\cos\theta(-\sin\theta) d\theta\wedge ds)-(\sin\theta\cos\theta d\theta\wedge ds) = 0$$ This looks strange. I thought that I was supposed to be left with a differential form $d\theta\wedge ds$ that I could then integrate over. But what I'm left with now is the following double integral: $$\int_0^1\int_0^{2\pi} 0 d\theta\wedge ds =0$$ This just doesn't seem right. I'm not very familiar with differential forms, so maybe I'm missing something obvious. Is this analogous to the case where the integration of an exact form over a closed curve is $0$? Clearly $d\alpha$ is exact because we're given $\alpha$. We're also integrating over a cylinder, which I guess is sort of like a closed curve, only it's a closed surface instead.
In short, I have two questions: is the procedure I outlined above a correct way to go about evaluating surface integrals of differential forms? Second, was there a short cut I could have recognized that would have made the final answer obvious (assuming that $0$ is in fact the correct answer)?
-
Using Stokes' theorem it is easier to integrate $\alpha$ over the boundary of the cylinder, which consists of two circles. Since the orientation on the two circles is opposite, you should expect things to cancel out. – Pantelis Damianou Oct 29 '12 at 6:18
## 1 Answer
As Pantelis Damianou suggested you can compute $$\int_{\partial Z} \alpha$$ The boundary of the cylinder are the two circles with opposite orientation. Therefore the result is $0$.
-
Why is the boundary two circles with opposite orientation? – chris Oct 29 '12 at 17:24
Take the normal to the surface (cylinder). When you walk on the circle (boundary) the surface should be on your left. – Pete Markou Oct 30 '12 at 10:57
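To make the suggested shortcut completely explicit (a short check; which boundary circle gets the minus sign depends on the orientation convention, but either way the two contributions cancel): on each boundary circle $z$ is constant, so $dz=0$ and only the $x\,dy$ term of $\alpha$ survives. Hence
$$\int_{\partial Z}\alpha=\pm\left(\int_{z=1}x\,dy-\int_{z=0}x\,dy\right)=\pm\left(\int_0^{2\pi}\cos^2\theta\,d\theta-\int_0^{2\pi}\cos^2\theta\,d\theta\right)=0,$$
in agreement with the direct computation $\int_Z d\alpha=0$ in the question.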
http://freakonometrics.blog.free.fr/index.php?post/2012/09/18/Open-question-on-tail-index
## Open question on tail index
By arthur charpentier on Tuesday, September 18 2012, 13:28 - Statistics - Permalink
A short post today on a rather surprising result. I can't figure out how to prove it (I do not know if the result is valid, or known, so I claim it is an open question). Consider an i.i.d. sequence $\{X_1,\cdots,X_n\}$ with common survival function $\overline{F}$. Assume further that $\overline{F}$ is regularly varying at $\infty$, written $\overline{F}\in RV_{\alpha}$, i.e.
$$\lim_{t\rightarrow\infty}\frac{\overline{F}(tx)}{\overline{F}(t)}=x^{\alpha}$$
with $\alpha\in(-\infty,0)$. Then the tail of $X$ is Pareto type, and we can use Hill's estimator to estimate $\alpha$. An interesting result is that if $\overline{F}\in RV_{\alpha}$ and $\overline{G}\in RV_{\beta}$, then $\overline{F}\star\overline{G}$ is also regularly varying, where $\star$ denotes the convolution operator. More precisely, when $\alpha=\beta$ (see Feller, section VIII.8), $\overline{F}\star\overline{G}$ is regularly varying with the same index.
This can be visualized below, on simulations of Pareto variables, where the tail index is estimated using Hill's estimator. First, we start with one sample, either of size 20 (on the left) or 100 (on the right),
```> library(evir)
> n=20
> set.seed(1)
> alpha=1.5
> X=runif(n)^(-1/alpha)
> hill(X)
> abline(h=1.5,col="blue")```
If we generate two (independent) samples and then look at the sum, Hill's estimator does not perform very well (the sum of two independent Pareto variates is no longer Pareto, but only Pareto type); we have
```> set.seed(1)
> alpha=1.5
> X=runif(n)^(-1/alpha)
> Y=runif(n)^(-1/alpha)
> hill(X+Y)
> abline(h=1.5,col="blue")```
The idea is then to use a Jackknife strategy in order to (artificially) increase the size of our sample. Thus, consider sums on all pairs of all $X_i$'s, i.e. sample
$$\{X_1+X_2,\cdots,X_1+X_{n},X_2+X_3,\cdots,X_{n-1}+X_n\}$$
Let us use Hill's estimator on this (much larger) sample.
```> XC=NA
> for(i in 1:(n-1)){
+ for(j in (i+1):n){
+ XC=c(XC,X[i]+X[j])
+ }}
> XC=XC[-1]
> hill(XC)
> abline(h=1.5,col="blue")
> abline(h=1.5*2,col="blue",lty=2)
> abline(h=1.5^2,col="blue",lty=3)```
Here, with 20 observations from the initial sample, it looks like the tail index is $2\alpha$ (on the left). With 100 observations, it looks like it is $\alpha^2$ (on the right). On the graphs above, I have plotted those two horizontal lines. It's odd, isn't it? Of course, I did not really expect $\alpha$ since we do not have an i.i.d. sample. Identically distributed yes, with a regularly varying survival function from what we've mentioned before. But clearly not independent. So Hill's estimator should not be a good estimator of $\alpha$, but it might be a good estimator of some function of $\alpha$.
If we go one step further, and consider all triplets,
$$\{X_1+X_2+X_3,\cdots,X_1+X_2+X_{n},X_2+X_3+X_4,\cdots,X_{n-2}+X_{n-1}+X_n\}$$
(observe that now, the sample size is huge, even if we start with only 20 points). Then, it looks like the tail index should be $\alpha^3$ (at least, we can observe a step of Hill's plot around $\alpha^3$).
```> XC=NA
> for(i in 1:(n-2)){
+ for(j in (i+1):(n-1)){
+ for(k in (j+1):n){
+ XC=c(XC,X[i]+X[j]+X[k])
+ }}}
> XC=XC[-1]
> hill(XC)
> abline(h=1.5,col="blue")
> abline(h=1.5*3,col="blue",lty=2)
> abline(h=1.5^3,col="blue",lty=3)```
My open question is whether there is a general result behind this, or if it is just a coincidence that those values appear so clearly.
http://mathhelpforum.com/algebra/23665-eqution-print.html
# equation
• November 28th 2007, 05:54 AM
bassmansam
equation
hi guys im trying to work out 20/3x=4 ive tried 1.666 but that does not work, could you give me the method to work it out ,also can you give me the answer so that i can understand it.
Thanks
• November 28th 2007, 07:24 AM
janvdl
Quote:
Originally Posted by bassmansam
hi guys im trying to work out 20/3x=4 ive tried 1.666 but that does not work, could you give me the method to work it out ,also can you give me the answer so that i can understand it.
Thanks
$\frac{20}{3x} = 4$
$20 = 4 \times 3x$
$20 = 12 x$
$\frac{20}{12} = x$
$x = 1,666....$
I guess you made a mistake when setting x = 1,666... into your equation.
Remember x is actually equal to 1,6666666666666666... into infinity :)
Rather try setting $\frac{5}{3}$ into it, which is equal to 1,666...
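As a quick check with the exact value: $\frac{20}{3 \cdot \frac{5}{3}} = \frac{20}{5} = 4$, as required.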
http://mathoverflow.net/questions/50530?sort=votes
Minimal polynomial with a given maximum in the unit interval
Find the lowest degree polynomial that satisfies the following constraints:
i) $F(0)=0$
ii) $F(1)=0$
iii) The maximum of $F$ on the interval $(0,1)$ occurs at point $c$
iv) $F(x)$ is positive on the interval $(0,1)$
The answer seems to depend pretty strongly on $c$. It's not difficult to find solutions for all $c$, but the solutions are not minimal. It seems like the solution involves Chebyshev polynomials, but I'm not familiar with them. Can anyone recommend a link?
-
Can you give reasons of asking the question? Your $F(x)=x(1-x)G(x)$, where $G(x)>0$ for $0<x<1$ and $F'(c)=0$. By symmetry we can consider $c\ge1/2$ only. $c=1/2$ corresponds to $G(x)=1$. For $1/2<c<2/3$ you simply take $G(x)=x-c(2-3c)/(1-2c)$, otherwise you have to try $G(x)$ a 2nd degree polynomial... I don't see any obvious appearance of Tschebyscheff. – Wadim Zudilin Dec 28 2010 at 3:45
2 Answers
I'll plunge in here.
Edit: I've now added a proof, using Brownian motion. (Earlier this was only justified heuristically) Further revised Jan 5, 2011, to correct typos/formatting and to promote material from comments.
The $n$th Chebyshev $T$-polynomials $T_n(x) = \cos(n \arccos( x))$ are the unique degree n polynomials that fold the interval $[-1,1]$ exactly over itself $n$ times, taking $1$ to $1$.
When $n$ is even, $T_n(-1) = 1$ as well, so by an affine transformation you can make it satisfy the inequalities of the question: $$f_n(x) = 1 - T_n(2 x - 1) .$$
Here for example is the plot of $f_{10}$:
This has a maximum at $c = (1+\cos(\pi/10)) / 2 = .975528...$, pretty close to 1. (If a strict maximum is desired, modify the polynomial by adding $\epsilon x$, and change by a linear transformation of the domain so that $f(1)=0$.)
For odd degrees, you can restrict the Chebyshev polynomials to an interval from the first local maximum to 1, and renormalize in a similar way, to get the unique degree $n$ polynomial $V_n(x)$ that folds the unit interval exactly $n-1$ times over itself and has a double root at 0.
One way to know the existence of such a polynomial is this: extend the map of the interval to itself to all of $\mathbb C$ in a way that it is an $n+1$-fold branched cover of $\mathbb C$ (not yet analytic) with the branching pattern corresponds to the desired critical points and critical values. Pull back the complex structure in the range via this map. Using the Riemann mapping theorem, it can be seen that the resulting complex 1-manifold is $\mathbb C$, so you get a polynomial map. By symmetry, the critical points all lie on a straight line, so it can be renormalized as a map from the interval to itself. (This is a special case of more general theories, including Shabat polynomials and dessins d'enfants, as well as the theory of postcritically finite rational maps).
For odd degree $n$, I think $V_n$ probably gives the maximal $c$.
Here's a heuristic argument: One can ask, where are the critical points in $\mathbb C$ for a polynomial that satisfies the given constraints and maximizes $c$. Given the $n-1$ critical points $\{c_i\}$, the derivative of $f$ is, up to a constant, $\prod (x-c_i)$. To make the ratio large, we need the mean value of the integrand in $[0,c]$ to be small compared to the mean value of the negative of the integrand in $[c,1]$: since the integrals add to 0, this ratio is the same as the ratio of arc length. This seems to say that the $c_i$'s want to be close to --- actually, inside --- the interval $[0,c]$. The best way to squeeze them in seems to be to make the interval fold as described.
It's easy to relax from the extreme case. For example, in even degrees, just make an affine change to a larger interval $f_n^{-1}([1-t, \infty))$, $t \ge 0$; as $t \rightarrow \infty$, $c \rightarrow .5$. For the odd degree examples, add a linear function and renormalize in a similar way.
This is reminiscent of the phenomenon of monotonicity in the theory of iterated real polynomials, but simpler to establish.
Added: a proof, using Brownian Motion
Proof that Chebyshev polynomials are optimal for even degrees
There's a way to formulate the problem as a probability question. If you start a Brownian path from a point near infinity in the plane, it almost surely eventually hits the line segment $J = [0,1]$. The position of $c$ in this line segment is determined by the ratio of the probability that the path first hits $[0,c]$ vs $[c,1]$. (To get the exact function, you can map the complement of the line segment conformally to the complement of a circle; on a circle, hitting measure is proportional to arc length).
Now suppose we have a degree $n$ polymomial $g$, as in the question, scaled so that $g(x)$ with $g([0,1]) = [0,1]$ and $g(0)=g(1) = 0$. As a complex polynomial, it defines a branched cover of $\mathbb C$ over $\mathbb C$. In 2 dimensions, conformal maps preserve the trajectories of Brownian motion: only the time parameter changes. Therefore, Brownian motion on the branched cover of the plane looks exactly like Brownian motion on the plane, but with the extra information of which of the $n$ sheets the trajectory is on at any given time. As the trajectory goes around the various critical values, the sheets are permuted. At any given time, if we just know the position of a Brownian path, the distribution on the sheets is uniform.
Let's denote by $J$ the unit interval $[0,1]$ in the domain (upstairs in the branched cover) and $K$ the same interval $[0,1]$ in the range (downstairs). Thus, $J$ is a union of line segments on sheets above $K$, and furthermore, the two subintervals of $J$, $[0,c]$ and $[c,1]$, both map surjectively to $[0,1]$.
Therefore, the first time the Brownian path downstairs crosses $[0,1]$, it has at least a $1/n$ probability of crossing the $[c,1]$ segment in the branched cover. For the even degree Chebyshev polynomial, as soon as it hits the segment $K$ downstairs it also hits $J$ upstairs, so the probability is exactly $1/n$: therefore, this is optimal. It is the unique optimal example for even degree, since if $g^{-1}(K) \ne J$, there would be a nonzero second chance for paths that hit $g^{-1}(K) \setminus J$ to continue on and still hit $[c,1]$.
The figures below illustrate this. The top figure shows the 6th Chebyshev polynomial, renormalized as above to take $[0,1] \rightarrow [0,1]$. The red interval is $J$, the caterpillar's skin is the inverse image of the circumscribed circle about $K$; also depicted is the inverse image of $\mathbb R$. Brownian motion starting at infinity has an equal probability of arriving in any of the 12 sectors (each of which opens out under the map to a halfplane), so the probability of arriving in the leftmost segment is exactly 1/6, with length $(1-\cos(\pi/6))/2 = .0669873\dots$.
The next figure (below) shows a comparison polynomial (also graphed, futher down) with an order 5 zero at 0 and one other critical point at $c$, mapping with critical value $1$. The same data is shown. When a Brownian path starting from infinity first hits $K$ (downstairs), it has equal probability of hitting any of the 6 segments inside the various curves: one of the two red segments (in $J$), or the vein in one of the four leaves. In the 4/6 probability event that it does not hit $J$, when the Brownian path continues on it has some chance of hitting the top interval, so this probability is strictly greater than $1/6$.
Proof that Chebyshev polynomials are optimal for odd degree
For the odd degree case, a little more is needed. Since the requirement of the question is that $g(0)=g(1)=0$, there are an even number of sheets above any point of $K$, so at least one sheet is absent in $J \setminus g^{-1}(K)$. Let's suppose first that $g^{-1}(K)$ is connected. In that case, we can use the Riemann mapping theorem to map its complement conformally to the exterior of a unit disk; there is a set of measure at least $2\pi/n$ that does not map to $J$. We can follow Brownian motion by letting it "reflect" whenever it hits this portion of the boundary of the unit disk, and continue on until it hits a part that corresponds to $J$. With this formulation, it's obvious that to minimize the probability that the continuing trajectory hits the sensitive area corresponding to $[c,1]$, we need to minimize its length and put it as far away from the sensitive area as possible. That's exactly what happens for $V_n$: on the circle the extra sheet is antipodal to the sensitive sheet.
A similar argument applies to the disconnected case, although without quite as simple a visual representation. It's easy to establish that any optimal polynomial must have the maximal number of sheets above each point in $K$. The hitting probability for the sensitive area for random walks starting at points $z$ is a harmonic function on $\mathbb C \setminus J$, with limit 1 along $[c,1]$ and 0 along $[0,c]$. This harmonic function has no critical points, so if there is a component of $g^{-1}(K)$ not attached to $J$, its mean on this component can be reduced by moving it toward 0, by moving the critical values not on $K$ toward $0$. Below is a picture for the optimal solution for degree 5. There's a short tail to the caterpillar where the Brownian path gets a second chance, but it's far away from the sensitive portion so it has only a small chance to next hit there rather than in $[0,c]$. The interval $[c,1]$ is comparatively short because it is exposed out at the end of the interval, but not as exposed so not as short as in the Chebyshev case.
The Constants
When $n$ is even, there is a solution for $c$ between $(1-\cos(\pi/n))/2$ and $(1+\cos(\pi/n))/2$. If $n$ is odd, there is a solution for $c$ between $(1-\cos(\pi/n))/(1+\cos(\pi/n))$ and $2 \cos(\pi/n)/(1+\cos(\pi/n))$. Numerically, the low values for $c$ are {2, 0.5}, {3, 0.333333}, {4, 0.146447}, {5, 0.105573}, {6, 0.0669873}, {7, 0.0520951}, {8, 0.0380602}, {9, 0.0310912}, {10, 0.0244717}
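For anyone who wants to reproduce these numbers, here is a quick check in R (a sketch; it just evaluates the two low-end formulas quoted above for $n = 2,\dots,10$):

```
n <- 2:10
c_low <- ifelse(n %% 2 == 0,
                (1 - cos(pi/n))/2,                 # even degree
                (1 - cos(pi/n))/(1 + cos(pi/n)))   # odd degree
round(c_low, 6)
# 0.500000 0.333333 0.146447 0.105573 0.066987 0.052095 0.038060 0.031091 0.024472
```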
*End of added proof *
Polynomials with a unique local maximum
For comparison, here are plots of the degree $n$ polynomial functions (unique up to a constant) that have an $(n-1)$-fold root at 0, a critical point at $c$, and take value $0$ at 1. At first I guessed that these might give the optimal $c$, but for them, $c = 1-1/n$, much smaller than for the Chebyshev polynomials. The plots are for $n = 2, 3, \dots, 10$.
However, these polynomials answer a different question: given $c$, what is the minimum degree of a polynomial that is 0 at 0 and 1 and has a unique local maximum at $c$? These polynomials, also discussed by Wadim Zudilin, have that property. For such a polynomial, the same technique as above can be used. For any candidate polynomial $f$, a Brownian path starting at infinity has a probability of $1/n$ to hit the interval $[0,c]$, a probability of $1/n$ to hit $[c,1]$, and a probability of $(n-2)/n$ to first hit elsewhere on $f^{-1}([0,1])$. The same proof shows these examples are optimal.
-
+1. You give sufficient details for doing the case of $n$ even, and the oddies should need a slight modification of the argument. Without seeing a reason for the OP, there is no motivation to elaborate on the technicalities. This could be done by the author. – Wadim Zudilin Dec 29 2010 at 3:19
That's wonderful! Is there a guess for $c$ in the odd case? – Wadim Zudilin Dec 29 2010 at 23:06
@Wadim Zudilin: Yes, thanks for asking, I should have put in the values for c. When $n$ is even, there is a solution for $c$ between $(1 - \cos(\pi/n))/2$ and $(1 + \cos(\pi/n))/2$. If $n$ is odd, there is a solution for $c$ between $(1-\cos(\pi/n))/(1+\cos(\pi/n))$ and $2\cos(\pi/n)/(1+\cos(\pi/n))$. Numerically, the low values for $c$ are {2, 0.5}, {3, 0.333333}, {4, 0.146447}, {5, 0.105573}, {6, 0.0669873}, {7, 0.0520951}, {8, 0.0380602}, {9, 0.0310912}, {10, 0.0244717} – Bill Thurston Dec 29 2010 at 23:57
One can also ask, given $c$, what is the minimum degree of a polynomial that is 0 at $\{0,1\}$ and has a unique local maximum at $c$. The polynomials that appear in the last figure of my answer and were also discussed by Wadim Zudilin have that property. For such a polynomial, the same technique as above can be used. For any candidate polynomial $f$, a Brownian path starting at infinity has a probability of $1/n$ to hit the interval $[0,c]$, a probability of $1/n$ to hit $[c,1]$, and a probability of $(n-2)/n$ to first hit elsewhere on $f^{-1}([0,1])$. The same proof shows these examples are optimal. – Bill Thurston Jan 1 2011 at 16:51
By replacing $F(x)$ by $F(1-x)$ we may assume that $c\ge1/2$. The problem is to determine a polynomial $G(x)=G_c(x)$ of minimal possible degree, say $n$, such that $G(x)>0$ for $0 < x < 1$ and the derivative of $F(x)=x(1-x)G(x)$ changes sign at $x=c$ and $F(x)\le F(c)$ for $0 \le x \le1$. Clearly, $G_{1/2}(x)\equiv1$ and with a very little work $G_c(x)=x+c(2-3c)/(2c-1)$ for $1/2 < c\le2/3$, so $n=1$. The case $n=2$ produces $$G(x)=x^2+a\biggl(x-\frac{c(3c-2)}{2c-1}\biggr)-\frac{c^2(4c-3)}{2c-1},$$ so for $a \ge 0$ we have $G(x) > G(0)=c^2(3-4c)/(2c-1)$ on $x\in(0,1)$, while the latter expression is non-negative for $2/3 < c\le3/4$. If $a <0$, then either $G(x)$ is not positive for $0 < x < 1$ or $F(x)$ attains its maximum at a different point of the interval $0 < x < 1$. This is however a little bit technical to show. (For example, if we take $a=1-3c$, then for the corresponding polynomial $G(x)$ we indeed have $G(x) > 0$, since $$G(x)\ge G\biggl(\frac{3c-1}2\biggr)=\frac{(2c+1)(c-1)^2}{4(2c-1)}.$$ But $x=c$ is not the maximum of $F(x)=x(1-x)G(x)$ on the interval.)
If the above pattern remains, then for $n/(n+1) < c \le (n+1)/(n+2)$ the minimal possible degree of the polynomial $G(x)$ seems to be $n$ (so that $\deg F=n+2$), with the corresponding choice $$G_c(x)=x^n-\frac{c^n((n+2)c-(n+1))}{2c-1}.$$ The limiting case $c=1$ is in favor of this observation: there is no polynomial $F(x)\not\equiv0$ of the assumed form which attains its maximum at $x=1$. So, the expected answer to the original question would be $\deg F=\lceil 1/\min(c,1-c)\rceil$, where $\lceil x\rceil=n$ when $n-1 < x \le n$.
Edit. With the above choice of $G_c(x)$, $F'(x)=0$ on the interval $0 < x < 1$ only at $x=c$. Therefore, this choice results in the estimate $\deg F \le \lceil 1/\min(c,1-c)\rceil$, which is sharp at least for $1/3 \le c \le 2/3$.
Edit 2. Bill's answer gives pretty much evidence for the fact that $1/2 < c < (1+\cos(\pi/n))/2$ gives the estimate $\deg F\le n$ for $n$ even. More remarkably, this is indeed related to the Chebyshev polynomials. The most unpleasant thing is a necessary amount of technical work to be done (but Bill's answer contains all details for such calculations).
-
But ... for your n = 2, the polynomial $f_{n+2}$ in my answer below achieves $c = (2 + \sqrt 2)/4 = .853553 > 3/4$. I'll attempt to put the comparison of the plots in this comment:  – Bill Thurston Dec 29 2010 at 1:57
I played with choosing $a<0$ (experimentally only) and discovered that $F(x)$ has its maximum at a different point. Your plot suggests that there are 2 maximums, so $F(x)=F(c)$ for some $x\ne c$. Otherwise, please give an explicit example... – Wadim Zudilin Dec 29 2010 at 2:13
The statement of the problem doesn't specifically require that F attain a unique maximum at $c$. But (this is mentioned in my answer) if you do insist that the maximum should be unique, just modify it by a small linear pertubation, i.e. $g_{n, \epsilon}(x) = \epsilon x + f_n((1-\epsilon') x)$, where $\epsilon'$ is chosen to solve $g_{n,\epsilon}(1)=0$. The limit of the $c$'s as $\epsilon \rightarrow 0$ is that for $f_n(x)$, namely $(1 + \cos(\pi/n))/2$. – Bill Thurston Dec 29 2010 at 4:47
http://physics.stackexchange.com/questions/17116/what-is-the-physical-meaning-of-a-product-of-vectors?answertab=oldest
# What is the physical meaning of a product of vectors?
My teacher told me that Vectors are quantities that behave like Displacements. Seen this way, the triangle law of vector addition simply means that to reach point C from point A, going from A to B & then to C is equivalent to going from A to C directly.
But what is the meaning of a product of vectors? I cannot imagine how a product of displacements would look like in reality. Also, how do we know whether we need the scalar (dot) product or a vector (cross) product?
-
""Also, how do we know whether we need the scalar (dot) product or a vector (cross) product?"" In case You need it really, You will know. – Georg Nov 17 '11 at 12:48
@Georg Can you give an example? – Green Noob Nov 17 '11 at 12:54
@Georg Still didn't understand what a product of displacement means :( – Green Noob Nov 17 '11 at 13:04
Force is a vector. Displacement is a vector. Their Dot product is a scalar Energy. I find the best way to understand these concepts is by considering meaningful combinations. For the cross product consider angular velocity and displacement. – ja72 Feb 6 '12 at 20:41
## 2 Answers
You seem to look for geometrical meanings. The cross product gives the area of the parallelogram that is spanned by the two vectors as the length of the resulting vector and the direction perpendicular to both vectors. The scalar product gives you information about the component of one vector into the direction of the other.
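As a quick numerical illustration of both facts (a sketch in R; the two vectors are arbitrary examples, and the small `cross` helper is written out because base R has no built-in cross product):

```
a <- c(2, 1, 0)
b <- c(1, 3, 0)
sum(a * b)                     # dot product = |a||b|cos(angle); here 5
cross <- function(u, v) c(u[2]*v[3] - u[3]*v[2],
                          u[3]*v[1] - u[1]*v[3],
                          u[1]*v[2] - u[2]*v[1])
sqrt(sum(cross(a, b)^2))       # length of the cross product = area of the parallelogram; also 5 here
```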
As Georg said, you will probably know when you need it. I also found that school makes this stuff more complicated than it needs to be by just letting the students memorize particles of information instead of teaching understanding. If you have to stay with memorizing, a pretty clear way of distinguishing the scalar and vector products is how the result depends on the direction of the vectors: the cross product gives its maximal value when the vectors are at a 90° angle to each other and 0 at 0°, while the scalar product does the reverse.
About the meaning: I would not think in displacements. A force has nothing to do with displacements, for starters. A vector is a scalar magnitude with a direction, or even more generally, just a bunch of numbers - generally more than 1 - with certain operations, like the possibility of adding two vectors.
-
Why can we not think in terms of displacements? Aren't they vectors too? – Green Noob Nov 17 '11 at 13:23
Displacements can be described with vectors, but not every vector is a displacement, so sooner or later you will run into problems. Example: I lean against a wall in x-direction. The force acting on the wall is F = (10,0,0) N. The wall obviously does not move. Where is the displacement? – mcandril Nov 17 '11 at 13:45
The displacement corresponding to a product of vectors doesn't have a natural intuitive geometrical connection to the displacements corresponding to the two original vectors. So it's not a useful intuitive way to visualize it, unlike the case with the sum. Also, I want to disagree with everybody who says not to think in displacements. You should understand that there is more than one way in which vectors appear (so they're not always displacements). However, thinking in terms of displacements is a very good way to get an intuitive feeling for some aspects of vectors (not products, though). – Peter Shor Nov 17 '11 at 14:32
@mcandril I think we can think of vectors as displacements. In your example, the wall doesn't move because the resultant of forces is zero that is, the wall exerts a normal force which cancels out your weight. In terms of displacement, it is the same as walking from A to B(your weight) & walking back following the same path backwards from B to A (normal force) & hence you have no apparent change in position. – Green Noob Nov 19 '11 at 14:52
Note that I said that vectors are quantities that behave like displacements. I never said that vectors are quantities that cause displacement. Please clarify. – Green Noob Nov 19 '11 at 14:54
It is a bit misleading to think of Vectors as displacements. Vectors are abstract mathematical objects that live in a Vector Space over a Field (say Real number field). A vector is a higher-order animal that is obtained when you pour the Field over a Vector group.
Quick and dirty introduction:
1. Build a set with a collection of objects.
2. Establish a relationship between the objects by means of an operation. (Say multiplication).
3. See if it forms a Group. (We assume yes).
4. Now bring in a Field (Set which forms a Group under two operations) and form a new algebraic structure called a Vector Space over a Field by establishing certain combination rules between elements of the Field and the elements of the group. To make life easier we choose a field that has one operation the same as the Group.
The Field serves to fill the "holes" between elements in the Group by giving you the ability to scale vectors. Vector "Products" are obtained by asking the question "how do we make vectors talk to each other"? Inner products yield elements in the Field (scalars) and wedge products yield another vector that is not in the same sub-space as the two original vectors.
How do you know whether a physical system can be represented by an inner or outer product? Well, the easiest way to check is experimentally. For example how do we know if $\vec{F}=q(\vec{v}\times\vec{B})$ and not $q(\vec{B}\times\vec{v})$ ?? This is by experiment.
*Remember that when we measure something, we do so in the Field because our results are numbers.* This is a critical concept.
There is a lot more to say and I'll edit this when I have the time. Abstract Algebra is a beautiful subject. Hope this helps. :)
Edit #1: The triangle law of addition comes out naturally when you write down the rules that result in the formation of a vector space. All these geometric pictures are misleading because they are presented to students as the absolute concept. You can ask the question "Why is a vector represented by an arrow?". My opinion (I have never seen this discussed anywhere) is that by giving a "direction" you inherently establish an ordering within the set. A lot more can be said if you think deeper, but I guess I have confused the OP already. :) :)
-
Thanks for answering but I didn't understand anything because I'm not familiar with abstract algebra :( And further, since we are using vectors in Physics, I believe there must be a physical way of looking at vectors rather than an abstract mathematical one. – Green Noob Nov 17 '11 at 13:29
Just think about the steps I listed and you will get it eventually. I don't know if there is a physical way of looking at vectors without running into problems down the line. :) What I wrote is probably at a much higher level than you are accustomed to, but it never hurts to get your feet wet if you really want to understand what the deal is? – Antillar Maximus Nov 17 '11 at 13:34
http://stats.stackexchange.com/questions/19021/comparing-original-variables-with-characteristic-values-of-diagonalized-variance
# Comparing original variables with characteristic values of diagonalized variance-covariance matrix
I have a reference data set comprising repeated measurements of 3 variables of a system in state $A$. Given new observations of these variables for a different system, I would like to classify individual observations as being in state $A$ or not.
My initial inclination would be to compare the new value of each variable to the distribution in the reference state. However, a manuscript I am reading for a similar type of analysis suggests to instead construct a $3 \times 3$ covariance matrix based on the reference state, and diagonalize this covariance matrix. I am advised that I can compare each new set of measurements of the 3 variables against this diagonalized matrix. My understanding is that to follow this method, I need to examine the relative deviation of the new variables ($x_i$) from the eigenvalues ($\lambda_i$) of the new matrix and determine if this deviation is within a certain tolerance.
$\sum_{i=1}^3 \left(x_i - \sqrt{\lambda_i}\right)^2$
It seems that in this case I am comparing a set of new measurements $x_i$ against the loadings of a set of transformed variables (according to my interpretation of PCA).
Is this a common method of classification, and is there a term for this type of analysis? I was not aware that the singular values (square root of eigenvalues) would be directly comparable to the original variables.
Thank you in advance.
-
There must be something missing from this recipe you are following, because it appears to make no statistical sense and in many cases will be a terrible procedure. Is this "manuscript" accessible to the rest of the world to consult? – whuber♦ Nov 27 '11 at 23:33
@whuber, thank you for this response. I am not sure if the author would like his manuscript scrutinized on a public forum, but I will try to contact him directly. But thank you for your feedback, I thought something was amiss and I may have misinterpreted this portion of the manuscript. – crippledlambda Nov 28 '11 at 3:43
Perhaps you could quote the relevant part anonymously. If there is a possibility of multiple interpretations, we can suggest other ways to make sense of it, and if there is an error in the manuscript, you can share that with its author, who in either case should be grateful for the opportunity to clarify or correct their work. – whuber♦ Nov 28 '11 at 15:43
Thanks so much - I spoke with the author and it turns out that the new observations should also be projected onto the new basis set for comparison. – crippledlambda Dec 2 '11 at 5:22
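For concreteness, here is a minimal sketch in R of the procedure as resolved in the comments: diagonalize the reference covariance, project a new observation onto that eigenbasis, and compare the scaled deviation against a tolerance. The simulated data, the centering step, and the chi-squared cutoff are illustrative assumptions, not the manuscript's prescription.

```
set.seed(1)
ref <- matrix(rnorm(300), ncol = 3)      # stand-in for repeated reference measurements (state A)
mu  <- colMeans(ref)
eig <- eigen(cov(ref))                   # diagonalize the 3x3 covariance matrix
x_new <- c(0.2, -1.1, 0.5)               # one new observation of the 3 variables
z  <- t(eig$vectors) %*% (x_new - mu)    # coordinates of the new observation in the eigenbasis
d2 <- sum(z^2 / eig$values)              # squared Mahalanobis-type distance
d2 > qchisq(0.99, df = 3)                # TRUE would flag the observation as "not state A"
```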
http://math.stackexchange.com/questions/287869/boolean-simplification-identifying-a-rule/287874
Boolean Simplification: Identifying a rule
I'm in the process of minimizing a boolean equation, and I've gotten it into the following form:
$$\lnot B \lor (B \land \lnot C) \lor C$$
Just by looking at it, I can tell that this is always TRUE. The complements of the center term are covered by the other two. However, I can't quite prove it formally.
I can split the middle term using idempotency:
$$\lnot B \lor (B \land \lnot C) \lor (B \land \lnot C) \lor C$$
Then (again, intuitively) I can tell that the two left terms simplify to: $\lnot B \lor \lnot C$
What is the rule (or rules) that can get the left two terms into this form? Once I know that rule, I can formally solve the equation like so:
$$\lnot B \lor \lnot C \lor C\lor B$$ $$(\lnot B \lor B) \lor (\lnot C \lor C)$$ $$1$$
Any help would be greatly appreciated!
-
2 Answers
$$\lnot B \lor (B \land \lnot C) \lor C$$ rearrange $$(\lnot B \lor C) \lor (B \land \lnot C)$$
Let's apply the De Morgan rule here quick:
$$(\lnot B \lor C) \lor \lnot(\lnot B \lor C)$$
I think it is called the "law of excluded middle" that $A\lor \lnot A$ is always true, and that's what you have here for $A = (\lnot B \lor C)$.
-
Aha! This is the nice, simple answer I was looking for. Thanks for the clarification :) – BraedenP Jan 27 at 4:28
Sorry I deleted my previous comment which said "please accept". And of course it's ok if the other never got a good answer. – chx Jan 27 at 4:34
$$\lnot B \lor (B \land \lnot C) \lor C$$ $$(\lnot B \lor B) \land (\lnot B \lor\lnot C) \lor C$$ $$T \land (\lnot B \lor\lnot C) \lor C$$ $$(\lnot B \lor\lnot C) \lor C$$ $$\lnot B \lor (\lnot C \lor C)$$ $$\lnot B \lor T$$ $$T$$
-
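Whichever algebraic route you take, a brute-force truth table confirms the result; a quick check in R:

```
vals <- expand.grid(B = c(TRUE, FALSE), C = c(TRUE, FALSE))
with(vals, !B | (B & !C) | C)   # TRUE for all four assignments, so the expression is a tautology
```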
http://en.wikipedia.org/wiki/Matrix_exponential
# Matrix exponential
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. Abstractly, the matrix exponential gives the connection between a matrix Lie algebra and the corresponding Lie group.
Let X be an n×n real or complex matrix. The exponential of X, denoted by $e^X$ or exp(X), is the n×n matrix given by the power series
$e^X = \sum_{k=0}^\infty{1 \over k!}X^k.$
The above series always converges, so the exponential of X is well-defined. Note that if X is a 1×1 matrix the matrix exponential of X is a 1×1 matrix consisting of the ordinary exponential of the single element of X.
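As a concrete illustration of the definition, the series can be summed term by term (a sketch in R, not a numerically robust general-purpose routine; for the nilpotent example below the series terminates, so the result $I+X$ is exact):

```
# truncated power series for exp(X); fine for small matrices of modest norm
mexp_series <- function(X, k_max = 30) {
  S <- diag(nrow(X)); term <- diag(nrow(X))
  for (k in 1:k_max) {
    term <- term %*% X / k   # term now equals X^k / k!
    S <- S + term
  }
  S
}
X <- matrix(c(0, 1, 0, 0), 2, 2)   # X^2 = 0, so exp(X) = I + X exactly
mexp_series(X)
```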
## Properties
Let X and Y be n×n complex matrices and let a and b be arbitrary complex numbers. We denote the n×n identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following properties:
• $e^0 = I$
• $e^{aX}e^{bX} = e^{(a + b)X}$
• $e^X e^{-X} = I$
• If $XY = YX$ then $e^Xe^Y = e^Ye^X = e^{X + Y}$.
• If $Y$ is invertible then $e^{YXY^{-1}} = Ye^XY^{-1}$.
• $\exp(X^T) = (\exp X)^T$, where $X^T$ denotes the transpose of $X$. It follows that if $X$ is symmetric then $e^X$ is also symmetric, and that if $X$ is skew-symmetric then $e^X$ is orthogonal.
• $\exp(X^*) = (\exp X)^*$, where $X^*$ denotes the conjugate transpose of $X$. It follows that if $X$ is Hermitian then $e^X$ is also Hermitian, and that if $X$ is skew-Hermitian then $e^X$ is unitary.
### Linear differential equation systems
Main article: matrix differential equation
One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of
$\frac{d}{dt} y(t) = Ay(t), \quad y(0) = y_0,$
where A is a constant matrix, is given by
$y(t) = e^{At} y_0. \,$
The matrix exponential can also be used to solve the inhomogeneous equation
$\frac{d}{dt} y(t) = Ay(t) + z(t), \quad y(0) = y_0.$
See the section on applications below for examples.
There is no closed-form solution for differential equations of the form
$\frac{d}{dt} y(t) = A(t) \, y(t), \quad y(0) = y_0,$
where A is not constant, but the Magnus series gives the solution as an infinite sum.
### The exponential of sums
We know that the exponential function satisfies $e^{x + y} = e^x e^y$ for any real numbers (scalars) $x$ and $y$. The same goes for commuting matrices: If the matrices $X$ and $Y$ commute (meaning that $XY = YX$), then
$e^{X+Y} = e^Xe^Y. \,$
However, if they do not commute, then the above equality does not necessarily hold, in which case we can use the Baker–Campbell–Hausdorff formula to compute $e^{X + Y}$.
The converse is false: the equation $e^{X + Y} = e^Xe^Y$ does not necessarily imply that $X$ and $Y$ commute.
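A quick numerical sketch of this point, using a standard illustrative pair of non-commuting matrices (any other non-commuting pair would do; `scipy` assumed):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 0.0],
              [1.0, 0.0]])

print(np.allclose(X @ Y, Y @ X))                     # False: X and Y do not commute
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))   # False: the identity fails here
print(np.allclose(expm(X + X), expm(X) @ expm(X)))   # True: it holds for commuting matrices
```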
For Hermitian matrices there are two notable theorems related to the trace of matrix exponentials:
#### Golden–Thompson inequality
Main article: Golden–Thompson inequality
If A and H are Hermitian matrices, then
$\operatorname{tr}\exp(A+H) \leq \operatorname{tr}(\exp(A)\exp(H)).$ [1]
Note that there is no requirement of commutativity. There are counterexamples showing that the Golden–Thompson inequality cannot be extended to three matrices (and, in any event, $\operatorname{tr}(\exp(A)\exp(B)\exp(C))$ is not guaranteed to be real for Hermitian $A$, $B$, $C$). However, the next theorem accomplishes this in a way.
#### Lieb's theorem
Lieb's theorem, named after Elliott H. Lieb, states that for a fixed Hermitian matrix $H$, the function
$f(A) = \operatorname{tr} \,\exp \left (H + \log A \right)$
is concave on the cone of positive-definite matrices. [2]
### The exponential map
Note that the exponential of a matrix is always an invertible matrix. The inverse matrix of eX is given by e−X. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map
$\exp \colon M_n(\mathbb C) \to \mathrm{GL}(n,\mathbb C)$
from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible matrices. In fact, this map is surjective which means that every invertible matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field C of complex numbers and not R).
For any two matrices X and Y, we have
$\| e^{X+Y} - e^X \| \le \|Y\| e^{\|X\|} e^{\|Y\|},$
where || · || denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of Mn(C).
The map
$t \mapsto e^{tX}, \qquad t \in \mathbb R$
defines a smooth curve in the general linear group which passes through the identity element at t = 0. In fact, this gives a one-parameter subgroup of the general linear group since
$e^{tX}e^{sX} = e^{(t+s)X}.\,$
The derivative of this curve (or tangent vector) at a point t is given by
$\frac{d}{dt}e^{tX} = Xe^{tX} = e^{tX}X. \qquad (1)$
The derivative at t = 0 is just the matrix X, which is to say that X generates this one-parameter subgroup.
More generally, (R. M. Wilcox 1966)
$\frac{d}{dt}e^{X(t)} = \int_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1-\alpha) X(t)}\,d\alpha.$
Taking $e^{X(t)}$ outside the integral sign in the above expression and expanding the integrand with the help of the Hadamard lemma, one can obtain the following useful expression for the derivative of the matrix exponential:
$\left(\frac{d}{dt}e^{X(t)}\right)e^{-X(t)} = \frac{d}{dt}X(t) + \frac{1}{2!}[X(t),\frac{d}{dt}X(t)] + \frac{1}{3!}[X(t),[X(t),\frac{d}{dt}X(t)]]+\cdots$
### The determinant of the matrix exponential
By Jacobi's formula, for any complex square matrix the following identity holds:
$\det (e^A)= e^{\operatorname{tr}(A)}~.$
In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right-hand side of the above equation is always non-zero, so $\det(e^A) \neq 0$, which means that $e^A$ must be invertible.
In the real-valued case, the formula also shows that the map $\exp \colon M_n(\mathbb R) \to \mathrm{GL}(n,\mathbb R)$ is not surjective, in contrast to the complex case mentioned earlier. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.
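A one-line numerical check of Jacobi's formula on an arbitrary random matrix (a sketch, assuming `numpy` and `scipy`):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))            # arbitrary real matrix
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # True
```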
## Computing the matrix exponential
Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Both MATLAB and GNU Octave use a Padé approximant.[3][4] Several methods are listed below.
### Diagonalizable case
If a matrix is diagonal:
$A=\begin{bmatrix} a_1 & 0 & \ldots & 0 \\ 0 & a_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & a_n \end{bmatrix},$
then its exponential can be obtained by just exponentiating every entry on the main diagonal:
$e^A=\begin{bmatrix} e^{a_1} & 0 & \ldots & 0 \\ 0 & e^{a_2} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & e^{a_n} \end{bmatrix}.$
This also allows one to exponentiate diagonalizable matrices: if $A = UDU^{-1}$ and $D$ is diagonal, then $e^A = Ue^DU^{-1}$. Applying Sylvester's formula yields the same result. The reason is that multiplication of diagonal matrices amounts to element-wise multiplication; in particular, the "one-dimensional" exponentiation acts element-wise along the diagonal.
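A short sketch of the eigendecomposition route for an illustrative diagonalizable matrix (symmetric here, for simplicity; `scipy.linalg.expm` is only the reference):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                       # symmetric, hence diagonalizable
w, U = np.linalg.eig(A)                          # A = U diag(w) U^{-1}
eA = U @ np.diag(np.exp(w)) @ np.linalg.inv(U)   # e^A = U e^D U^{-1}
print(np.allclose(eA, expm(A)))                  # True
```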
### Projection case
If the matrix in question is a projection matrix ($P^2 = P$), then its matrix exponential is $e^P = I + (e - 1)P$, which is easy to show by expanding the definition of the exponential:
$e^P = I + \sum_{k=1}^{\infty} \frac{P^k}{k!}=I+\left(\sum_{k=1}^{\infty} \frac{1}{k!}\right)P=I+(e-1)P$
### Nilpotent case
A matrix $N$ is nilpotent if $N^q = 0$ for some integer $q$. In this case, the matrix exponential $e^N$ can be computed directly from the series expansion, as the series terminates after a finite number of terms:
$e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q-1)!}N^{q-1}.$
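A sketch of the finite series for an illustrative nilpotent matrix (strictly upper triangular, so $N^3 = 0$ and only three terms are needed):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])                  # strictly upper triangular, so N^3 = 0

eN = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(3))
print(np.allclose(eN, expm(N)))                  # True
```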
### Generalization
When the minimal polynomial of a matrix X can be factored into a product of first degree polynomials, it can be expressed as a sum
$X = A + N \,$
where
• A is diagonalizable
• N is nilpotent
• A commutes with N (i.e. AN = NA)
This is the Jordan–Chevalley decomposition.
This means that we can compute the exponential of X by reducing to the previous two cases:
$e^X = e^{A+N} = e^A e^N. \,$
Note that we need the commutativity of A and N for the last step to work.
Another (closely related) method if the field is algebraically closed is to work with the Jordan form of X. Suppose that X = PJP −1 where J is the Jordan form of X. Then
$e^{X}=Pe^{J}P^{-1}.\,$
Also, since
$J=J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_n}(\lambda_n),$
$\begin{align} e^{J} & {} = \exp \big( J_{a_1}(\lambda_1)\oplus J_{a_2}(\lambda_2)\oplus\cdots\oplus J_{a_n}(\lambda_n) \big) \\ & {} = \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus\cdots\oplus \exp \big( J_{a_n}(\lambda_n) \big). \end{align}$
Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form
$J_{a}(\lambda) = \lambda I + N \,$
where N is a special nilpotent matrix. The matrix exponential of this block is given by
$e^{\lambda I + N} = e^{\lambda}e^N. \,$
#### Alternative
If $P$ and $Q_t$ are nonzero polynomials in one variable, such that $P(A) = 0$, and if the meromorphic function
$f(z)=\frac{e^{t z}-Q_t(z)}{P(z)}$
is entire, then
$e^{t A} = Q_t(A)$.
To prove this, multiply the first of the two above equalities by P(z) and replace z by A.
Such a polynomial $Q_t$ can be found as follows. Let $a$ be a root of $P$, and $Q_{a,t}$ the product of $P$ by the principal part of the Laurent series of $f$ at $a$. Then the sum $S_t$ of the $Q_{a,t}$, where $a$ runs over all the roots of $P$, can be taken as a particular $Q_t$. All the other $Q_t$ will be obtained by adding a multiple of $P$ to $S_t$. In particular, $S_t$ is the only $Q_t$ whose degree is less than that of $P$.
Consider the case of a 2-by-2 matrix
$A:=\begin{bmatrix} a & b \\ c & d \end{bmatrix}.$
The exponential matrix $e^{tA}$ is of the form $e^{tA}=s_0(t)\,I+s_1(t)\,A$. (For any complex number $z$ and any $\mathbb{C}$-algebra $B$ we denote again by $z$ the product of $z$ by the unit of $B$.) Let $\alpha$ and $\beta$ be the roots of the characteristic polynomial
$X^2-(a+d)\ X+ ad-bc. \,$
Then we have
$s_0(t)=\frac{\alpha\,e^{\beta t} -\beta\,e^{\alpha t}}{\alpha-\beta},\quad s_1(t)=\frac{e^{\alpha t}-e^{\beta t}}{\alpha-\beta}\quad$
if $\alpha\not=\beta$, and
$s_0(t)=(1-\alpha\,t)\,e^{\alpha t},\quad s_1(t)=t\,e^{\alpha t}\quad$
if $\alpha=\beta$.
In either case, writing:
$s = \frac{\alpha + \beta}{2}=\frac{\operatorname{tr} A}{2},$
and
$q = \frac{\alpha-\beta}{2}=\pm\sqrt{-\det\left(A-s I\right)},$
$s_0(t) = e^{s t}\left(\cosh (q t) - s\,t\,\frac{\sinh (q t)}{q t}\right),\quad s_1(t) =e^{s t}\,t\,\frac{\sinh(q t)}{q t},$
where
$\frac{\sinh (q t)}{q t}$ is taken to be $1$ when $q t = 0$.
The polynomial $S_t$ can also be given the following "interpolation" characterization. Put $e_t(z):=e^{tz}$, $n:=\deg P$. Then $S_t$ is the unique degree $<n$ polynomial which satisfies $S_t^{(k)}(a)=e_t^{(k)}(a)$ whenever $k$ is less than the multiplicity of $a$ as a root of $P$.
We assume (as we obviously can) that $P$ is the minimal polynomial of $A$.
We also assume that $A$ is a diagonalizable matrix. In particular, the roots of $P$ are simple, and the "interpolation" characterization tells us that $S_t$ is given by the Lagrange interpolation formula.
At the other extreme, if $P=(X-a)^n$, then
$S_t=e^{at}\ \sum_{k=0}^{n-1}\ \frac{t^k}{k!}\ (X-a)^k.$
The simplest case not covered by the above observations is when $P=(X-a)^2\,(X-b)$ with $a\not=b$, which gives
$S_t=e^{at}\ \frac{X-b}{a-b}\ \Bigg(1+\left(t+\frac{1}{b-a}\right)(X-a)\Bigg)+e^{bt}\ \frac{(X-a)^2}{(b-a)^2}\quad.$
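As a symbolic sketch, the 2-by-2 case above can be checked directly; the matrix below is an arbitrary example with distinct eigenvalues, and `sympy` is assumed:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
A = sp.Matrix([[1, 2],
               [0, 3]])                              # illustrative 2x2, distinct eigenvalues 1 and 3

alpha, beta = sorted(A.eigenvals())                  # roots of the characteristic polynomial
s0 = (alpha*sp.exp(beta*t) - beta*sp.exp(alpha*t)) / (alpha - beta)
s1 = (sp.exp(alpha*t) - sp.exp(beta*t)) / (alpha - beta)

residual = sp.simplify(s0*sp.eye(2) + s1*A - (A*t).exp())   # s0 I + s1 A minus e^{tA}
print(residual)                                             # the zero matrix
```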
### via Laplace transform
As above, we know that the solution to the system of linear differential equations given by $\frac{d}{dt} y(t) = Ay(t), y(0) = y_0$ is $y(t) = e^{At} y_0$. Using the Laplace transform, letting $Y(s) = \mathcal{L}\{y\}$, and applying it to the differential equation, we get
$sY(s) - y_0 = AY(s) \Rightarrow (sI - A)Y(s) = y_0$
where $I$ is the identity matrix. Therefore $y(t) = \mathcal{L}^{-1}\{(sI-A)^{-1}\}y_0$. Thus it can be concluded that $e^{At} = \mathcal{L}^{-1}\{(sI-A)^{-1}\}$. And from this we can find $e^A$ by setting $t = 1$.
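A symbolic sketch of this Laplace-transform route on an illustrative 2-by-2 matrix (assuming `sympy`; the entries of the resolvent are inverted term by term):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, 1],
               [-2, -3]])                            # illustrative matrix, eigenvalues -1 and -2

resolvent = (s*sp.eye(2) - A).inv()                  # (sI - A)^{-1}
etA = resolvent.applyfunc(lambda e: sp.inverse_laplace_transform(e, s, t))
print(sp.simplify(etA - (A*t).exp()))                # the zero matrix
```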
## Calculations
Suppose that we want to compute the exponential of
$B=\begin{bmatrix} 21 & 17 & 6 \\ -5 & -1 & -6 \\ 4 & 4 & 16 \end{bmatrix}.$
Its Jordan form is
$J = P^{-1}BP = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 16 & 1 \\ 0 & 0 & 16 \end{bmatrix},$
where the matrix P is given by
$P=\begin{bmatrix} -\frac14 & 2 & \frac54 \\ \frac14 & -2 & -\frac14 \\ 0 & 4 & 0 \end{bmatrix}.$
Let us first calculate exp(J). We have
$J=J_1(4)\oplus J_2(16) \,$
The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so $\exp(J_1(4)) = [e^4]$. The exponential of $J_2(16)$ can be calculated by the formula $e^{\lambda I + N} = e^{\lambda} e^N$ mentioned above; this yields[5]
$\begin{align} \exp \left( \begin{bmatrix} 16 & 1 \\ 0 & 16 \end{bmatrix} \right) & = e^{16} \exp \left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) \\[6pt] & = e^{16} \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + {1 \over 2!}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \cdots \right) = \begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}. \end{align}$
Therefore, the exponential of the original matrix B is
$\begin{align} \exp(B) & = P \exp(J) P^{-1} = P \begin{bmatrix} e^4 & 0 & 0 \\ 0 & e^{16} & e^{16} \\ 0 & 0 & e^{16} \end{bmatrix} P^{-1} \\[6pt] & = {1\over 4} \begin{bmatrix} 13e^{16} - e^4 & 13e^{16} - 5e^4 & 2e^{16} - 2e^4 \\ -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\ 16e^{16} & 16e^{16} & 4e^{16} \end{bmatrix}. \end{align}$
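A numerical cross-check of this worked example (a sketch; the entries are copied from the closed form above and `scipy.linalg.expm` is taken as the reference):

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[21.0, 17.0, 6.0],
              [-5.0, -1.0, -6.0],
              [4.0, 4.0, 16.0]])
e4, e16 = np.exp(4.0), np.exp(16.0)

# The closed form obtained above from the Jordan decomposition.
closed_form = 0.25 * np.array([
    [13*e16 - e4,  13*e16 - 5*e4,  2*e16 - 2*e4],
    [-9*e16 + e4,  -9*e16 + 5*e4, -2*e16 + 2*e4],
    [16*e16,        16*e16,        4*e16]])
print(np.allclose(expm(B), closed_form))   # True
```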
## Applications
### Linear differential equations
The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a differential equation of the form
$\mathbf{y}' = C\mathbf{y}$
has solution $e^{Ct}\mathbf{y}(0)$. If we consider the vector
$\mathbf{y}(t) = \begin{pmatrix} y_1(t) \\ \vdots \\y_n(t) \end{pmatrix}$
we can express a system of coupled linear differential equations as
$\mathbf{y}'(t) = A\mathbf{y}(t)+\mathbf{b}(t).$
If we make an ansatz and use an integrating factor of $e^{-At}$ and multiply throughout, we obtain
$e^{-At}\mathbf{y}'-e^{-At}A\mathbf{y} = e^{-At}\mathbf{b}$
$e^{-At}\mathbf{y}'-Ae^{-At}\mathbf{y} = e^{-At}\mathbf{b}$
$\frac{d}{dt} (e^{-At}\mathbf{y}) = e^{-At}\mathbf{b}.$
The second step is possible due to the fact that if $AB = BA$ then $e^{At}B=Be^{At}$. If we can calculate $e^{At}$, then we can obtain the solution to the system.
#### Example (homogeneous)
Say we have the system
$\begin{matrix} x' &=& 2x&-y&+z \\ y' &=& &3y&-1z \\ z' &=& 2x&+y&+3z \end{matrix}$
We have the associated matrix
$M=\begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}$
The matrix exponential
$e^{tM}=\frac{1}{2}\begin{bmatrix} e^{2t}(1+e^{2t}-2t) & -2te^{2t} & e^{2t}(-1+e^{2t}) \\ -e^{2t}(-1+e^{2t}-2t) & 2(t+1)e^{2t} & -e^{2t}(-1+e^{2t}) \\ e^{2t}(-1+e^{2t}+2t) & 2te^{2t} & e^{2t}(1+e^{2t}) \end{bmatrix}$
so the general solution of the system is
$\begin{bmatrix}x \\y \\ z\end{bmatrix}= C_1\begin{bmatrix}e^{2t}(1+e^{2t}-2t) \\-e^{2t}(-1+e^{2t}-2t)\\e^{2t}(-1+e^{2t}+2t)\end{bmatrix} +C_2\begin{bmatrix}-2te^{2t}\\2(t+1)e^{2t}\\2te^{2t}\end{bmatrix} +C_3\begin{bmatrix}e^{2t}(-1+e^{2t})\\-e^{2t}(-1+e^{2t})\\e^{2t}(1+e^{2t})\end{bmatrix}$
that is,
$\begin{align} x & = C_1(e^{2t}(1+e^{2t}-2t)) + C_2(-2te^{2t}) + C_3(e^{2t}(-1+e^{2t})) \\ y & = C_1(-e^{2t}(-1+e^{2t}-2t)) + C_2(2(t+1)e^{2t}) + C_3(-e^{2t}(-1+e^{2t})) \\ z & = C_1(e^{2t}(-1+e^{2t}+2t)) + C_2(2te^{2t}) + C_3(e^{2t}(1+e^{2t})) \end{align}$
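The displayed form of $e^{tM}$ (note the overall factor $\frac{1}{2}$, which makes $e^{0\cdot M}=I$) can be checked numerically; a short sketch, assuming `scipy`, at a few arbitrary times:

```python
import numpy as np
from scipy.linalg import expm

M = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])

def etM(t):
    """The closed form displayed above, including the overall factor 1/2."""
    e = np.exp(2*t)
    return 0.5 * np.array([
        [e*(1 + e - 2*t),    -2*t*e,        e*(-1 + e)],
        [-e*(-1 + e - 2*t),   2*(t + 1)*e,  -e*(-1 + e)],
        [e*(-1 + e + 2*t),    2*t*e,         e*(1 + e)]])

for t in (0.0, 0.5, 1.3):
    print(np.allclose(expm(M * t), etM(t)))   # True for each t
```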
#### Inhomogeneous case – variation of parameters
For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters). We seek a particular solution of the form yp(t) = exp(tA) z (t) :
$\begin{align} \mathbf{y}_p'(t) & = (e^{tA})'\mathbf{z}(t)+e^{tA}\mathbf{z}'(t) \\[6pt] & = Ae^{tA}\mathbf{z}(t)+e^{tA}\mathbf{z}'(t) \\[6pt] & = A\mathbf{y}_p(t)+e^{tA}\mathbf{z}'(t). \end{align}$
For yp to be a solution:
$\begin{align} e^{tA}\mathbf{z}'(t) & = \mathbf{b}(t) \\[6pt] \mathbf{z}'(t) & = (e^{tA})^{-1}\mathbf{b}(t) \\[6pt] \mathbf{z}(t) & = \int_0^t e^{-uA}\mathbf{b}(u)\,du+\mathbf{c}. \end{align}$
So,
$\begin{align} \mathbf{y}_p(t) & {} = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du+e^{tA}\mathbf{c} \\ & {} = \int_0^t e^{(t-u)A}\mathbf{b}(u)\,du+e^{tA}\mathbf{c} \end{align}$
where c is determined by the initial conditions of the problem.
More precisely, consider the equation
$Y'-A\ Y=F(t)$
with the initial condition $Y(t_0)=Y_0$, where
$A$ is an $n$ by $n$ complex matrix,
$F$ is a continuous function from some open interval $I$ to $\mathbb{C}^n$,
$t_0$ is a point of $I$, and
$Y_0$ is a vector of $\mathbb{C}^n$.
Left multiplying the above displayed equality by $e^{-tA}$, we get
$Y(t)=e^{(t-t_0)A}\ Y_0+\int_{t_0}^t e^{(t-x)A}\ F(x)\ dx.$
We claim that the solution to the equation
$P(d/dt)\ y = f(t)$
with the initial conditions $y^{(k)}(t_0)=y_k$ for $0\le k<n$ is
$y(t)=\sum_{k=0}^{n-1}\ y_k\ s_k(t-t_0)+\int_{t_0}^t s_{n-1}(t-x)\ f(x)\ dx,$
where the notation is as follows:
$P\in\mathbb{C}[X]$ is a monic polynomial of degree $n>0$,
$f$ is a continuous complex valued function defined on some open interval $I$,
$t_0$ is a point of $I$,
$y_k$ is a complex number, and
$s_k(t)$ is the coefficient of $X^k$ in the polynomial denoted by $S_t\in\mathbb{C}[X]$ in Subsection Alternative above.
To justify this claim, we transform our order n scalar equation into an order one vector equation by the usual reduction to a first order system. Our vector equation takes the form
$\frac{dY}{dt}-A\ Y=F(t),\quad Y(t_0)=Y_0,$
where A is the transpose companion matrix of P. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection Alternative above.
In the case $n=2$ we get the following statement. The solution to
$y''-(\alpha+\beta)\ y' +\alpha\,\beta\ y=f(t),\quad y(t_0)=y_0,\quad y'(t_0)=y_1\ .$
is
$y(t)=y_0\ s_0(t-t_0)+y_1\ s_1(t-t_0) +\int_{t_0}^t s_1(t-x)\,f(x)\ dx,$
where the functions $s_0$ and $s_1$ are as in Subsection Alternative above.
#### Example (inhomogeneous)
Say we have the system
$\begin{matrix} x' &=& 2x & - & y & + & z & + & e^{2t} \\ y' &=& & & 3y& - & z & \\ z' &=& 2x & + & y & + & 3z & + & e^{2t}. \end{matrix}$
So we then have
$M= \left[ \begin{array}{rrr} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{array} \right]$
and
$\mathbf{b}=e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}.$
From before, we have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution (via variation of parameters).
We have, above:
$\mathbf{y}_p = e^{tA}\int_0^t e^{(-u)A}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c}$
$\mathbf{y}_p = e^{tA}\int_0^t \begin{bmatrix} 2e^u - 2ue^{2u} & -2ue^{2u} & 0 \\ \\ -2e^u + 2(u+1)e^{2u} & 2(u+1)e^{2u} & 0 \\ \\ 2ue^{2u} & 2ue^{2u} & 2e^u\end{bmatrix}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c}$
$\mathbf{y}_p = e^{tA}\int_0^t \begin{bmatrix} e^{2u}( 2e^u - 2ue^{2u}) \\ \\ e^{2u}(-2e^u + 2(1 + u)e^{2u}) \\ \\ 2e^{3u} + 2ue^{4u}\end{bmatrix}\,du+e^{tA}\mathbf{c}$
$\mathbf{y}_p = e^{tA}\begin{bmatrix} -{1 \over 24}e^{3t}(3e^t(4t-1)-16) \\ \\ {1 \over 24}e^{3t}(3e^t(4t+4)-16) \\ \\ {1 \over 24}e^{3t}(3e^t(4t-1)-16)\end{bmatrix}+ \begin{bmatrix} 2e^t - 2te^{2t} & -2te^{2t} & 0 \\ \\ -2e^t + 2(t+1)e^{2t} & 2(t+1)e^{2t} & 0 \\ \\ 2te^{2t} & 2te^{2t} & 2e^t\end{bmatrix}\begin{bmatrix}c_1 \\c_2 \\c_3\end{bmatrix}$
which can be further simplified to get the requisite particular solution determined through variation of parameters.
## References
1. Bhatia, R. (1997). Matrix Analysis. Graduate Texts in Mathematics 169. Springer.
2. E. H. Lieb (1973). "Convex trace functions and the Wigner–Yanase–Dyson conjecture". Adv. Math. 11. p. 267–288. H. Epstein (1973). "Remarks on two theorems of E. Lieb". Commun Math. Phys. 31. p. 317–325.
5. This can be generalized; in general, the exponential of $J_n(a)$ is an upper triangular matrix with $e^a/0!$ on the main diagonal, $e^a/1!$ on the diagonal above it, $e^a/2!$ on the next one, and so on.
• Horn, Roger A.; Johnson, Charles R. (1991), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1 .
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 142, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8863053321838379, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/164924-quadratic-splines-minimizing-s.html
|
# Thread:
1. ## Quadratic splines, minimizing S
Hi, I have a problem with quadratic splines. I am supposed to find $S_1$ and $S_2$ that interpolate the following points: S(-1)=0, S(0)=1, S(1)=2, and at the same time we want to find S such that $\int_{-1}^1 \! (S(x))^2 \, \mathrm{d}x$ is minimal. The answer is of the form
$S_1(x)=a_1 x^2 +b_1x +c_1$ on [-1,0] and
$S_2(x)=a_2x^2 +b_2x +c_2$ on [0,1]
my answer:
I use the data points and find that a1=-a2, b1=b2 and c1=c2=1, but I have no idea how to minimize $\int_{-1}^1 \! (S(x))^2 \, \mathrm{d}x$. Can I divide it up?
$\min\int_{-1}^1 \! (S(x))^2 \, \mathrm{d}x = \min \left(\int_{-1}^0 \! (S_1(x))^2 \, \mathrm{d}x +\int_{0}^1 \! (S_2(x))^2 \, \mathrm{d}x \right) ?$
I know I should get an expression and probably set the derivative to zero, but I just don't know how to attack the minimizing integral since the function has two parts. Help greatly appreciated.
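Splitting the integral at 0 as written above does work, since S equals $S_1$ on [-1,0] and $S_2$ on [0,1]. A minimal `sympy` sketch of that approach, using the relations derived in the post (the single remaining free parameter is called b here purely for illustration):

```python
import sympy as sp

x, b = sp.symbols('x b', real=True)

# Relations from the post: c1 = c2 = 1, b1 = b2 = b, a1 = -a2.
# The endpoint conditions S1(-1) = 0 and S2(1) = 2 then pin a1 = b - 1 and a2 = 1 - b.
S1 = (b - 1)*x**2 + b*x + 1        # on [-1, 0]
S2 = (1 - b)*x**2 + b*x + 1        # on [0, 1]

# Split the integral at 0, exactly as proposed above.
J = sp.integrate(S1**2, (x, -1, 0)) + sp.integrate(S2**2, (x, 0, 1))
J = sp.expand(J)

# J is a quadratic in the single free parameter b; set dJ/db = 0.
b_opt = sp.solve(sp.Eq(sp.diff(J, b), 0), b)
print(sp.simplify(J), b_opt)
```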
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931528627872467, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/6276/least-number-of-non-zero-coefficients-to-describe-a-degree-n-polynomial/6474
|
Least number of non-zero coefficients to describe a degree n polynomial
I'd be grateful for a good reference on this, it feels like a classic subject yet I couldn't find much about it.
Polynomials in one variable of the form $x^n+a_{n-1}x^{n-1}+\dots +a_1 x+a_0$ can be transformed into simpler expressions. For instance it is apparently well-known that the Tschirnhaus transformation allows to bring any quintic into so-called Bring-Jerrard form $x^5+ax+b$, while for degree 6 one needs at least three coefficents $x^6+ax^2+bx+c$.
Is there a name for such "generalized Bring-Jerrard form", and what is known about it? In particular there is a cryptic footnote of Arnold (page 3 of this lecture) where he says roughly that the degrees for which more coefficients are needed occur along "a rather strange infinite sequence": could someone please describe what those degrees are (I had a look at the OEIS but I believe that sequence is different from Hamilton numbers, and couldn't find a relevant one).
-
2 Answers
You might have a look at Polynomial Transformations of Tschirnhaus, Bring and Jerrard. It gives more explicit detail on why you can remove the first three terms after the leading term (covering the cases of degree 5 and 6 you mention above), but it does concentrate on degree 5.
Hamilton's 1836 paper on Jerrard's original work has an elementary explanation of the technique (much of the paper concentrates on showing that certain other reductions Jerrard proposed, including a general degree 6 polynomial to a degree 5, were "illusory"). It also explains Jerrard's trick for eliminating the 2nd, 3rd and 5th terms. Finally, Jerrard has a method for eliminating the second and fourth terms, while bringing the third and fifth coefficients into any specified ratio: this only works in degree 7 or above (Jerrard had mistakenly thought this worked generally, and thus solved the general quintic by reducing it to de Moivre's solvable form -- this all predates Abel's work!)
If by "Bring-Jerrard" form you just mean a certain number of the initial terms (after the first) have been eliminated, then the Hamilton numbers you linked to are indeed exactly what you want.
-
Thanks for the reply and the references. Right, I was confused, Arnold's footnote really deals with something else, namely the least number of non-zero coefficients of certain equations of degree n whose solutions are universal functions from solving any other equation of the same degree. So Arnold's sequence giving the numbers of those non-zero coefficients for each degree is definitely different from Hamilton's (and probably quite interesting too). – Thomas Sauvaget Nov 22 2009 at 19:51
The modern notion of the essential dimension of a group gives a precise way to state your question (and generalizations), and there is some recent work extending the work mentioned in Scott's answer. To get started, see the article
J. Buhler and Z. Reichstein, On the essential dimension of a group, Compositio Math. 106 (1997), 159-179.
For instance, it is proved there that for polynomials of degree $n$, at least $\lfloor n/2 \rfloor$ coefficients are required. (This agrees with what you mentioned for $n=5$ and $n=6$.)
-
Thank you very much for this extremely interesting reference! – Thomas Sauvaget Mar 27 2010 at 8:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941588819026947, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/89526/basis-of-vectors/89547
|
# basis of vectors
Let $\mathbf{V}$ be $\mathbb{R}^5$ with the usual Euclidean inner product, and let $\mathbf{W}$ be the subspace of $\mathbf{V}$ spanned by the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, $\mathbf{v}_4$ where: $$\begin{align*} \mathbf{v}_1&=[1,3,1,-2,3],\\\mathbf{v}_2&=[1,4,3,-1,-4],\\ \mathbf{v}_3&=[2,3,-4,-7,-3],\\\text{ and }\quad\mathbf{v}_4&=[3,8,1,-7,-8].\end{align*}$$
1. Find a basis for $\mathbf{W}$.
2. Find an orthogonal basis for $\mathbf{W}$.
3. Find an orthonormal basis for $\mathbf{W}$.
4. Let vector $\mathbf{u}=[3,8,1,-7,-8]$. Is $\mathbf{u}$ in $\mathbf{W}$ or not? If it is, find the components of $\mathbf{u}$ with respect to the orthonormal basis found in 3.
I do know that $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, $\mathbf{v}_4$ do span $\mathbf{W}$.
-
2
Well... of course they do! $\mathbf{W}$ is defined to be the subspace spanned by those vectors, so of course they span $\mathbf{W}$. Do you know how to extract a basis from a spanning set? (HINT: start getting rid of vectors that are linear combinations of vectors you already have). Do you know the Gram-Schmidt orthonormalization process? – Arturo Magidin Dec 8 '11 at 5:59
1
What did you try? What do you know? Where did you fail? // This looks like homework. If it is, you should add the (homework) tag. – Did Dec 8 '11 at 6:41
## 2 Answers
I do not attempt to give a full answer - this is a standard question, solved with standard techniques which you should familiarize yourself with.
1. Use Gaussian Elimination on the matrix containing the given vectors.
2. Use Gram-Schmidt (a short numerical sketch follows this list).
3. Ditto.
4. Solve a linear equation system with $u$ being the right-hand side and the coefficients of the system given by the basis of 3.
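A small numerical sketch of steps 1-3 on these particular vectors (assuming `numpy`; a rank test stands in for Gaussian elimination, and the names are illustrative):

```python
import numpy as np

V = np.array([[1, 3, 1, -2, 3],
              [1, 4, 3, -1, -4],
              [2, 3, -4, -7, -3],
              [3, 8, 1, -7, -8]], dtype=float)

# 1. Extract a basis: keep each vector only if it enlarges the span.
basis = []
for v in V:
    trial = np.vstack(basis + [v]) if basis else v[None, :]
    if np.linalg.matrix_rank(trial) > len(basis):
        basis.append(v)

# 2. Gram-Schmidt: orthogonalise the basis.
ortho = []
for v in basis:
    w = v - sum((v @ u) / (u @ u) * u for u in ortho)
    if not np.allclose(w, 0):
        ortho.append(w)

# 3. Normalise to get an orthonormal basis.
onb = [u / np.linalg.norm(u) for u in ortho]

# 4. Components of u with respect to the orthonormal basis are inner products;
#    reconstructing u from them succeeds exactly when u lies in W.
u = np.array([3, 8, 1, -7, -8], dtype=float)
coords = np.array([u @ q for q in onb])
print(np.allclose(sum(c * q for c, q in zip(coords, onb)), u))   # True iff u is in W
```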
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8964537382125854, "perplexity_flag": "head"}
|
http://mathforum.org/mathimages/index.php/Sine
|
# Basic Trigonometric Functions
### From Math Images
(Redirected from Sine)
## Ratios: The Idea Behind Trig Functions
Imagine two lines that extend infinitely from one point. Let's call the the angle that these lines make $\angle A$.
We can draw right triangles using $\angle A$ by creating lines that are perpendicular to one of our original two lines, as shown in Image 1. In the picture, the lines are drawn perpendicular to the side that is oriented horizontally, but they could be drawn to the other side instead without affecting our results.
Visually, we can see that $\vartriangle ABD$ and $\vartriangle ACE$ have the same three angles, and so they are similar. That is, $\overline{AC}$ divided by $\overline{AB}$ is the same as $\overline{AE}$ divided by $\overline{AD}$. (Here "$\overline{AB}$" means the line segment connecting points A and B; when we talk of dividing line segments, we're really dividing the magnitudes of those line segments.)
$\frac{ \overline{AC} }{\ \overline{AB}\ }= \frac{ \overline{AE} }{\ \overline{AD}\ }$
Those aren't the only corresponding pairs of sides in our diagram, though. It's also true (by the definition of similarity) that the ratio of any two sides in $\vartriangle ABD$ is equal to the ratio of the corresponding sides in $\vartriangle ACE$:
$\frac{ \overline{AC} }{\ \overline{AE}\ }= \frac{ \overline{AB} }{\ \overline{AD}\ }$ $\frac{ \overline{EC} }{\ \overline{AE}\ }= \frac{ \overline{DB} }{\ \overline{AD}\ }$ $\frac{ \overline{EC} }{\ \overline{AC}\ }= \frac{ \overline{DB} }{\ \overline{AB}\ }$
Because these ratios are the same any time you have a right triangle with a given angle, every angle can be thought of as having an associated collection of ratios. We use trigonometric functions to connect an angle with its associated ratios. Since "trigonometric" is a long word, we often shorten it to "trig".
## Specific Functions
Image 2. A typical right triangle.
Image 3. A right triangle with base 8, height 6, hypotenuse 10, and angle 36.9°.
The trig functions below are defined in terms of a typical right triangle, as shown in Image 2. We will then find their values for the specific triangle shown in Image 3.
#### Sine
Sine is a function that takes an angle and gives you the ratio of its opposite side divided by the hypotenuse:
$\sin A = \frac{ \overline{CB} }{\ \overline{AC}\ }$
In the triangle with angle A = 36.9°,
$\sin 36.9 = \frac{6}{10} = \frac{3}{5}$
#### Cosine
Cosine is a function that takes an angle and gives you the ratio of its adjacent side divided by the hypotenuse:
$\cos A = \frac{ \overline{AB} }{\ \overline{AC}\ }$
In the triangle with angle A = 36.9°,
$\cos 36.9 = \frac{8}{10} = \frac{4}{5}$
#### Tangent
Tangent is a function that takes an angle and gives you the ratio of its opposite side divided by its adjacent side:
$\tan A = \frac{ \overline{CB} }{\ \overline{AB}\ }$
In the triangle with angle A = 36.9°,
$\tan 36.9 = \frac{6}{8} = \frac{3}{4}$
### Reciprocal Functions
The reciprocal trig functions are functions that have a reciprocal relationship to the "Big Three" trig functions.
#### Secant
Secant is a function that takes an angle and gives you the ratio of the hypotenuse divided by its adjacent side. It is the reciprocal of cosine.
$\sec A = \frac{ \overline{AC} }{\ \overline{AB}\ } = \frac{1}{ \cos A}$
In the triangle with angle A = 36.9°,
$\sec 36.9 = \frac{10}{8} = \frac{5}{4}$
#### Cosecant
Cosecant is a function that takes an angle and gives you the ratio of the hypotenuse divided by its opposite side. It is the reciprocal of sine.
$\csc A = \frac{ \overline{AC} }{\ \overline{CB}\ } = \frac{1}{ \sin A}$
In the triangle with angle A = 36.9°,
$\csc 36.9 = \frac{10}{6} = \frac{5}{3}$
#### Cotangent
Cotangent is a function that takes an angle and gives you the ratio of its adjacent side divided by its opposite side. It is the reciprocal of tangent.
$\cot A = \frac{ \overline{AB} }{\ \overline{CB}\ } = \frac{1}{ \tan A}$
In the triangle with angle A = 36.9°,
$\cot 36.9 = \frac{8}{6} = \frac{4}{3}$
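A quick numerical sketch checking the ratios quoted above for the 6-8-10 triangle (plain Python `math`; the 36.9° used in the text is the rounded value of this angle):

```python
import math

# The 6-8-10 right triangle from Image 3: opposite = 6, adjacent = 8, hypotenuse = 10.
opp, adj, hyp = 6.0, 8.0, 10.0
A = math.atan2(opp, adj)                          # the angle at A, about 36.87 degrees

print(round(math.degrees(A), 2))                  # 36.87
print(math.isclose(math.sin(A), opp / hyp))       # sine    = 3/5
print(math.isclose(math.cos(A), adj / hyp))       # cosine  = 4/5
print(math.isclose(math.tan(A), opp / adj))       # tangent = 3/4
print(math.isclose(1 / math.cos(A), hyp / adj))   # secant  = 5/4, and similarly csc, cot
```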
## The Unit Circle
Image 4. A right triangle created within the Unit Circle.
Up until now, we have been defining trigonometric functions in terms of right triangles. Right triangles are one of the simplest ways to begin working with trig functions, but a big problem with that definition is that we have no way to define trig functions for angles that are greater than 90 ° (or π/2 radians). If an angle is bigger than 90 °, we can't draw a right triangle with that as one of the non-right angles, and then we have no way to assign it a value for sine, cosine, or tangent.
The Unit Circle gives us a new way to define trig functions - a definition that works for all angles, even those too big to draw right triangles from.
The Unit Circle is a circle with a radius of 1 unit that is centered at the origin. In the context of the Unit Circle, angles are depicted as central angles with one side fixed on the x-axis and the other rotated counter-clockwise around the origin.
We'll start in the first quadrant, where both the x and y coordinates are positive. We can relate our right-triangle based definitions of trig functions from above to the Unit Circle by dropping a line down to the x-axis from any point on the circle. By doing so, we create a right triangle whose hypotenuse is a radius of the Unit Circle. The length of this triangle's horizontal leg is the point's x coordinate, and the length of its vertical leg is the point's y coordinate.
Before, we found an angle's sine and cosine in terms of the sides of a right triangle. Now, we will find the sides of the right triangle in terms of an angle and that angle's sine and cosine. In Image 4, x and y are the sides of the right triangle, r is the radius of the circle (in the Unit Circle, r=1), and θ is the central angle that we're looking at. Then, by our definition of sine and cosine:
$\cos \theta = \frac{adjacent}{hypotenuse} = \frac{x}{r} = \frac{x}{1}$ $x = \cos \theta$ $\sin \theta = \frac{opposite}{hypotenuse} = \frac{y}{r} = \frac{y}{1}$ $y = \sin \theta$
If the radius of our circle is not 1, the same calculations show that $x=r \cos\theta$ and $y=r \sin \theta$:
$\cos \theta = \frac{adjacent}{hypotenuse} = \frac{x}{r}$ $x =r \cos \theta$ $\sin \theta = \frac{opposite}{hypotenuse} = \frac{y}{r}$ $y = r \sin \theta$
Image 5. A radius defining an obtuse angle.
If x and y are related to sine and cosine, is there something in our new picture that's related to tangent? Since tangent was defined as the opposite side divided by the adjacent side,
$\tan \theta= \frac{y}{x} = \frac{r \sin \theta}{r \cos \theta} = \frac{ \sin \theta}{ \cos \theta}$.
We now have a new way of defining sine, cosine, and tangent. Sine and cosine are the x and y coordinates of points on the Unit Circle, and tangent is sine divided by cosine.
Now that we can define our functions in terms of a radius and an angle, we can simply orient the radius so that it makes an obtuse angle with the positive x-axis. Then the x and y coordinates of the point where the radius intersects the Unit Circle will give us the values of sine, cosine, and tangent for that angle.
In Image 5, θ is an angle greater than 90°. We are interested in finding the coordinates (x,y) of the point on the Unit Circle defined by the radius at angle θ. First, we draw a vertical line from our point to the x-axis. Next, we label the sides of our triangle. Because the point is located in the second quadrant, we know that x is negative; however since length can't be negative, the leg of our triangle along the x-axis must have length $|x|$.
Now our picture looks very familiar - in fact, it looks just like Image 4, but with the triangle flipped over the y-axis. This is important.
Let's label the angle inside of the triangle α. This angle is the supplementary angle of θ. Next, we draw another triangle, this one measuring α from the positive x-axis. The updated version of our picture is shown in Image 6. In this new triangle, we know how to find the coordinates of the point on the Unit Circle defined by the radius at angle α. These are the coordinates that correspond to the triangle's side-lengths, so they're just $r \cos \alpha$ and $r \sin \alpha$.
Remember that we wanted to find x and y because they are $r \cos \theta$ and $r \sin \theta$ and we wanted to know what $\cos \theta$ and $\sin \theta$ are when θ is greater than 90°. Since the x and y that we're actually looking for are simply the coordinates we found for the vertex of our second triangle reflected over the y-axis, $x=-r \cos \alpha$ and $y=r \sin \alpha$. This tells us that when θ is in the second quadrant,
$\cos \theta = - \cos \alpha = - \cos (180 - \theta)$
and
$\sin \theta = \sin \alpha = \sin (180 - \theta)$.
Because we used the angle α to help us find out information about the angle θ, we say that α is the reference angle of θ. Whenever we work with an angle greater than 90°, we will need to find a reference angle. To find the reference angle of some obtuse angle θ, we first rotate the radius θ degrees counter-clockwise from the positive x-axis. Then, we measure the smallest angle (going either direction) between this radius and the x-axis. This angle is θ's reference angle.
When our angle is greater than 180° but less than 270°, the radius that defines it is in the third quadrant, and so both sine and cosine are negative, because both the x and y coordinates are negative. When our angle is greater than 270° but less than 360°, it's in the fourth quadrant, and only the sine (the y-coordinate) is negative. In each case, we can find cosine and sine for our angle by finding the sine and cosine of the reference angle, and then applying negative signs depending on what quadrant we're in. This process is illustrated in Image 7.
Image 7. 150°, 210°, and 330° all have a reference angle of 30°. In the first quadrant, sine and cosine are both positive. In the second quadrant, sine is positive but cosine is negative. In the third quadrant, they're both negative. In the fourth quadrant, cosine is positive but sine is negative. In each case, tangent is sine over cosine.
Thus, for angles greater than 90°, we have now defined the trigonometric functions in terms of a reference angle. This introduces an interesting property of trig functions: they are periodic. By starting at the positive x-axis and rotating the radius around the Unit Circle, the values of sine, cosine, and tangent cycle through a specific set of values. Once the radius reaches the positive x-axis again, these values begin to repeat.
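A small Python sketch of the reference-angle recipe just described (the angles below are illustrative choices in quadrants II, III, and IV):

```python
import math

def cos_sin_via_reference_angle(theta_deg):
    """Reduce theta to its reference angle, then attach the quadrant signs."""
    t = theta_deg % 360
    if t <= 90:
        ref, sign_cos, sign_sin = t, +1, +1          # quadrant I
    elif t <= 180:
        ref, sign_cos, sign_sin = 180 - t, -1, +1    # quadrant II
    elif t <= 270:
        ref, sign_cos, sign_sin = t - 180, -1, -1    # quadrant III
    else:
        ref, sign_cos, sign_sin = 360 - t, +1, -1    # quadrant IV
    r = math.radians(ref)
    return sign_cos * math.cos(r), sign_sin * math.sin(r)

for theta in (150, 210, 330):                        # all have reference angle 30 degrees
    c, s = cos_sin_via_reference_angle(theta)
    rad = math.radians(theta)
    print(theta, math.isclose(c, math.cos(rad)), math.isclose(s, math.sin(rad)))
```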
## Inverse Trig Functions
Inverse trig functions can be written in two (nearly equivalent) ways - either by putting a superscripted -1 after the function name or by prefixing the function with "arc". So the inverse of $\sin x$ is $\sin^{-1} x$ or $\arcsin x$. Both are used to an equal degree. $\sin^{-1} x$ uses function notation for the inverse of a function; however, the superscript is easily confused with an exponent, which is also frequently used with trig functions.
The concept of inverse trig functions is fairly straightforward. Just as other functions and operations have inverses (subtraction is the inverse of addition, division is the inverse of multiplication, logarithms are the inverse of exponentiation, etc.), so do trig functions. Since regular trig functions take an angle and return a ratio, their inverses take a ratio (which will be a number between -1 and 1) and return the angle to which that ratio belongs. So, $\sin \left (30^\circ \right ) = \frac{1}{2}$ and $\sin^{-1} \left ( \frac{1}{2} \right )= 30^\circ$.
In this way: $\sin \left (x \right ) = \frac{1}{2} \longrightarrow x=\sin^{-1} \left ( \frac{1}{2} \right )$
Understanding of this section requires knowledge of radians and the unit circle.
Image 8. Note that sin-1 (the vertical graph) does not pass the vertical line test, and is, therefore, not a well defined function. The restricted red section of sin-1 does, however, pass this test and is, therefore, a well defined function.
The input of a trig function is the output of an inverse trig function, and the output of a trig function is the input of an inverse trig function. Trig functions output ratios when given angle measures, while inverse trig functions output angle measures when given ratios. Graphically, the two relate to each other in that the x values of the first are the y values of the second, while the y values of the first are the x values of the second. Drawing the graph of an inverse function can be accomplished simply by reflecting the graph of the original function over the line y = x, thus flipping the x and y values of the coordinates. This is illustrated in Image 8 for sine and sin-1.
As was discussed in The Unit Circle, the values of trig functions repeat every 360°. Hence, there are an infinite number of angles that will yield a specific ratio when put into a trig function. In other words, as is, the inverse trig functions do not pass the vertical line test. There is a way to remedy the problem of inverse trig functions returning an infinite number of angles for every ratio: inverse trig functions are defined with a restricted, or shortened, range from that of their corresponding trig function.
A portion that passes the vertical line test (the red portion in Image 8) is taken from each of the inverse trig functions, and is set to represent the function as a whole. Restricting the range in this way ensures that an inverse trig function will output at most one angle for a given ratio. Note, however, that the angle that an inverse trig function outputs for a given ratio will be the reference angle for that ratio.
| Function | If | Then | Domain (input) of function | Range (output) of function in degrees | Range (output) of function in radians |
|----------|------------|--------------|----------------------------|---------------------------------------|---------------------------------------|
| sin-1 | y = sin(x) | x = sin-1(y) | -1 ≤ x ≤ 1 | -90° ≤ y ≤ 90° | -π/2 ≤ y ≤ π/2 |
| cos-1 | y = cos(x) | x = cos-1(y) | -1 ≤ x ≤ 1 | 0° ≤ y ≤ 180° | 0 ≤ y ≤ π |
| tan-1 | y = tan(x) | x = tan-1(y) | All real numbers | -90° < y < 90° | -π/2 < y < π/2 |
| csc-1 | y = csc(x) | x = csc-1(y) | x ≤ -1 OR 1 ≤ x | -90° ≤ y < 0° OR 0° < y ≤ 90° | -π/2 ≤ y < 0 OR 0 < y ≤ π/2 |
| sec-1 | y = sec(x) | x = sec-1(y) | x ≤ -1 OR 1 ≤ x | 0° ≤ y < 90° OR 90° < y ≤ 180° | 0 ≤ y < π/2 OR π/2 < y ≤ π |
| cot-1 | y = cot(x) | x = cot-1(y) | All real numbers | 0° < y < 180° | 0 < y < π |
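A tiny sketch of the restricted (principal) ranges in the table, using Python's `math` module:

```python
import math

# The inverse functions return only the principal value from the restricted ranges above.
print(round(math.degrees(math.asin(0.5)), 6))   # 30.0  (even though sin(150°) is also 0.5)
print(round(math.degrees(math.acos(0.5)), 6))   # 60.0  (range 0° to 180°)
print(round(math.degrees(math.atan(1.0)), 6))   # 45.0  (range -90° to 90°)
```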
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 56, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.901748776435852, "perplexity_flag": "head"}
|
http://www.maths.lth.se/matstat/research/mathematicalfinance/glossary/?word=Poisson%20process
|
LU / (NF) - (LTH) / MatCent / Groups / Financial Mathematics / Glossary
Financial Mathematics
Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Poisson process
A Poisson process is a process which jumps up by one at random times. The times between jumps are independent and exponentially distributed with mean $1/\lambda$. If we put $\lambda=1$ we get a standard Poisson process. A Poisson process is an important special case of the more general class of Lévy processes. The characteristic function of the Poisson process $N(t)$ is $$\phi_{N(t)}=E[e^{iyN(t)}]=e^{tK(y)},$$ where $K(y)=iy\mu+\lambda(e^{iy}-1)$.
If we put $\mu=-\lambda$ and define $X(t)=N(t)/\sqrt{\lambda}$ then $X$ will converge in distribution to a Brownian motion as $\lambda$ tends to $\infty$.
It is also possible to define a time-inhomogeneous Poisson process. The easiest way of understanding a time-inhomogeneous Poisson process is to view it as a time-change of a standard Poisson process $N$. Let $\lambda(t)$ be a non-negative function defined on the positive real numbers (${\mathbb R}^+$). Define the cumulative intensity function $\Lambda(t)=\int_0^t\lambda(s)ds$. A time-inhomogeneous Poisson process $X$ with intensity function $\lambda(\cdot)$ can now be obtained as $X(t)=N(\Lambda(t))$.
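A simulation sketch of both constructions (the rate, horizon, and intensity function are illustrative choices; `numpy` is assumed). The homogeneous case is realised here through the equivalent time rescaling $N(\lambda t)$ of a standard process, and the inhomogeneous case through the time change $X(t)=N(\Lambda(t))$:

```python
import numpy as np

rng = np.random.default_rng(0)

def standard_poisson_jumps(T):
    """Jump times of a standard (rate 1) Poisson process N on [0, T]:
    inter-arrival times are i.i.d. Exp(1)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0)
        if t > T:
            return np.array(times)
        times.append(t)

# Homogeneous case with rate lam, via N(lam * t): rescale the jump times.
lam, T = 3.0, 10.0
homogeneous_jumps = standard_poisson_jumps(lam * T) / lam
print(len(homogeneous_jumps) / T)          # empirical rate, close to lam

# Time-inhomogeneous case via X(t) = N(Lambda(t)).
# Illustrative intensity lambda(t) = 2t, so Lambda(t) = t**2 and its inverse is sqrt(s).
s = standard_poisson_jumps(T**2)           # jumps of N on [0, Lambda(T)]
inhomogeneous_jumps = np.sqrt(s)           # jump times of X on [0, T]
print(len(inhomogeneous_jumps))            # on average Lambda(T) = T**2 jumps
```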
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8431199193000793, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/247987/prove-that-2k-k3-for-all-k-ge10/247997
|
# Prove that $2^k > k^3$ for all $k\ge10$
I've no clue how to go ahead with this, all I know is it will be solved with induction.
1. Proved it's true for $k=10$
2. Assumed it's true for $k$
3. Need to prove that $2^{k+1} > ({k+1})^3$
Any pointers? I'm struggling with tough Induction questions so if you have any general tips to solve such questions it'll be great.
-
## 3 Answers
If you know $2^k > k^3$ and want to prove $2^{k+1} > ({k+1})^3$, the obvious thing to do is multiply the first by two so that you have $2^{k+1} > 2 k^3$. Now if we could show that $2k^3 \ge (k+1)^3$, we could put these two inequalities together to complete the proof.
By expanding out and collecting like terms we have to prove $k^3 \ge 3k^2 + 3k + 1$ but that's an easy consequence of $k^3 \ge 3k^2 + 3k^2 + k^2$ which holds because $k \ge 10$ (dividing both sides by $k^2$).
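A quick numerical sanity check of the two inequalities used above (a sketch over an arbitrary finite range of $k$; it does not replace the induction argument):

```python
# Check the claim and the key lemma of the induction step for a range of k >= 10.
for k in range(10, 60):
    assert 2**k > k**3                      # the claim itself
    assert 2 * k**3 >= (k + 1)**3           # the lemma: 2k^3 >= (k+1)^3 for k >= 10
print("checked")
```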
-
When you pass from $2^k$ to $2^{k+1}$, the number doubles. If you could show that passing from $k^3$ to $(k+1)^3$ increases the number by a factor less than $2$, you’d be in business. How big is $\frac{(k+1)^3}{k^3}$? If $k\ge 10$, then
$$\frac{(k+1)^3}{k^3}=\left(\frac{k+1}k\right)^3=\left(1+\frac1k\right)^3\le\left(1+\frac1{10}\right)^3=\left(\frac{11}{10}\right)^3=\frac{1331}{1000}<2\;.\tag{1}$$
Thus, if $2^k>k^3$ and $k\ge 10$, we have
$$2^{k+1}=2\cdot 2^k>\frac{(k+1)^3}{k^3}\cdot 2^k>\frac{(k+1)^3}{k^3}\cdot k^3=(k+1)^3\;,$$
exactly as desired.
Writing it out in one go like that may make it look more mysterious than it really is. All I really did was ask myself what happens to the two sides of the inequality $2^k>k^3$ when $k$ is replaced by $k+1$. Clearly the lefthand side is doubled. What happens to the righthand side (in terms of multiplicative increase) isn’t so obvious, but at least we can say that it gets multiplied by $\frac{(k+1)^3}{k^3}$:
$$\begin{array}{c} &2^k&>&k^3\\ \text{multiply by }2&\downarrow&&\downarrow&\text{multiply by }\frac{(k+1)^3}{k^3}\\ &2^{k+1}&\overset{?}>&(k+1)^3 \end{array}$$
Clearly the inequality in the bottom row will be true if $2\ge\frac{(k+1)^3}{k^3}$, so we just have to make sure that this is the case $-$ which is exactly what I did with the calculation $(1)$.
-
You have: $(k+1)^3 \stackrel{k\ge 10}{\lt} 2\cdot k^3 \stackrel{hyp.}{\lt} 2\cdot 2^k$
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9723283648490906, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/58018-directional-derivatives.html
|
# Thread:
1. ## directional derivatives!?
when f(x,y,z)= x^2+y^2-z and r(t)= [t/root2, t/root2, t^2]
find the directional derivative of f along the curve r(t) at the point P where t= root2
i have spent hours trying to work this one out but it is just beyond me! even just to know where to start would be a great help
2. Originally Posted by Ash_underpar
when f(x,y,z)= x^2+y^2-z and r(t)= [t/root2, t/root2, t^2]
find the directional derivative of f along the curve r(t) at the point P where t= root2
i have spent hours trying to work this one out but it is just beyond me! even just to know where to start would be a great help
Recall that the directional derivative is defined as $\nabla f(x_0,y_0,z_0)\cdot\bold u$
To find the point $(x_0,y_0,z_0)$ find the component values of $\bold r(t)$ at $t=\sqrt{2}$ Thus, the point is $(1,1,2)$.
Now, lets find $\nabla f(x,y,z)$
$\nabla f(x,y,z)=\left<\frac{\partial}{\partial x}(x^2+y^2-z),\frac{\partial}{\partial y}(x^2+y^2-z), \frac{\partial}{\partial z}(x^2+y^2-z)\right>$ $=\left<2x,2y,-1\right>$
Thus, $\nabla f(1,1,2)=\left<2,2,-1\right>$
Now, let's find the unit vector $\bold u$.
Since $\bold r(\sqrt{2})=\left<1,1,2\right>$, we need to normalize it.
Its magnitude is $\sqrt{1+1+4}=\sqrt{6}$. As a result, $\bold u=\left<\frac{1}{\sqrt{6}},\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right>=\left<\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{3}\right>$
Now, find $\nabla f(1,1,2)\cdot \bold u$:
$\nabla f(1,1,2)\cdot \bold u=\left<2,2,-1\right>\cdot\left<\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{3}\right>=\frac{\sqrt{6}}{3}+\frac{\sqrt{6}}{3}-\frac{\sqrt{6}}{3}=\color{red}\boxed{\frac{\sqrt{6}}{3}}$
Does this make sense?
--Chris
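A `numpy` cross-check of the computation above; the second direction below corresponds only to the alternative reading of "along the curve" as the tangent direction $\bold r'(t)$, included as a sketch:

```python
import numpy as np

grad_f = lambda x, y, z: np.array([2*x, 2*y, -1.0])              # gradient of f = x^2 + y^2 - z
r      = lambda t: np.array([t/np.sqrt(2), t/np.sqrt(2), t**2])
r_dot  = lambda t: np.array([1/np.sqrt(2), 1/np.sqrt(2), 2*t])   # r'(t)

t0 = np.sqrt(2)
P  = r(t0)                       # the point (1, 1, 2)
g  = grad_f(*P)                  # (2, 2, -1)

# Direction used in the post: the normalised position vector r(sqrt 2).
u_pos = P / np.linalg.norm(P)
print(g @ u_pos, np.sqrt(6)/3)   # both sqrt(6)/3, as boxed above

# Alternative reading ("along the curve" = along the tangent r'(sqrt 2)):
u_tan = r_dot(t0) / np.linalg.norm(r_dot(t0))
print(g @ u_tan)                 # ~0 up to rounding, consistent with f(r(t)) = 0 for all t
```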
3. cheers matey you make it sound so simple but this vector calculus section im doing at the minute is going right over my head!! thanks anyway
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204118847846985, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=579847&page=3
|
Physics Forums
Page 3 of 4 < 1 2 3 4 >
## is there a logical way of understanding how randomness could agree with causality
In probability theory each roulette result is considered independent from any past events, however this assumption does not disagree with the possibility that all future events are predetermined by past events (determinism), like the orbits in an elastic collision simulator. The question is whether all events are determined by past events, or a human has the freedom to choose among more than one choice, like an elastic collision simulator where the future orbits of the spheres are not determined by the past orbits because some balls can choose to go up or down instead of the otherwise predetermined down orbit. The double-slit experiment indicated nothing more than determinism, because indeed it's impossible to predict where each next "electron" or "photon" (dot on the film) will appear, but after many dots appear, the wave interference pattern takes shape on the film. So, quantum seems a little useless to answer the question; it's better to think on it supposing that where the roulette ball landed was determined from the moment it left the hand of the dealer (which is rather false), and then wonder, was that dealer's choice predetermined by the events that took place an hour ago?
Quote by luckis11 In probability theory each roulette result is considered independent from any past events, however this assumption does not disagree with the possibility that all future events are predetermined by past events (determinism), like the orbits in an elastic collision simulator. The question is whether all events are determined by past events, or a human has the freedom to choose among more than one choice, like an elastic collision simulator where the future orbits of the spheres are not determined by the past orbits because some balls can choose to go up or down instead of the otherwise predetermined down orbit. The double-slit experiment indicated nothing more than determinism, because indeed it's impossible to predict where each next "electron" or "photon" (dot on the film) will appear, but after many dots appear, the wave interference pattern takes shape on the film. So, quantum seems a little useless to answer the question; it's better to think on it supposing that where the roulette ball landed was determined from the moment it left the hand of the dealer (which is rather false), and then wonder, was that dealer's choice predetermined by the events that took place an hour ago?
There's a lot of factors that are "probable" and would be relatively predictable, but only because they happen at such a slow rate of time. If you throw a ball, you can predict a relative area where it's likely to land even though there's plenty of air molecules that could fractally distribute energy in random orderings so as to cause it to move slightly one way or another, and it's really not that much force and speed, so there's smaller parameters for where it could go. Or say I launch a rocket. If it uses virtually no energy to lift itself off the ground, you can predict with like 99% certainty it won't even make it off the ground, and so the area it will end up in is where it started. However, if you give it a ton of NO2, there's like a 1 mile radius of where it could possibly end up.
I suppose at this point there just isn't enough evidence to really determine it either way, but so far there is no evidence that there is actually something that determines with 100% certainty where particles move and where everything will ultimately end up, and since there's nothing determining them, things are free to happen in random orders as far as our evidence shows.
You'd also have to consider not just how probability dies down, but also how force and energy distribute through an object. At macroscopic distances, particles don't really appear and disappear much because their wave functions die down at those distances. However, the exchange of energy and force happens on a molecular level, and so how energy and forces distribute is still random with areas of probability.
As far as our consciousness goes, we don't really know if it occupies the quantum realm or the macroscopic realm, or really what it is, so it's hard to say how it affects things.
Quote by jadrian not to be impolite, but i truly view randomness in reality as something you can trick your kids into accepting along with santa, the tooth fairy etc. when compared to causality the idea of true randomness existing in reality seems incredibly weak to me. is there any logical way to reconcile the two?
First, randomness is perfectly compatible with causality.
Second, science is based in data and scientific method, not in personal beliefs/ruminations. Yes that ancient Pope was incredibly sure that Earth did not turn around Sun but...
Quote by juanrga First, randomness is perfectly compatible with causality. Second, science is based in data and scientific method, not in personal beliefs/ruminations. Yes that ancient Pope was incredibly sure that Earth did not turn around Sun but...
what are you doing here. random is defined as something of which we cannot see its cause. so true randomness would have no cause. do you think things through?
Quote by juanrga First, randomness is perfectly compatible with causality. Second, science is based in data and scientific method, not in personal beliefs/ruminations. Yes that ancient Pope was incredibly sure that Earth did not turn around Sun but...
are you still holding fast to the free will axiom haha of the copenhagen interpretation?
do you think the chemistry in your body has magic involved, as opposed to a burning flame?
heres some info that might cause indigestion
Originally Posted by kith
Just out of curiosity: how do you decide what to do in a given situation? ;-)
my response - everything that occurs in my body is a chemical reaction. all the chemical reactions are mediated/controlled via enzymes which are produced in quantities resulting in positive and negative feedback chemical reactions which ultimately react with dna as the homeostatic instruction manual.
my brain has developed partly through instinctual developments from my dna ie arachnophobia, and partly as a response to my environment, always ultimately controlled by dna which grows our brain into a tool to cope with a complex environment, always looking out for its survival, and eventual reproduction, not because the genes goal is reproduction, but because our genes are replications of genes that had a proclivity to reproduce. do you know why jealousy is one of the strongest and most violence producing emotions? its because our dna has strongly embedded in our brain's development a defense against somebody else impregnating your reproductive partner with other than your genes, resulting in your genetic death if you do not reproduce because of foreign adultery.
my choices are the end result of a causal continuum of millions of neural interactions, ultimately leading me to make the best decision in the interest of my genes. why does a male praying mantis let itself get eaten by the female after mating? because the added nutrition to the female will result in a more favorable genetic outcome (more eggs with its genes inside) than running away.
we are exercising our brains on a website because of complex psychological reasons that ultimately benefit our many aspects that could be considered in the genes interest.
why am i writing this post? because my self sustaining chemical reaction has effectively directed me to do it for reasons you can ask an evolutionary minded psychologist.
the chemical reactions that occur in my body and brain are fundamentally indistinguishable from a burning flame or pouring acid into a buffer solution.
so to think that there is somebody behind the wheel in my brain calling the shots is an infantile notion. i have no more choice than any other chemical reaction that we would regard as nonliving.
let me ask you a question. Do you think you are alive?
Quote by jadrian what are you doing here. random is defined as something of which we cannot see its cause. so true randomness would have no cause. do you think things through?
Random is not that. You confound determinism with causality.
Quote by juanrga Random is not that. You confound determinism with causality.
Ok, well "spontaneousity" is in an event which we can't see a cause, but randomness is that we can't see a clear pattern to predict future information off of, and if there's no way to predict future information, how could everything be determined? And if you say "well that's just because we don't know what's determining everything", then maybe this thread should be moved to the speculation section.
Quote by questionpost Ok, well "spontaneousity" is in an event which we can't see a cause
I do not know what you mean by "spontaneousity", but the standard term spontaneous to refer to certain kind of processes (spontaneous processes) is causal. The cause of spontaneous evolution A→B is traced to the instability of the initial state A, which can be quantified.
Quote by juanrga I do not know what you mean by "spontaneousity", but the standard term spontaneous to refer to certain kind of processes (spontaneous processes) is causal. The cause of spontaneous evolution A→B is traced to the instability of the initial state A, which can be quantified.
Except if it's spontaneous how did "A" get there in the first place? We see atoms in random statistical locations with no apparent cause for them being in the specific location that we measure them in.
Quote by questionpost Except if it's spontaneous how did "A" get there in the first place? We see atoms in random statistical locations with no apparent cause for them being in the specific location that we measure them in.
"Spontaneous" refer to the process, not to the initial state. In any case, the initial unstable state is also obtained in agreement with causality. That is the reason which scientists are able to prepare systems in unstable states in their labs.
It seems that you also confound randomness with causality: A→B is deterministic and causal; A→{B1,B2,B3,...} is not deterministic but causal.
Quote by jadrian when compared to causality the idea of true randomness existing in reality seems incredibly weak to me. is there any logical way to reconcile the two?
First, please follow the conventions of written English. Capitalize where necessary (eg., wrt the first letter of the first word in a sentence).
To reply to your question, yes, the experience of randomness and the assumption of determinism are reconcilable/compatible.
Quote by juanrga "Spontaneous" refer to the process, not to the initial state. In any case, the initial unstable state is also obtained in agreement with causality. That is the reason which scientists are able to prepare systems in unstable states in their labs. It seems that you also confound randomness with causality: A→B is deterministic and causal; A→{B1,B2,B3,...} is not deterministic but causal.
Is "if you flip a coin there's a 50% chance of heads or tails" causal? In either case, there's still not evidence for something actually "causing" us to measure particles in the specific locations we measure them in.
Quote by ThomasT First, please follow the conventions of written English. Capitalize where necessary (eg., wrt the first letter of the first word in a sentence).
This is not an English forum which you'd know if you knew how to read well. As long as you understand what's being said it doesn't matter.
Quote by questionpost Is "if you flip a coin there's a 50% chance of heads or tails" causal? In either case, there's still not evidence for something actually "causing" us to measure particles in the specific locations we measure them in.
Maybe you would read your own phrase: "if you flip".
I do not know what you mean by "evidence", but the available theories of localization are causal (although non-deterministic).
I thought that I would weigh in on the OP question. This question has been under debate from the inception of QM. As with the twin "paradox" in relativity, the first step in resolving it (other than dismissing QM) is to carefully parse the question under the definitions of the new theory. What does one mean by "causality" or "determinism"? Here are some formal operational definitions based on the understanding in QM that we do not speak about values we do not observe.

1.) Determinism of effect: Given a well defined quantum system and a known intermediate dynamic, can we assure a given future measurement of a specific value by controlling the initial conditions? In QM the answer is yes.

2.) Determinism of cause (dual to the above): Given a well defined quantum system and a known intermediate dynamic, can we be assured of a specific value of a given past measurement by a future observation? In QM the answer is yes. These two seem to say the same thing, but not quite.

3.) Complete determinism, i.e. classical determinism: Given a well defined system and a known intermediate dynamic, can we know the outcome of every possible future measurement by controlling the initial conditions? Or equivalently, can we know the values of every possible past measurement by future observations? In orthodox QM this is not possible since it violates complementarity.

The equivalence here, and its lack in the first two, shows how complementarity invalidates the classical notion of a system state. Even asking the question of whether the universe is a clockwork is invalidated in QM. It isn't that the answer is "no" (or "yes") but that the question is invalid. It is like asking "which twin is older" in SR while negating the relativity of time and simultaneity. In QM one has relativity of state, or relativity of "reality", in that one can only parse classical questions when working in a particular frame of commuting observables. In SR you can transform between inertial frames mixing time and space, showing how different observers answer the question of "which twin is older at a given t value on my time coordinate". In QM the transformation between "reality frames" mixes certainty with spontaneity, i.e. it mixes information with noise.

The QM transformation rules don't tell us how what one set of measurements yields transforms to what another set of measurements yields, but rather how the expectation values of one set of measurements transform to the expectation values of another set of measurements. These expectation values include such things as variance, which expresses degrees of uncertainty in the measurement (e.g. $E(x^2) \ne (E(x))^2$). One may feel less than satisfied with the loss of certainty, i.e. Einstein's worry of incompleteness; however QM is complete in a different way: it is a theory formulated in a more complete context (probabilistic descriptions which allow for P=1 certain subcases).

In summary, QM is deterministic (1 and 2 above) in that dynamic evolution maps the three entities {system, observable, measured value} in a 1 to 1 way between past and future cases. It indeed maps all such triples to correspondents. But it also conserves the logic of complementarity and the uncertainty principle when we consider what measurements we made/are making/will make and what expectation values were/are/will be associated with them.
In the above mapping only one such triple (for complete observables or one set of compatible triples) is valid in a given instance of the system.
Quote by juanrga Maybe you would read your own phrase: "if you flip". I do not know what you mean by "evidence", but the available theories of localization are causal (although non-deterministic).
Finding the location of a particle or seeing perfectly how energy will distribute is like flipping a coin. Also, at this point, there is no evidence that we can see "causes" everything to act the way it does, because that would require us to find out what "causes" particles to appear in the exact locations that they do, and there isn't really a predicted lower-level of matter that can create quarks and electrons. A particle appearing in a location or even throwing a ball can cause something, but there isn't a specific causation pattern which you can always depend upon in which you even know of a specific probability of outcomes. So it's chance that A->{B, C, D...}
Quote by questionpost Finding the location of a particle or seeing perfectly how energy will distribute is like flipping a coin. Also, at this point, there is no evidence that we can see "causes" everything to act the way it does, because that would require us to find out what "causes" particles to appear in the exact locations that they do, and there isn't really a predicted lower-level of matter that can create quarks and electrons. A particle appearing in a location or even throwing a ball can cause something, but there isn't a specific causation pattern which you can always depend upon in which you even know of a specific probability of outcomes. So it's chance that A->{B, C, D...}
in order for randomness to be true in a sense you might have to regard the electron as moving not at c, but at INFINITE speed, to reach the conclusion that it is undefined. also, the assumption of randomness in qm leading to what we consider very precise measurements compared to everyday measurement, but which might be grossly imprecise compared with absolute prediction, does not mean we should be forced to accept randomness
Quote by jadrian not to be impolite, but i truly view randomness in reality as something you can trick your kids into accepting along with santa, the tooth fairy etc. when compared to causality the idea of true randomness existing in reality seems incredibly weak to me. is there any logical way to reconcile the two?
Not that I know of. So, I agree with you. That is, given the extant physical evidence, the assumption of a fundamental determinism seems to me to be more reasonable than the assumption of a fundamental indeterminism or randomness.
I think you can sleep well tonight with the assumption that the world isn't suddenly going to do anything ... really weird.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9597926735877991, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/162179/conditional-expectation-including-a-measurable-random-variable/162374
|
# Conditional Expectation including a measurable random variable
Consider a probability space $(\Omega,\mathcal{F},P)$, a random variable $X$ on that space, a $\sigma$-algebra $\mathcal{G}$ and a $\mathcal{G}$-measurable random variable $A$. For some function $f: \mathbb{R}^2 \to \mathbb{R}$ consider the conditional expectation:
$C(\omega):= E[f(A,X)|\mathcal{G}](\omega)$
I am interested in the question, whether $C$ can be expressed using the function:
$g(a,\omega):= E[f(a,X)|\mathcal{G}](\omega)$
Such that: $C(\omega) = g(A(\omega),\omega)$ ?
Can you give me some hints on where to look and what to read, in order to find out if it's possible and even derive some properties of $g$?
Thanks
-
## 1 Answer
A useful tool in matters of conditional expectations is sometimes called the collectivist approach and may be summarized as follows:
To show that some specific object $O$ has a given property, study the collection $\mathcal C$ of objects with said property. Then the fact that $O$ is in $\mathcal C$ often becomes obvious, for example because $\mathcal C$ contains all the objects with some given feature shared by $O$.
Here, one is given a random variable $X:\Omega\to\mathbb X$, a sub-sigma-algebra $\mathcal G$ on $\Omega$, a random variable $Y:\Omega\to\mathbb Y$, measurable with respect to $\mathcal G$, and a bounded measurable function $f:\mathbb Y\times\mathbb X\to\mathbb R$. One defines a function $G_f:\mathbb Y\times\Omega\to\mathbb R$ by $G_f(y,\omega)=E(f(y,X)\mid\mathcal G)(\omega)$ and a random variable $Z_f:\Omega\to\mathbb R$ by $Z_f(\omega)=G_f(Y(\omega),\omega)$. One wants to show that $E(f(Y,X)\mid\mathcal G)=Z_f$.
Consider the collection $\mathcal C$ of bounded measurable functions $u:\mathbb Y\times\mathbb X\to\mathbb R$ such that $E(u(Y,X)\mid\mathcal G)=Z_u$. The goal is to show that $f$ is in $\mathcal C$.
Assume first that $u=\mathbf 1_{F\times E}$ for some measurable subsets $F$ and $E$ of $\mathbb Y$ and $\mathbb X$ respectively.
• If $y\in F$, $u(y,\cdot)=\mathbf 1_E$ hence $G_u(y,\omega)=P(X\in E\mid\mathcal G)(\omega)$ for every $\omega$.
• If $y\notin F$, $u(y,\cdot)=0$ hence $G_u(y,\omega)=0$ for every $\omega$.
Thus, $G_u(y,\omega)=P(X\in E\mid\mathcal G)(\omega)\cdot\mathbf 1_F(y)$ for every $\omega$, that is, $Z_u=P(X\in E\mid\mathcal G)\cdot\mathbf 1_F(Y)$. On the other hand, $u(Y,X)=\mathbf 1_F(Y)\cdot\mathbf 1_E(X)$ and $\mathbf 1_F(Y)$ is $\mathcal G$-measurable hence $$E(u(Y,X)\mid\mathcal G)=\mathbf 1_F(Y)\cdot E(\mathbf 1_E(X)\mid\mathcal G)=\mathbf 1_F(Y)\cdot P(X\in E\mid\mathcal G).$$ One sees that $Z_u=E(u(Y,X)\mid\mathcal G)$. Thus, every $u=\mathbf 1_{F\times E}$ is in $\mathcal C$.
The next step is to consider step functions $u=\sum\limits_{n=1}^Na_n\mathbf 1_{F_n\times E_n}$ for some $N\geqslant1$, measurable subsets $F_n$ and $E_n$ of $\mathbb Y$ and $\mathbb X$ respectively, and numbers $a_n$. A simple argument shows that every such function $u$ is in $\mathcal C$ (linearity?).
The last step is to note that any bounded measurable function $u:\mathbb Y\times\mathbb X\to\mathbb R$ is a limit of step functions as above, and that another standard argument shows that every such function $u$ is in $\mathcal C$ (dominated convergence?).
This finishes the proof that $f$ is in $\mathcal C$.
Finally, note that it is often the case, as here, that the first step (the functions $u=\mathbf 1_{F\times E}$) requires a relative amount of care but that the successive subsequent extensions are routine.
Edit: All this is quite classical. A congenial reference is the so-called little blue book Probability with martingales by David Williams.
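For a concrete feel for the statement, here is a small finite-probability-space sanity check of the identity $E(f(Y,X)\mid\mathcal G)=Z_f$. The sample space, the partition generating $\mathcal G$ and the functions $f$, $X$, $Y$ below are made-up toy data, not part of the answer itself; the sketch only verifies the identity numerically on that toy example.

```python
# Finite check of E[f(Y,X)|G](w) = g(Y(w), w), where g(y, w) = E[f(y,X)|G](w).
# Sample space, partition generating G, and the functions f, X, Y are toy data.
from itertools import product

omega = list(product(range(2), range(3)))        # sample points (i, j)
P = {w: 1.0 / len(omega) for w in omega}         # uniform probability

def Y(w): return float(w[0])                     # G-measurable (depends on i only)
def X(w): return float(w[1])                     # a second random variable
def f(y, x): return (y + x) ** 2                 # bounded on this finite space

def cond_exp(h, w):
    """E[h | G](w): average of h over the atom {w' : w'[0] == w[0]} of G."""
    atom = [v for v in omega if v[0] == w[0]]
    mass = sum(P[v] for v in atom)
    return sum(h(v) * P[v] for v in atom) / mass

for w in omega:
    lhs = cond_exp(lambda v: f(Y(v), X(v)), w)        # E[f(Y,X) | G](w)
    rhs = cond_exp(lambda v, y=Y(w): f(y, X(v)), w)   # g(Y(w), w)
    assert abs(lhs - rhs) < 1e-12
print("E[f(Y,X)|G] and g(Y(.),.) agree at every sample point")
```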
-
WOW. Thank you. Is this something a textbook on probability theory would contain? Do you know of one? As I need this for a (non-mathematical) paper, could you provide your name, so that I can cite you? Or should I just use "did"? – user4514 Jun 24 '12 at 12:58
Thanks. See Edit. – Did Jun 24 '12 at 13:47
Where can I find more about "any measurable function is a limit of step functions"? It is clear for arbitrary sets, but the step functions you introduced use product sets. Is this still straightforward? – user4514 Jun 24 '12 at 14:18
Yes, because the family of product sets generates the product sigma-algebra (hence one should probably add a step in the proof, to deal with indicator functions of general elements of the product sigma-algebra). – Did Jun 24 '12 at 14:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303339719772339, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/92989/matrix-conditions-under-which-spectral-radius-is-smaller-than-1
|
## Matrix conditions under which spectral radius is smaller than 1?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hello everyone,
I would like to find out which conditions are necessary so that the spectral radius $\rho(M)<1$ where $M$ represents the following matrix:
$M = \left( \begin{array}{ccc} W & 0 & V \\ \hat{V} O & \hat{W} & 0 \\ \hat{O} \hat{V} O & \hat{O}\hat{W} & 0 \end{array} \right)$
Additionally, the following information is given:
• $W$ and $\hat{W}$: ($n\times n$)
• $V$ and $\hat{V}$: ($n\times 1$)
• $O$ and $\hat{O}$: ($1\times n$)
• Neither $W$ nor $\hat{W}$ is symmetric.
• $\rho(W)<1$ and $\rho(\hat{W})<1$
I already tried to solve this by calculating $det(M-\lambda I)=0$ and deriving a condition on one or more of the above matrices.
$M$ can be written simplified as: $M = \left( \begin{array}{ccc} A & 0 & D \\ B & C & 0 \\ E & F & 0 \end{array} \right)$
When $K = \left( \begin{array}{ccc} A & 0 \\ B & C \ \end{array} \right)$, $L = \left( \begin{array}{ccc} D \\ 0 \ \end{array} \right)$ and $P = \left( \begin{array}{ccc} E & F \end{array} \right)$ then $M = \left( \begin{array}{ccc} K & L \\ P & 0 \ \end{array} \right)$.
If $K-\lambda I$ is invertible then $det(M-\lambda I) = det(K-\lambda I)det(-\lambda I-P(K-\lambda I)^{-1}L)$
$det(K-\lambda I) = det(A-\lambda I)det(C-\lambda I)$
$(K-\lambda I)^{-1} = \left( \begin{array}{ccc} (A-\lambda I)^{-1} & 0 \\ -(C-\lambda I)^{-1}B(A-\lambda I)^{-1} & (C-\lambda I)^{-1} \ \end{array} \right)$
$\implies det(M-\lambda I) = det(A-\lambda I)det(C-\lambda I)det(-\lambda I-(E-F(C-\lambda I)^{-1}B)(A-\lambda I)^{-1}D)$
$det(-\lambda I-...D) = det(-\lambda I-\hat{O}(I-\hat{W}(\hat{W}-\lambda I)^{-1})\hat{V}O(W-\lambda I)^{-1}V)$
$\implies$ For $\lambda = 0$, $det(-\lambda I-...D) = 0$
We know that the eigenvalues of $K$ are $< 1$ (because the eigenvalues of $A$ and $C$ are $<1$). The last expression of my calculation suggests that expanding $K$ to $M$ adds one eigenvalue equal to 0. However, after doing some numerical calculations it turns out that this is false. If I calculate $eig(K)$ and afterwards $eig(M)$, one eigenvalue of 0 is added, but all the other eigenvalues change as well (they increase). This makes it impossible for me to derive a condition so that $\rho(M)<1$.
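For reference, a minimal numpy sketch that reproduces this kind of experiment; the block entries below are arbitrary random test data, rescaled so that $\rho(W)<1$ and $\rho(\hat{W})<1$:

```python
# Build M from random blocks with rho(W) < 1 and rho(What) < 1, then compare
# the spectral radii of K and M. All block values are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
n = 4

def contraction(n):
    """Random n x n block rescaled so its spectral radius is below 1."""
    A = rng.standard_normal((n, n))
    return 0.9 * A / max(abs(np.linalg.eigvals(A)))

W, What = contraction(n), contraction(n)
V, Vhat = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
O, Ohat = rng.standard_normal((1, n)), rng.standard_normal((1, n))

K = np.block([[W, np.zeros((n, n))], [Vhat @ O, What]])
M = np.block([
    [W,               np.zeros((n, n)), V               ],
    [Vhat @ O,        What,             np.zeros((n, 1))],
    [Ohat @ Vhat @ O, Ohat @ What,      np.zeros((1, 1))],
])

print("rho(K) =", max(abs(np.linalg.eigvals(K))))
print("rho(M) =", max(abs(np.linalg.eigvals(M))))
```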
What is wrong in the above derivation? Am I assuming something which is not valid? Any help would be appreciated.
Best Regards, Tim
-
1
Could you please rewrite the first "version" of your problem using a different letter for each matrix (and not just hats and superscripts that mean nothing to us to differentiate them) and stating clearly what are their sizes (all of them)? – Federico Poloni Apr 3 2012 at 12:15
As requested, I simplified my matrix notations and state all their sizes. – Tim Waegeman Apr 3 2012 at 12:47
I'm afraid calculations of the kind you tried lead into a dark forest. Maybe some norm-based argument? – Felix Goldberg Apr 3 2012 at 15:41
Thanks for the tip. However, I'm unable to derive any condition by using a norm representation of the spectral radius. The limit(norm(M^k),k=infinity) does not result in a nice expression. – Tim Waegeman Apr 5 2012 at 9:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177222847938538, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/128924-infinite-dimensional-vector-space.html
|
# Thread:
1. ## Infinite Dimensional Vector Space.
There is a 3 part question that I have been working on.
The first 2 parts are show that for a finite dimensional vector space:
$S \circ T$ is invertible if and only if $S$ and $T$ are invertible.
$S \circ T = I$ if and only if $T \circ S = I$
Theese two I have successfully proven. It is the last part I am having trouble with which is:
Give an example for each that shows that the statements are false for infinite dimensional vector spaces.
Any help would be appreciated, Thanks!
2. Hint : consider the vector space $V$ of sequences of real numbers with the map $S: (a_1,\dots,a_n,\dots) \mapsto (0,a_1,\dots,a_n,\dots)$. Show that $S$ has a left inverse which is not a right inverse.
3. Thanks for the hint!
Here is what I got
Let $S: (a_1,\dots,a_n,\dots) \mapsto (0,a_1,\dots,a_n,\dots)$ as in the hint,
and $T: (b_1,...,b_n,...) \mapsto (b_2,...,b_{n+1},...)$
$S \circ T$ will give us the original sequence however
$T \circ S$ won't. This proves the second
"$S \circ T = I$ if and only if $T \circ S = I$" does not hold for infinite dimensional vector spaces.
However I'm still not sure how to prove the first part "$S \circ T$ is invertible if and only if $S$ and $T$ are invertible" does not hold for infinite dimensional vector spaces.
4. Well you have to be careful; $T\circ S$ is the identity map, not $S\circ T$. You are right that it proves the second part of the problem.
For the first part, notice that $T\circ S$ is invertible (it's the identity!) but that $T$ is not. So the part of the statement which says "If $T\circ S$ is invertible, then both $T$ and $S$ are invertible" is false. The part which says "If both $T$ and $S$ are invertible, then $T\circ S$ is invertible" is always true; the inverse of $T\circ S$ is then given by $S^{-1}\circ T^{-1}$.
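To make the hint completely concrete, here is a small sketch with sequences modelled as functions from indices $1,2,3,\dots$ to reals (the particular example sequence is arbitrary), showing that $T\circ S$ is the identity while $S\circ T$ is not:

```python
# Right shift S and left shift T on sequences modelled as index -> value functions.

def S(a):
    """Right shift: (a1, a2, ...) -> (0, a1, a2, ...)."""
    return lambda n: 0.0 if n == 1 else a(n - 1)

def T(b):
    """Left shift: (b1, b2, ...) -> (b2, b3, ...)."""
    return lambda n: b(n + 1)

a = lambda n: 1.0 / n            # the sequence 1, 1/2, 1/3, ...

TS = T(S(a))                     # (T o S)(a)
ST = S(T(a))                     # (S o T)(a)

print([TS(n) for n in range(1, 5)])   # [1.0, 0.5, 0.333..., 0.25]  = a
print([ST(n) for n in range(1, 5)])   # [0.0, 0.5, 0.333..., 0.25] != a
```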
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378983974456787, "perplexity_flag": "head"}
|
http://pediaview.com/openpedia/Homology_groups
|
Homology groups
In mathematics (especially algebraic topology and abstract algebra), homology (in part from Greek ὁμός homos "identical") is a certain general procedure to associate a sequence of abelian groups or modules with a given mathematical object such as a topological space or a group. See homology theory for more background, or singular homology for a concrete version for topological spaces, or group cohomology for a concrete version for groups.
For a topological space, the homology groups are generally much easier to compute than the homotopy groups, and consequently one usually will have an easier time working with homology to aid in the classification of spaces.
The original motivation for defining homology groups is the observation that shapes are distinguished by their holes. But because a hole is "not there", it is not immediately obvious how to define a hole, or how to distinguish between different kinds of holes. Homology is a rigorous mathematical method for defining and categorizing holes in a shape. As it turns out, subtle kinds of holes exist that homology cannot "see" — in which case homotopy groups may be what is needed.
Construction of homology groups
The construction begins with an object such as a topological space X, on which one first defines a chain complex C(X) encoding information about X. A chain complex is a sequence of abelian groups or modules $C_0, C_1, C_2, \dots$ connected by homomorphisms $\partial_n \colon C_n \to C_{n-1},$ which are called boundary operators. That is,
$\dotsb\overset{\partial_{n+1}}{\longrightarrow\,}C_n \overset{\partial_n}{\longrightarrow\,}C_{n-1} \overset{\partial_{n-1}}{\longrightarrow\,} \dotsb \overset{\partial_2}{\longrightarrow\,} C_1 \overset{\partial_1}{\longrightarrow\,} C_0\overset{\partial_0}{\longrightarrow\,} 0$
where 0 denotes the trivial group and $C_i\equiv0$ for i < 0. It is also required that the composition of any two consecutive boundary operators be trivial. That is, for all n,
$\partial_n \circ \partial_{n+1} = 0_{n+1,n-1}, \,$
i.e., the constant map sending every element of $C_{n+1}$ to the group identity in $C_{n-1}$. This means $\mathrm{im}(\partial_{n+1})\subseteq\ker(\partial_n)$.
Now since each $C_n$ is abelian all its subgroups are normal, and because $\mathrm{im}(\partial_{n+1})$ and $\ker(\partial_n)$ are both subgroups of $C_n$, $\mathrm{im}(\partial_{n+1})$ is a normal subgroup of $\ker(\partial_n)$ and one can consider the factor group
$H_n(X) := \ker(\partial_n) / \mathrm{im}(\partial_{n+1}), \,$
called the n-th homology group of X.
We also use the notation $\ker(\partial_n)=Z_n(X)$ and $\mathrm{im}(\partial_{n+1})=B_n(X)$, so
$H_n(X)=Z_n(X)/B_n(X). \,$
Computing these two groups is usually rather difficult since they are very large groups. On the other hand, we do have tools which make the task easier.
The simplicial homology groups $H_n(X)$ of a simplicial complex X are defined using the simplicial chain complex C(X), with $C(X)_n$ the free abelian group generated by the n-simplices of X. The singular homology groups $H_n(X)$ are defined for any topological space X, and agree with the simplicial homology groups for a simplicial complex.
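As a toy illustration of this construction (not part of the original article), the following sketch computes the Betti numbers, i.e. the ranks of the homology groups over $\mathbb{Q}$, of the boundary of a triangle, which is a simplicial model of the circle:

```python
# Betti numbers of the boundary of a triangle (a simplicial circle).
# Vertices v0, v1, v2; oriented edges [v0,v1], [v1,v2], [v0,v2].
import numpy as np

# Boundary operator d1 : C_1 -> C_0, one column per oriented edge.
d1 = np.array([
    [-1,  0, -1],   # coefficient of v0 in d[v0,v1], d[v1,v2], d[v0,v2]
    [ 1, -1,  0],   # coefficient of v1
    [ 0,  1,  1],   # coefficient of v2
])
rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0                                # there are no 2-simplices, so d2 = 0

b0 = d1.shape[0] - rank_d1                 # dim ker(d0) - rank(d1), with d0 = 0
b1 = (d1.shape[1] - rank_d1) - rank_d2     # dim ker(d1) - rank(d2)
print(b0, b1)                              # 1 1, i.e. H_0 = Z and H_1 = Z for a circle
```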
A chain complex is said to be exact if the image of the (n + 1)-th map is always equal to the kernel of the nth map. The homology groups of X therefore measure "how far" the chain complex associated to X is from being exact.
Cohomology groups are formally similar: one starts with a cochain complex, which is the same as a chain complex but whose arrows, now denoted $d^n$, point in the direction of increasing n rather than decreasing n; then the groups $\ker(d^n) = Z^n(X)$ and $\mathrm{im}(d^{n-1}) = B^n(X)$ follow from the same description and
$H^n(X) = Z^n(X)/B^n(X), \,$
as before.
Sometimes, reduced homology groups of a chain complex C(X) are defined as homologies of the augmented complex
$\dotsb\overset{\partial_{n+1}}{\longrightarrow\,}C_n \overset{\partial_n}{\longrightarrow\,}C_{n-1} \overset{\partial_{n-1}}{\longrightarrow\,} \dotsb \overset{\partial_2}{\longrightarrow\,} C_1 \overset{\partial_1}{\longrightarrow\,} C_0\overset{\epsilon}{\longrightarrow\,} \mathbb{Z} {\longrightarrow\,} 0$
where
$\epsilon(\sum_i n_i \sigma_i)=\sum_i n_i$
for a combination $\sum_i n_i \sigma_i$ of points $\sigma_i$ (fixed generators of $C_0$). The reduced homologies $\tilde{H}_i(X)$ coincide with $H_i(X)$ for $i\neq0$.
Examples
The motivating example comes from algebraic topology: the simplicial homology of a simplicial complex X. Here $A_n$ is the free abelian group or module whose generators are the n-dimensional oriented simplices of X. The mappings are called the boundary mappings and send the simplex with vertices
$(a[0], a[1], \dots, a[n]) \,$
to the sum
$\sum_{i=0}^n (-1)^i(a[0], \dots, a[i-1], a[i+1], \dots, a[n])$
(which is considered 0 if n = 0).
If we take the modules to be over a field, then the dimension of the n-th homology of X turns out to be the number of "holes" in X at dimension n.
Using this example as a model, one can define a singular homology for any topological space X. We define a chain complex for X by taking $A_n$ to be the free abelian group (or free module) whose generators are all continuous maps from n-dimensional simplices into X. The homomorphisms $\partial_n$ arise from the boundary maps of simplices.
In abstract algebra, one uses homology to define derived functors, for example the Tor functors. Here one starts with some covariant additive functor F and some module X. The chain complex for X is defined as follows: first find a free module $F_1$ and a surjective homomorphism $p_1: F_1 \to X$. Then one finds a free module $F_2$ and a surjective homomorphism $p_2: F_2 \to \ker(p_1)$. Continuing in this fashion, a sequence of free modules $F_n$ and homomorphisms $p_n$ can be defined. By applying the functor F to this sequence, one obtains a chain complex; the homology $H_n$ of this complex depends only on F and X and is, by definition, the n-th derived functor of F, applied to X.
Homology functors
Chain complexes form a category: A morphism from the chain complex $(d_n\colon A_n \to A_{n-1})$ to the chain complex $(e_n\colon B_n \to B_{n-1})$ is a sequence of homomorphisms $f_n\colon A_n \to B_n$ such that $f_{n-1} \circ d_n = e_{n} \circ f_n$ for all n. The n-th homology $H_n$ can be viewed as a covariant functor from the category of chain complexes to the category of abelian groups (or modules).
If the chain complex depends on the object X in a covariant manner (meaning that any morphism X → Y induces a morphism from the chain complex of X to the chain complex of Y), then the $H_n$ are covariant functors from the category that X belongs to into the category of abelian groups (or modules).
The only difference between homology and cohomology is that in cohomology the chain complexes depend in a contravariant manner on X, and that therefore the homology groups (which are called cohomology groups in this context and denoted by $H^n$) form contravariant functors from the category that X belongs to into the category of abelian groups or modules.
Properties
If $(d_n\colon A_n \to A_{n-1})$ is a chain complex such that all but finitely many $A_n$ are zero, and the others are finitely generated abelian groups (or finite dimensional vector spaces), then we can define the Euler characteristic
$\chi = \sum (-1)^n \, \mathrm{rank}(A_n)$
(using the rank in the case of abelian groups and the Hamel dimension in the case of vector spaces). It turns out that the Euler characteristic can also be computed on the level of homology:
$\chi = \sum (-1)^n \, \mathrm{rank}(H_n)$
and, especially in algebraic topology, this provides two ways to compute the important invariant χ for the object X which gave rise to the chain complex.
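As a quick worked example (not from the article itself): for the simplicial circle given by the boundary of a triangle, with three vertices and three edges, both formulas give the same value,
$$\chi = \mathrm{rank}(C_0) - \mathrm{rank}(C_1) = 3 - 3 = 0 = 1 - 1 = \mathrm{rank}(H_0) - \mathrm{rank}(H_1),$$
since $H_0\cong\mathbb{Z}$ and $H_1\cong\mathbb{Z}$ for the circle.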
Every short exact sequence
$0 \rightarrow A \rightarrow B \rightarrow C \rightarrow 0$
of chain complexes gives rise to a long exact sequence of homology groups
$\cdots \rightarrow H_n(A) \rightarrow H_n(B) \rightarrow H_n(C) \rightarrow H_{n-1}(A) \rightarrow H_{n-1}(B) \rightarrow H_{n-1}(C) \rightarrow H_{n-2}(A) \rightarrow \cdots. \,$
All maps in this long exact sequence are induced by the maps between the chain complexes, except for the maps $H_n(C) \to H_{n-1}(A)$. The latter are called connecting homomorphisms and are provided by the snake lemma. The snake lemma can be applied to homology in numerous ways that aid in calculating homology groups, such as the theories of relative homology and Mayer-Vietoris sequences.
History
Homology classes were first defined rigorously by Henri Poincaré in his seminal paper "Analysis situs", J. Ecole polytech. (2) 1. 1–121 (1895).
The homology group was further developed by Emmy Noether[1][2] and, independently, by Leopold Vietoris and Walther Mayer, in the period 1925–28.[3] Prior to this, topological classes in combinatorial topology were not formally considered as abelian groups. The spread of homology groups marked the change of terminology and viewpoint from "combinatorial topology" to "algebraic topology".[4]
Notes
1. For example L'émergence de la notion de groupe d'homologie, Nicolas Basbois (PDF), in French, note 41, explicitly names Noether as inventing the homology group.
2. Hirzebruch, Friedrich, Emmy Noether and Topology in Teicher 1999, pp. 61–63.
References
• Cartan, Henri Paul and Eilenberg, Samuel (1956) Homological Algebra Princeton University Press, Princeton, NJ, OCLC 529171
• Eilenberg, Samuel and Moore, J. C. (1965) Foundations of relative homological algebra (Memoirs of the American Mathematical Society number 55) American Mathematical Society, Providence, R.I., OCLC 1361982
• Hatcher, A., (2002) Algebraic Topology Cambridge University Press, ISBN 0-521-79540-0. Detailed discussion of homology theories for simplicial complexes and manifolds, singular homology, etc.
• Homology group at Encyclopaedia of Mathematics
• Hilton, Peter (1988), "A Brief, Subjective History of Homology and Homotopy Theory in This Century", Mathematics Magazine (Mathematical Association of America) 60 (5): 282–291, JSTOR 2689545
• Teicher, M. (ed.) (1999), The Heritage of Emmy Noether, Israel Mathematical Conference Proceedings, Bar-Ilan University/American Mathematical Society/Oxford University Press, ISBN 978-0-19-851045-1, OCLC 223099225
• Homology (Topological space), PlanetMath.org.
Source
Content is authored by an open community of volunteers and is not produced by or in any way affiliated with or reviewed by PediaView.com. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Homology groups", which is available in its original form here:
http://en.wikipedia.org/w/index.php?title=Homology_groups
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8791260719299316, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/geometry/166089-line-slope.html
|
# Thread:
1. ## A Line With a Slope
A line with a slope of 4 passes through point B at (1,-3). Which of the following is the equation of the line?
A. y = 4x + 13
B. y = 4x - 7
C. y = -4x + 7
D. y= -4x - 13
2. $\displaystyle y-y_1=m(x-x_1)$
$m=4$
3. Originally Posted by HRoseJ
A line with a slope of 4 passes through point B at (1,-3). Which of the following is the equation of the line?
A. y = 4x + 13
B. y = 4x - 7
C. y = -4x + 7
D. y= -4x - 13
In light of the help already given, if you need more please show what you have done and say where you are stuck.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215810298919678, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/19796/how-does-newtons-2nd-law-correspond-to-gr-in-the-weak-field-limit
|
# How does Newton's 2nd law correspond to GR in the weak field limit?
I can only perform the demonstration from the much simpler $E = mc^2$.
Take as given the Einstein field equation:
$G_{\mu\nu} = 8 \pi \, T_{\mu\nu}$
... can it be proved that Newton's formulation of gravitational and mechanical force (e.g. $F = ma$) corresponds to Einstein's in the limit when masses are small and speeds are relatively slow?
-
## 1 Answer
Newton's gravitational theory is a weak-field, low-velocity limit of general relativity but the precise map between equations and limits is different than you think.
In general relativity, one must use both Einstein's equations as well as the condition that freely falling objects are moving along geodesics – time-like world lines that maximize the proper time on them, i.e. satisfy $$\delta\int {\rm d}t_{\rm proper}=0$$ and these two equations – Einstein's equations describing the gravitational field as created by the sources of gravity; and the geodesic equations describing how probes react to the gravitational field – may be used to derive $$\frac{GMm}{r^2} = m\vec{\ddot x }$$ or similar classical equations (please add the unit vector to the left hand side above). Of course, when one does so, he should know the natural description of classical gravity in terms of the gravitational potential, the Poisson equation it obeys, and other things. In the Newtonian limit, $g_{00}$ component of the metric tensor largely depends on the gravitational potential $\Phi$ as $g_{00}=1+2\Phi/c^2$, as can be seen by simplifying Einstein's equations in the non-relativistic limit where they reduce primarily to the Poisson equation, and this influences geodesics etc.
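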
One may also study the motion of non-freely-falling objects in general relativity and derive the equations for objects influenced by many forces although the formalism may look "advanced" in the language of general relativity and one recycles some concepts of classical physics, anyway.
-
That freely falling objects move on geodesics follows from the field equations if one assumes a delta-function source and makes use of the Bianchi identities. Thus, strictly speaking, one does not need to use both. – WIMP Jan 22 '12 at 7:40
1
Good point, WIMP. Still, deriving mechanics out of field theory of solitons is a bit advanced way for a beginner who wants to see his or her old equations of mechanics... – Luboš Motl Jan 23 '12 at 11:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197959303855896, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/166605/convergence-in-binomial-series
|
# Convergence in binomial series
Let $r>0$, $\varepsilon>0$ and $\alpha>0$. Assume that $0<\varepsilon<x<r$. I want a power series in $x$ for $x^{\alpha}$. Here is my attempt.
We may assume that $r<1$. $$x^{\alpha}=(1+(x-1))^{\alpha}=\sum_{k=0}^{\infty}\binom{\alpha}{k}(x-1)^k$$ $$=\sum_{k=0}^{\infty}\binom{\alpha}{k}\sum_{j=0}^k \binom{k}{j}x^j(-1)^{k-j}.$$ Now I want to obtain one sum, say, $$=\sum_{n=0}^{\infty} c_n x^n.$$ How could I achieve this and this rearranged series will be uniformly convergent in $[\varepsilon,r]$? (A "little bit" smaller interval is also satisfactory.)
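For what it's worth, the first display (the expansion in powers of $x-1$) can be checked numerically on $[\varepsilon,r]$; the problematic part is the rearrangement into powers of $x$. A small sketch, with arbitrary test values of $\alpha$, $\varepsilon$, $r$:

```python
# Partial sums of sum_k C(alpha, k) (x-1)^k against x**alpha on [eps, r].
import math

def binom(alpha, k):
    """Generalized binomial coefficient alpha(alpha-1)...(alpha-k+1)/k!."""
    out = 1.0
    for j in range(k):
        out *= (alpha - j) / (j + 1)
    return out

alpha, eps, r = 0.5, 0.1, 0.9          # arbitrary test values
xs = [eps + i * (r - eps) / 50 for i in range(51)]

for N in (5, 20, 80):
    err = max(abs(sum(binom(alpha, k) * (x - 1) ** k for k in range(N + 1))
                  - x ** alpha) for x in xs)
    print(N, err)                       # the sup-norm error shrinks as N grows
```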
-
Why don't you just use Taylor formula? – Norbert Jul 4 '12 at 15:26
How, precisely, is $\binom\alpha k$ defined for non-integer $\alpha$? The only reasonable way I can think of is with the Gamma function. – Cameron Buie Jul 4 '12 at 15:29
@CameronBuie: $\binom\alpha k=\alpha(\alpha-1)\cdots(\alpha-k+1)/k!$. – Harald Hanche-Olsen Jul 4 '12 at 15:32
Interesting, Harald. Does that work out for a general binomial expansion? – Cameron Buie Jul 4 '12 at 15:39
@Norbert the last power series is about $0$. Calculating the derivative of $x^{\alpha}$ at $0$ you would obtain zero in the denominator. – vesszabo Jul 4 '12 at 16:25
## 1 Answer
Such a series would converge for $|x|<r$ and that is possible only for integer nonnegative values of $\alpha$.
-
right, the power series is convergent in a symmetric interval about $0$, so it will converge for $|x|<r$, but for what function? Why only for nonnegative integer values of $\alpha$? The binomial series converges absolutely and uniformly for $|x-1|<1-\delta$ ($\delta>0$ is "small"); this is the reason for the condition $0<\varepsilon<x$ and $x<r<1$. The $\varepsilon$ condition is important. – vesszabo Jul 4 '12 at 16:37
@vesszabo the answer is about the series $\sum_{n=0}^{\infty} c_n x^n$. – Andrew Jul 4 '12 at 17:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8828324675559998, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/52894/how-did-lord-rayleigh-derive-determine-the-phase-function-for-his-scattering-mod/52897
|
How did Lord Rayleigh derive/determine the phase function for his scattering model?
I've been researching the question for quite some time, as I understand it the phase function is actually an approximation due to the particle-wave duality inherent in participating media such as the atmosphere or anything else.
As they are reemitted from particles, some EM waves mingle with their neighbours and amplify or kill off each other, therefore general approximate lobes are constructed from empirical data.
The Rayleigh scattering phase function is symmetrical as defined:
$$\Phi_R(\theta) = \frac{1}{4\pi}\frac{3}{4}(1 + \cos^2\theta)$$ However, I have no idea where this came from. I tried researching different phase functions, even those more modern from Henyey and Greenstein in 1941. but they just stated it is an approximation and gave the final form with no details.
Very interesting, thank you! The coefficient is $3/4$, and the $1/4\pi$ is the normalization constant ($4\pi$ being the number of steradians in a sphere). I will leave the question open for a little bit more, perhaps someone would like to weigh in with more references. – ScatteredFrom Feb 2 at 15:41
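A quick numerical check of that normalization, integrating the phase function over the full sphere with the solid-angle element $2\pi\sin\theta\,d\theta$ (a sketch; the midpoint-rule resolution is arbitrary):

```python
# Integrate the Rayleigh phase function over the sphere; the result should be ~1.
import math

def phase(theta):
    return (1.0 / (4.0 * math.pi)) * 0.75 * (1.0 + math.cos(theta) ** 2)

N = 100000
dtheta = math.pi / N
total = sum(phase((i + 0.5) * dtheta) * 2.0 * math.pi
            * math.sin((i + 0.5) * dtheta) * dtheta
            for i in range(N))
print(total)   # ~1.0
```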
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9594473838806152, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showpost.php?p=763681&postcount=6
|
Quote by eljose ..in fact if you knew $$\pi(x^{a})$$ with a total error O(x^d) by setting a=Ad and making A--->oo (infinite) the total error would be e=1/A O(x^e) with e the smallest positive number...
This looks like nonsense to me. If you can find $$\pi(x^{a})$$ with a total error O(x^d), then your a and d are fixed. As this A changes, your error analysis will no longer be correct. Before, you tried to do some kind of change of variables to make the error look smaller, but actually did nothing at all; is this what you're doing again?
then there's the whole "smallest positive number" problem...
Quote by eljose ( i know this exist i have seen proofs with results a+e being e an infinitesimal number.
No you haven't, not in any number theory paper at least. You'll often see an error like $$O(x^{1+\epsilon})$$ and they say you can take any $$\epsilon>0$$. They are not saying that $$\epsilon$$ is an "infinitesimal number", just that the big O bound of that form holds for any possible fixed $$\epsilon>0$$ that you like, though the big O constant may depend on epsilon (sometimes they write $$O_\epsilon (x^{1+\epsilon})$$ to make this dependence explicit, but not usually).
Quote by eljose i can not calculate the error it will be in general O(x^d) with d=a+b+c a=number of time used to evaluate the integral.
So you're saying the more time you spend on calculating the integral (presumably to get a more accurate value for it), the larger your error term? That sounds bad...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9291951656341553, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/182097/is-similarity-surjection
|
# Is similarity surjection?
How to prove that a similarity $f:\mathbb R^n \to \mathbb R^n$, i.e. a map for which there is $0<r<1$ such that $d(f(x),f(y))=r \, d(x,y)$ for any $x,y\in\mathbb R^n$, is a surjection? Intuitively this is the case, but I can't prove it rigorously or give any counterexample. Thanks.
-
## 1 Answer
You can compose a similarity of $\mathbf R^n$ with scaling (a bijection) to obtain an isometry, so you can assume without loss of generality that $f$ is an isometry.
$\mathbf R^n$ as a metric space has the property that for any two points, there is a unique geodesic line connecting them: the straight line. Isometries preserve distances, so they must also preserve geodesics, and hence also straight lines, so they are affine.
Therefore, if you compose an isometry $f$ with translation by $-f(0)$ (again a bijection), you obtain a linear isometry of $\mathbf R^n$, so without loss of generality $f$ is a linear isometry.
To check that a linear isometry is bijective, you just need to check that it transforms the sequence of basis vectors to a linearly independent sequence. That's not hard to do: if you had some $\sum_{i<n}\alpha_if(e_i)=0$, then, because $f$ is a linear isometry, $\sum_{i<n}\alpha_ie_i=0$, so $\alpha_i$ are all zero, and we're done.
This also shows that any similarity is a composition of a linear isometry, a translation and a scaling, which, with some more work, can be used to show that it is also a composition of several reflections and a scaling.
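To see the decomposition at work in a concrete case, here is a small numerical sketch. The answer shows that every similarity of $\mathbf R^n$ can be written as $x\mapsto rQx+b$ with $Q$ orthogonal; the particular $Q$, $r$, $b$ below are random test data, and the explicit inverse $y\mapsto Q^{T}(y-b)/r$ exhibits surjectivity directly.

```python
# A similarity f(x) = r*Q*x + b with Q orthogonal, and its explicit inverse.
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal: a linear isometry
r, b = 0.5, rng.standard_normal(n)                 # scaling ratio and translation

f = lambda x: r * Q @ x + b
f_inv = lambda y: Q.T @ (y - b) / r

y = rng.standard_normal(n)            # an arbitrary target point
print(np.allclose(f(f_inv(y)), y))    # True: every y has a preimage

# sanity check that f really is a similarity with ratio r:
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(np.linalg.norm(f(u) - f(v)), r * np.linalg.norm(u - v)))  # True
```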
-
Your answer is very helpful, thanks. – Deco Aug 13 '12 at 18:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9591009020805359, "perplexity_flag": "head"}
|
http://www.physicsforums.com/showthread.php?t=592251
|
Physics Forums
## Strange parabolic calculations - need simplification help.
I'm trying to find the equation of a parabola that goes through any three given points in the x-y plane denoted by (a,b), (c,d), and (e,f), which would be considered givens in these calculations. I need the form of the equation y = Ax^2 + Bx + C where A, B, and C are unknown.
I have gotten as far as to find C, which means that with those original 6 givens I know the C term of any parabolic equation:
C = $\frac{a^2(cf-de) + c^2(be-af) + e^2(ad-bc)}{a^2(c-e) + c^2(e-a) + e^2(a-c)}$
Which is screaming to be simplified but I can't get it any further. I know this is correct by checking with known parabolic functions and test points but getting the general function is proving difficult. Any ideas?
I think it's weird that there's a^2, c^2, and e^2 terms on the top and bottom, and in the numerator there's exactly two of each letter in the parenthesis, one in a negative term and one in a positive term. And that in the denominator the squared terms match some of the letters with the squared terms in the numerator, eg: a^2(cf-de) vs a^2(c-e) etc
Thanks
Perhaps a simpler way to solve the problem is to substitute your three points (x,y) into the equation, and obtain three equations in the three unknowns A,B,C; then solve this system. If a 3-equation system gives you problems, you could try subtracting one equation from the other two, to obtain two equations in the two unknowns A,B. Be aware that the assumption is that your equation has the given form; that is, the parabola's axis is always vertical, not rotated to one side.
Quote by Dodo Perhaps a simpler way to solve the problem is to substitute your three points (x,y) into the equation, and obtain three equations in the three unknowns A,B,C; then solve this system. If a 3-equation system gives you problems, you could try subtracting one equation from the other two, to obtain two equations in the two unknowns A,B.
That's exactly what I did to get to where I am now.
Quote by Dodo Be aware that the assumption is that your equation has the given form; that is, the parabola's axis is always vertical, not rotated to one side.
What kind of equation would that be? Is it still possible that any three given points (other than completely vertical or horizontal ones) could lie on a parabola with a vertical axis?
Quote by Axecutioner What kind of equation would that be? Is it still possible that any three given points (other than completely vertical or horizontal ones) could lie on a parabola with a vertical axis?
I meant, as long as your equation is y = Ax^2 + Bx + C. A more general quadratic, of the form Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 could be a parabola, possibly rotated.
As for your three points, yes, you should always find a solution as long as the three points are different (no repeated points). If they are on a line, you'll just get A=0. And yes, vertical lines are forbidden (horizontals will just produce A=B=0 and C=y).
You could easily approach this as a linear regression problem. Least squares will generate a quadratic polynomial that fits through all three points.
I have no idea what linear regression is. :P and that's interesting about the Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 formula. But given 3 points and 6 unknowns, wouldn't it be impossible to solve? Meaning there are many different parabolas that fit those 3 points given variable axis slope?
The problem is that the general quadratic form also produces hyperbolas and ellipses.
Quote by Axecutioner Meaning there are many different parabolas that fit those 3 points given variable axis slope?
That is right; if you want one parabola, you have to fix something else. (Plus, as Office_Shredder says, that was a formula for a generic conic section.) That's why I mentioned your formula and a vertical axis being assumed; Ax^2 + Bx + C = y will get you parabolas "standing up" (opening either up or down).
Here is a worked out example, as I'm not sure exactly where you are stuck. If you are given the points (1,7), (3,11), (5,13), for example, you set up the system
$$\begin{align*} A+B+C &= 7 \\ 9A + 3B + C &= 11 \\ 25A + 5B + C &= 13 \end{align*}$$
Subtracting the first equation from the other two, you obtain the smaller system
$$\begin{align*} 8A + 2B &= 4 \\ 24A + 4B &= 6 \end{align*}$$
Divide the second equation by 2, making it $12A + 2B = 3$, and subtracting them the $2B$ term is cancelled, leaving $4A = -1$ or $A = -\frac 1 4$. Substituting this value in the first equation gives you $2B = 4 - 8(-\frac 1 4)$ or $B = 3$; and finally, using one of the equations from the first system (pick the simplest one), you get $C = \frac {17} 4$.
Dodo, I know how to do that given points. I'm trying to find an equation for the parabola given any three points as I stated in the OP. I know how to do it. All I'm asking for is simplifying the C= equation I put in the OP if it can be done and nobody has posted anything related to what I originally asked.
Quote by Axecutioner All I'm asking for is simplifying the C= equation I put in the OP if it can be done and nobody has posted anything related to what I originally asked.
$C=\frac{fac(a-c)+bce(c-e)+dea(e-a)}{(a-c)(c-e)(a-e)}$
is the best that I can do to try to answer your specific question.
That (e-a) in the numerator worries me. My result equals yours, but it spoils the pattern. Maybe it should be e-a in the denominator and the whole thing needs a minus sign. That would make the pattern more uniform.
There is still a great deal of symmetry in my result, but I haven't found any way to reduce it further.
If you prefer you can use your original numerator with my denominator, the result is the same.
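A symbolic check of this claim is easy to run; here is a small SymPy sketch, with no particular points assumed:

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')

# Denominator from the first post versus the factored form above.
denom_original = a**2*(c - e) + c**2*(e - a) + e**2*(a - c)
denom_factored = (a - c)*(c - e)*(a - e)
print(sp.simplify(denom_original - denom_factored))  # 0, so the two denominators agree

# Full expression for C above versus the one in the first post.
C_factored = (f*a*c*(a - c) + b*c*e*(c - e) + d*e*a*(e - a)) / denom_factored
C_original = (a**2*(c*f - d*e) + c**2*(b*e - a*f) + e**2*(a*d - b*c)) / denom_original
print(sp.simplify(C_factored - C_original))          # 0, so the numerators match as well
```

Both differences simplify to zero, so the two forms of C are the same rational function.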
There is no reason to expect that expression to simplify any further. Just because it is long and follows a nice pattern in both the numerator and denominator doesn't mean anything. Many students have also made the mistake of cancelling the x out of $$\frac{x}{x+a}$$ because they felt it was simpler.
Bill Simpson, thanks! I'm surprised the denominator could factor into (a-c)(c-e)(a-e) but it does work. How'd you figure that out? I like the numerator you set up too, will play around with it and see what works out.

Mentallic, is there a way to separate that fraction out, in any way?

Also: If you start with 3 points (a,b), (c,d), (e,f), the slopes of the lines between the first two and last two are $\frac{d-b}{c-a}$ and $\frac{f-d}{e-c}$. Therefore the change in slope between those two lines is $\frac{\frac{f-d}{e-c} - \frac{d-b}{c-a}}{e-a}$, correct? So given that change in slope, and knowing that a parabola has a constant change in slope (second derivative is constant), then that change in slope formula must be the A term of Ax^2 + Bx + C?

Edit: Oh, no, maybe not. Because when differentiating Ax^2 + Bx + C you get a 2 multiplied into the A term by power rule. So A would be half that formula? Or is it right?
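A quick numerical experiment may help with the factor-of-2 worry; the parabola and the three sample x-values below are hypothetical, chosen only to test the formula:

```python
# Hypothetical parabola y = 2x^2 - 3x + 5, sampled at x = 1, 4, 6.
A_true, B_true, C_true = 2.0, -3.0, 5.0
p = lambda x: A_true*x**2 + B_true*x + C_true

(a, b), (c, d), (e, f) = [(x, p(x)) for x in (1.0, 4.0, 6.0)]

slope1 = (d - b) / (c - a)          # secant slope through the first two points
slope2 = (f - d) / (e - c)          # secant slope through the last two points
A_estimate = (slope2 - slope1) / (e - a)

print(A_true, A_estimate)           # both print 2.0 for this example
```

The two ratios here are secant slopes rather than derivatives, so in this experiment no extra factor of 2 appears.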
Quote by Axecutioner How'd you figure that out? I like the numerator you set up too, will play around with it and see what works out.
I read a book many many years ago on symmetries and it changed the way I think and see things. Unfortunately I don't remember the title or the author. It is buried here somewhere, but I doubt I'll ever find it again.
Perhaps someone else can contribute the title or other titles on how symmetries can be used to understand some problems much more easily.
Interesting. If you ever find the title or author please let me know. Here's a different version of the C equation: $\frac{ade^2 - bce^2 + bc^2e - a^2de - ac^2f + a^2cf}{ae^2 - ce^2 + c^2e - a^2e - ac^2 + a^2c}$ I wonder if the top could factor out similarly to the way you got it to in the bottom. It'd sure make things easier...
Quote by Axecutioner Here's a different version of the C equation:
I believe I had that numerator and several others, but I settled on the one because it seemed to most clearly highlight the symmetry between the variables.
If you have not already done so then you might think about why there is this symmetry in the variables. It may be superficial and unimportant or there might be levels of enlightenment when you look at the reasons and consequences of the symmetry. I suppose there might even be a possibility of discovering other forms and perhaps even simpler forms to be found as a result. I'm not seeing any simpler forms yet.
I went ahead and used your original suggestion to solve for A using: C=(fac(a-c)+bce(c-e)+dea(e-a))/((a-c)(c-e)(a-e)) And let me tell ya, it's amazing how complicated it got, then how simple the answer is. I'll scan my work tomorrow and post it - way too long to type out. Again, thank you for that :D Was a big help.
Axecutioner: That the "C" term should be complicated is not so strange. I believe you would get simpler coefficients for the general expression if you afterwards rearrange them according to the following general formula: $A(x+b)^2+c$. In this case, "c" also has a neat interpretation as an extremal value, "b" as the symmetry axis, and "A" has its standard interpretation.
http://math.stackexchange.com/questions/281271/is-this-formula-for-the-number-of-nodes-for-a-complete-tree-or-a-full-and-comple?answertab=active
# Is this formula for the number of nodes for a complete tree or a full and complete tree?
In a lecture it was said that "How many nodes are there in a complete k-ary tree with height h?" and this was the answer:
$$\sum^{h}_{i = 0}k^i$$ where h is the height and k is the max number of children
It's supposed to be "how many nodes in a full and complete k-ary tree" right? Because the definition of a complete tree is:
a binary tree T with n levels is complete if all levels except possibly the last are completely full, and the last level has all its nodes to the left side.
So the last level might not be completely full, which means this formula only applies to a full and complete tree. Am I right, or did I misunderstand something?
-
Yes you're right. – Jernej Jan 18 at 8:38
## 1 Answer
Yes you are correct. It should be a full and complete tree.
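For concreteness, here is a minimal sketch (the function name is just for illustration) that counts nodes level by level in a full and complete k-ary tree and checks the sum against the usual closed form:

```python
# In a full and complete k-ary tree of height h, every level i (0..h) holds k**i nodes.
def full_complete_nodes(k: int, h: int) -> int:
    return sum(k**i for i in range(h + 1))

# The closed form (k**(h+1) - 1) / (k - 1) should agree for k >= 2.
for k in (2, 3, 4):
    for h in (0, 1, 2, 3):
        assert full_complete_nodes(k, h) == (k**(h + 1) - 1) // (k - 1)

print(full_complete_nodes(2, 3))  # 15 nodes in a full, complete binary tree of height 3
```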
-
http://nrich.maths.org/7529/index?nomenu=1
## 'Odds, Evens and More Evens' printed from http://nrich.maths.org/
Here are the first few sequences from a family of related sequences:
$A_0 = 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29...$
$A_1 = 2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42...$
$A_2 = 4, 12, 20, 28, 36, 44, 52, 60...$
$A_3 = 8, 24, 40, 56, 72, 88, 104...$
$A_4 = 16, 48, 80, 112, 144...$
$A_5 = 32, 96, 160...$
$A_6 = 64...$
$A_7 = ...$
.
.
.
Which sequences will contain the number 1000?
Once you've had a chance to think about it, click below to see how three different students began working on the task.
Alison started by thinking:
"I have noticed that each number is double the number in the row above. I wonder if I can work out what would go in the rows above 1000?"
Bernard started by thinking:
"I have noticed that in $A_1$, the numbers which end in a 0 are 10, 30, 50... If I carry on going up in 20s I won't hit 1000, so I know 1000 isn't in $A_1$."
Charlie started by thinking:
"I have noticed that each number in $A_1$ is 2 more than a multiple of 4. I know 1000 is $250 \times 4$ so it can't be in $A_1$."
Can you take each of their starting ideas and develop them into a solution?
Here are some further questions you might like to consider:
How many of the numbers from 1 to 63 appear in the first sequence? The second sequence? ...
Do all positive whole numbers appear in a sequence?
Do any numbers appear more than once?
Which sequence will be the longest?
Given any number, how can you work out in which sequence it belongs?
How can you describe the $n^{th}$ term in the sequence $A_0$? $A_1$? $A_2$? ... $A_m$?
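If you would like to experiment with the last question on a computer, here is a short sketch; it assumes the pattern suggested by Alison's halving idea (that $A_m$ collects $2^m$ times the odd numbers), which is exactly the observation the problem invites you to justify:

```python
def sequence_index(n: int) -> int:
    """Return m such that n appears in A_m, assuming A_m consists of
    2**m times the odd numbers (an observation to be justified)."""
    m = 0
    while n % 2 == 0:   # keep halving, as in Alison's approach
        n //= 2
        m += 1
    return m

print(sequence_index(1000))  # 1000 = 8 * 125, so this prints 3
```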
http://nrich.maths.org/784
# Ellipses
##### Stage: 4 and 5 Challenge Level:
Here is a pattern for you to experiment with using graph drawing software. For example, you can download Graphmatica for free from here and it comes with a good Help file.
The equations of two of the graphs are: $$\frac{x^2}{36}+\frac{y^2}{16}=1$$ $$x^2 + y^2 = 1$$ Find the equations of the other 8 graphs in this pattern.
How do you know from their equations that all the graphs are symmetrical about both the y-axis and the x-axis?
Draw your own pattern of ellipses and circles.
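If you prefer not to install graphing software, a few lines of Python with Matplotlib can draw the two curves given above, and the script is easy to extend with your own guesses for the other equations:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)

# x^2/36 + y^2/16 = 1, parametrised as x = 6 cos t, y = 4 sin t
plt.plot(6 * np.cos(t), 4 * np.sin(t), label="x^2/36 + y^2/16 = 1")
# x^2 + y^2 = 1
plt.plot(np.cos(t), np.sin(t), label="x^2 + y^2 = 1")

plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```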
http://mathoverflow.net/questions/15751/filter-closed-vs-chain-closed/15758
## Filter-closed vs. chain-closed
Let A be a complete lattice.
I call a subset $S$ of A filter-closed when for every filter base $T$ in $S$ we have $\bigcap T\in S$. (A filter base is a nonempty, down directed set.)
I call a subset $S$ of A chain-closed when for every non-empty chain $T$ in $S$ we have $\bigcap T\in S$.
Conjecture: $S$ is filter-closed if and only if $S$ is chain-closed.
-
Would you mind defining "chain" for completeness sake? I am guessing it is just a decreasing sequence, but it would be nice to nail this down before I think too hard about it. – David Speyer Feb 18 2010 at 19:54
The formulation of filter-closed is presumably wrong, one instance of A should be S – Gerald Edgar Feb 18 2010 at 20:19
A chain is a subset of a poset on which the induced order is complete. – Michael Greinecker Feb 18 2010 at 20:20
A chain is a totally-ordered set. – Gerald Edgar Feb 18 2010 at 20:20
Thanks, Gerald Edgar. A changed to S. – porton Feb 18 2010 at 20:27
## 4 Answers
Indeed, your conjecture is correct.
Theorem. If L is a complete lattice and S is a subset of L, then S is chain-closed iff S is filter-closed.
Proof. Clearly filter-closed implies chain-closed, since every chain is a filter base. Conversely, suppose that S is chain-closed, and that A is a filter base contained in S. Note that S is trivially filter-closed with respect to any finite filter base. So suppose by induction that S is filter-closed with respect to any filter base of size smaller than |A|. Enumerate $A = \{ a_\alpha \mid \alpha < |A| \}$. Let $b_\beta$ be the meet of $\{ a_\alpha \mid \alpha < \beta \}$. This is the same as the meet of the filter sub-base of A generated by this set. This filter sub-base has size less than |A|, and hence by induction every $b_\beta$ is in S. Also, the $b_\beta$ are a descending chain in S, since as we take more $a_\alpha$, the meet gets smaller. Thus, by the chain-closure of S, the meet b of all the $b_\beta$ is in S. This meet b is the same as the meet of A, and so we have proved that S is filter-closed. QED
This argument is very similar to the following characterization of (downward) complete lattices (which I had posted as my original answer).
Theorem. The following are equivalent, for any lattice L.
• L is complete, in the sense that every subset of L has a greatest lower bound.
• L is filter complete, meaning that every filter base in L has a greatest lower bound.
• L is chain complete, meaning that every chain in L has a greatest lower bound.
Proof. It is clear that completeness implies filter completeness, since every filter base is a subset of L, and filter completeness implies chain completeness, since every chain is a filter base. For the remaining implication, suppose that L is chain complete. We want to show that every subset A of L has a greatest lower bound in L. We can prove this by transfinite induction on the size of A. Clearly this is true for any finite set, since L is a lattice. Fix any infinite set A. Enumerate A as $\{ a_\alpha \mid \alpha < |A| \}$. By the induction hypothesis, for each $\beta < |A|$, the set $\{ a_\alpha \mid \alpha < \beta \}$ has a greatest lower bound $b_\beta$. Note that $\{ b_\beta \mid \beta < |A| \}$ is a chain, because as we include more elements into the sets, the greatest lower bound becomes smaller. Thus, there is an element b in L that is the greatest lower bound of the $b_\beta$'s. It is easy to see that this element b is also a lower bound of A. QED
One can describe the method as finding a linearly ordered cofinal sequence through the filter generated by the filter base. This proof used AC when A was enumerated, and I believe that this cannot be omitted.
One can modify the argument to show that for every infinite cardinal κ, then a lattice is κ-complete (every subset of size less than κ has a glb) iff every filter base of size less than κ has a glb iff every chain of size less than κ has a glb.
Note that if the lattice is bounded (meaning that it has a least and greatest element), then having greatest lower bounds for every set is the same as having least upper bounds for every set, since the least upper bound of a set A is the greatest lower bound of the set of upper bounds of A. Thus, a complete lattice is often defined as saying that every subset has a glb and lub.
There have been a few questions here at MO concerning complete lattices. See this one and this one.
Questions about the degree of completeness of a partial order often arise in connection with forcing arguments, and when one is speaking of partial completeness and partial orders (rather than lattices), and the situation is somewhat more subtle. For example, a partial order P is said to be κ-closed if every linearly ordered subset of P of size less than κ has a lower bound. It is κ-directed closed if every filter base in P of size less than κ has a lower bound. With these concepts, it is no longer true that a partial order is κ-directed closed if and only if it is κ-closed. One example arising in forcing would be the forcing to add a slim κ-Kurepa tree, which is κ-closed but not κ-directed closed. The difference between these two concepts is related to questions of large cardinal indestructibility, for Richard Laver proved that every supercompact cardinal κ can become indestructible by all κ-directed closed forcing, but no such cardinal can ever be indestructible by all κ-closed forcing, precisely because the slim κ-Kurepa tree forcing destroys the measurability of κ.
-
This is correct but there is a subtlety with the question. It is not sufficient to show that S is by itself a complete lattice, you also need that the meets of filters in S are the same as those computed in the enveloping complete lattice A. A small variation of your argument does this. – François G. Dorais♦ Feb 19 2010 at 0:10
Francois, you are right. I was mainly thinking just about whether completeness is equivalently characterized by filters and chains for a lattice. But the same idea works, and I'll edit my answer accordingly. – Joel David Hamkins Feb 19 2010 at 1:03
Note: It isn't generally true that S will be a complete lattice, for it needn't even be a lattice. For example, perhaps S is an antichain in the larger lattice. This is trivially chain-closed, since all chains in S have only one element. (And it is also trivially filter-closed, since all filter bases subset S also have only one element.) – Joel David Hamkins Feb 19 2010 at 3:13
"This filter sub-base has size less than |A|" - why? – porton Feb 23 2010 at 21:15
Can we add an additional equivalent statement to the formulation of the conjecture we have proved? (such as: the meet of T is in S for every non-empty subset T of S) – porton Feb 23 2010 at 21:17
How is the following related to the question...
Let $P$ be a partially-ordered set. Suppose every chain in $P$ has a least upper bound. Then every subset of $P$ which is directed has a least upper bound.
I needed this once, long ago, didn't find it, so included a proof in the paper. Many years later someone gave me a reference for it: Mayer-Kalkschmidt & Steiner, Duke Math. J. 31 (1964) 287-289
-
No, your answer is on a different problem than I asked. – porton Feb 18 2010 at 20:51
@Porton: It's not that different. Your chain-closed subset S is a poset where each chain has a greatest lower bound. By the (dual of the ) above, every filter in S has a greatest lower bound in S. All you need to check is if this greatest lower bound is indeed the same as the one in A. You will figure that out pretty quickly by looking at the proof... – François G. Dorais♦ Feb 18 2010 at 21:20
I made a post to sci.math regarding this question. You might check that post for some suggestions. Others might check the thread for more history on that problem.
Gerhard "Ask Me About System Design" Paseman, 2010.02.18
-
I don't see your post in sci.math! Note also that I already have asked this in sci.math a few months ago. – porton Feb 18 2010 at 20:44
A Google Groups search on "Paseman filter" brings it up quickly. The advice there was (essentially) to study the finite case. – Gerhard Paseman Feb 18 2010 at 23:05
The reason it is hard to find... The "From" line is not "Gerhard Paseman" but "Ask me about System Design". Dated December 30, 2009. – Gerald Edgar Feb 19 2010 at 14:55
I wrote a more detailed proof based on the proof by Joel David Hamkins in this online article.
-
http://math.stackexchange.com/questions/66472/proving-absolute-value-inequality?answertab=oldest
# Proving Absolute Value Inequality
I had posted a portion of this earlier asking about how to interpret min(). I received some excellent answers, however, I have run into problems and feel stuck. I am posting the question in its entirety.
Let $x, x_{0},y,y_{0}$ be real numbers and $\varepsilon$ be a positive real number. If we have
$$|x-x_{0}|<\min \left(\frac{ \varepsilon }{2(|y_{0}|+1)}, 1\right) \text{ and } |y-y_{0}|< \frac{ \varepsilon }{2(|x_{0}|+1)},$$ then prove that $|xy-x_{0}y_{0}|<\varepsilon$.
A hint is provided:
Write $xy-x_{0}y_{0}$ in terms of $x-x_{0}$ and $y-y_{0}$ and use the triangle inequality twice.
I've been rearranging and writing out what I know etc. in an attempt to find a solution:
$$|x-x_{0}|(2|y_{0}|)+2|x-x_{0}|<\varepsilon$$
$$|y-y_{0}|(2|x_{0}|)+2|y-y_{0}|<\varepsilon$$
$$|x-x_{0}|< 1$$
-
HINT: $xy - x_0y_0 = xy - xy_0 + xy_0 - x_0 y_0$. – JavaMan Sep 21 '11 at 21:02
$|x(y-y_{0})-y_{0}(x-x_{0})|<\varepsilon$ – Malthus Sep 21 '11 at 21:52
I've got $|x(y-y_{0})-y_{0}(x-x_{0})|\leq |x(y-y_{0})|+|y_{0}(x-x_{0})|$ but I cant go further and relate it back to the inequalities I'm provided with. – Malthus Sep 21 '11 at 22:24
Also $|x(y-y_{0})|+|y_{0}(x-x_{0})|=|x||(y-y_{0})|+|y_{0}||(x-x_{0})|$ – Malthus Sep 21 '11 at 22:28
## 1 Answer
Try to rewrite your initial expression as follows:
$$\begin{eqnarray}|xy-x_0y_0|&\leq&|xy-x_0y+x_0y-x_0y_0|\\ &\leq&|y||x-x_0|+|x_0||y-y_0|\\ &\leq&(|y_0|+|y-y_0|)|x-x_0|+|x_0||y-y_0|\\ &\leq& |y_0||x-x_0|+|y-y_0|(|x_0|+|x-x_0|)\\ &<&|y_0|\frac{\varepsilon}{2(|y_0|+1)}+\frac{\varepsilon}{2(|x_0|+1)}(|x_0|+1)\\ &<&2\frac{\varepsilon}{2}=\varepsilon. \end{eqnarray}$$
And between the passages we have used the fact that the condition $$|x-x_0|< \min\left(\frac{\varepsilon}{2(|y_0|+1)},1\right),$$ implies both $$|x-x_0|<\frac{\varepsilon}{2(|y_0|+1)}\quad\text{and}\quad |x-x_0|<1.$$
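Not a substitute for the proof, but here is a quick randomised sanity check of the statement (the sampling ranges are arbitrary):

```python
import random

def check_once(eps: float = 1.0) -> bool:
    # Draw hypothetical x0, y0, then x, y satisfying the two hypotheses.
    x0, y0 = random.uniform(-10, 10), random.uniform(-10, 10)
    bound_x = min(eps / (2 * (abs(y0) + 1)), 1.0)
    bound_y = eps / (2 * (abs(x0) + 1))
    x = x0 + random.uniform(-bound_x, bound_x) * 0.999   # strictly inside the bound
    y = y0 + random.uniform(-bound_y, bound_y) * 0.999
    return abs(x * y - x0 * y0) < eps

assert all(check_once() for _ in range(10_000))
print("no counterexample found")
```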
-
http://physics.stackexchange.com/tags/electronics/new
# Tag Info
## New answers tagged electronics
### Is it viable or possible to make your own transistor?
I believe so, yes. The hardest part will be obtaining the materials. If you manage to get a good piece of n-type (or p-type) silicon, big enough to allow you to work with home tools, you'll "just" have to do local oxidation (with heat, for example) and some soldering. Of course, the quality of that transistor would be very doubtful, as it ...
### When does Thevenin's theorem not apply (modelling a power source with a ohmic internal resistance)
There are actually two slightly different versions of Thevenin's theorem. I think what you are describing is the weaker of the two: you can replace any circuit with a single voltage/current source and a single resistor. That version holds for any two-terminal network made up only of voltage/current sources and ohmic resistors. It fails as soon as you add ...
### Working of a p-n junction diode when forward biased
Reverse bias of a P-N junction: When the voltage is applied this way round, it tends to pull the free electrons and holes apart and increases the height of the energy barrier between the two sides of the diode. As a result it is almost impossible for any electrons or holes to cross the depletion zone, and the diode current produced is virtually zero. A few lucky ...
### How does power consumption vary with the processor frequency in a typical computer?
For a given circuit in a given technology, power increases at a rate proportional to $f^3$ or worse. You can see by looking at the graph in @Martin Thompson's answer that power is superlinear in frequency. $P=c V^2 f + P_S$ is correct, but only superficially so because $f$ and $P_S$ are functions of $V$ and $V_{th}$ (the threshold voltage.) In practice ...
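To make the scaling concrete, here is a toy calculation; the switched-capacitance factor and the linear $V(f)$ relationship are purely illustrative assumptions, not data for any real chip:

```python
# Toy illustration: if the supply voltage must rise roughly linearly with clock
# frequency, the dynamic power c*V^2*f grows roughly like f^3.
c = 1e-9                              # assumed switched-capacitance factor (farads)
for f_ghz in (1.0, 2.0, 3.0, 4.0):
    v = 0.6 + 0.15 * f_ghz            # assumed linear V(f) relationship (volts)
    p_dynamic = c * v**2 * (f_ghz * 1e9)
    print(f"{f_ghz} GHz -> {p_dynamic:.2f} W (dynamic only)")
```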
http://mathoverflow.net/questions/76600/does-the-manifold-of-the-three-dimensional-group-of-rotations-so3-cause-a-separ/76660
## Does the manifold of the three dimensional group of rotations SO(3) cause a separation of space in the group of rigid motions SE(3)?
The group of three dimensional rotations $SO(3)$ is a subgroup of the Special Euclidean Group $SE(3) = \mathbb{R}^3 \rtimes SO(3)$. The manifold of $SO(3)$ is the three dimensional real projective space $RP^3$. Does $RP^3$ cause a separation of space in the manifold of $SE(3)$?
(edit) Sorry about lack of clarity. My question should be worded as 'does $SO(3)$ partition any four dimensional subspace of $SE(3)$ into exactly two disjoint pieces?'
I am basically interested in understanding whether a generalization of the Jordan curve separation theorem works in such non Euclidean spaces. In particular, I want to know if (non) orientability of $SO(3)$ affects the generalization, especially since it is used to construct $SE(3)$ as a product space with $\mathbb{R}^3$.
-
I don't understand the question. In particular, I don't understand the phrase "cause a separation of space". Certainly a codimension-3 submanifold does not separate the manifold into disconnected pieces when removed, which was my first read, and I don't have a second-read proposal. I recommend you look at mathoverflow.net/howtoask . In particular, do please define what you mean in more detail. Some context would also be very helpful. – Theo Johnson-Freyd Sep 28 2011 at 5:57
I think your question isn't well-formulated. In particular, no $4$-dimensional subspace of $SE(3)$ can separate, since $SE(3)$ is 6-dimensional. Your question is analogous to asking if a point in $\mathbb R^2$ separates $\mathbb R^2$. I suggest reading the section on the Jordan-Brouwer Separation Theorem in Guillemin and Pollack's "Differential Topology" text, as it should both help you formulate your question and answer it. – Ryan Budney Sep 28 2011 at 6:18
I am not asking if a four dimensional subspace of $SE(3)$ separates $SE(3)$. Rather, I would like to know if there is a four dimensional subspace of $SE(3)$ that is separated by three dimensional $SO(3)$, especially because $SO(3)$ is non orientable. Thanks. – Kurt Sep 28 2011 at 6:24
What does "separated by three dimensional $SO(3)$" mean? – Ryan Budney Sep 28 2011 at 6:28
FYI, the manifold $SO(3)$ is orientable, but the answer to your question would be unchanged if it was non-orientable -- say if you were interested in the same question with $SE(3)$ replaced by $M \times \mathbb R^3$ where $M$ is a non-orientable $3$-manifold. – Ryan Budney Sep 28 2011 at 17:13
## 3 Answers
Okay, now I think I understand your question. This is the question I will answer:
• Question: Let $X$ be a connected $4$-dimensional subspace of $SE(3)$ that contains $SO(3)$. Is it possible for $X \setminus SO(3)$ to be connected? Disconnected?
The answer to both questions is yes. So there is no Jordan separation theorem for $4$-dimensional subspaces of $SE(3)$ containing $SO(3)$.
Observation 1: As a space, $SE(3)$ is just the cartesian product of $SO(3)$ with $\mathbb R^3$. Explicitly, we will think of $SE(3)$ as the set $SO(3) \times \mathbb R^3$.
Observation 2: $X := SO(3) \times \mathbb R$ embeds in $SE(3)$, and $SO(3) \times \{0\}$ disconnects $X$.
Observation 3: If $X := SO(3) \times S^1$, where $S^1 = \{ x \in \mathbb R^2 : |x|=1\}$, then the map $X \to SO(3) \times \mathbb R^3$ given by $(p,x) \longmapsto (p,x,0)$ is an embedding. In particular, $X \setminus (SO(3) \times \{1\})$ is connected.
So the answer to both your questions is yes.
I'd like to suggest looking at the proof of the generalized Jordan-Brouwer theorem in Guillemin and Pollack, or perhaps in an algebraic topology textbook like Bredon's. This will give you a very flexible set of tools that will let you know quite generally when you can expect a separation theorem, and when you can't.
Notice: my answer had nothing to do with the fact that $SO(3)$ has a non-trivial fundamental group, or whether or not it is orientable. The key part of the construction is that $SO(3)$ has co-dimension at least $2$ (And actually co-dimension $3$) in $SE(3)$.
-
I cannot post comments yet, but I am interested in the answer to these questions. It appears $R^2 \times SO(3)$ will not partition $SE(3)$ into disconnected pieces because $R^2 \times SO(3)$ is not compact. What about the set $M \times RP^3$ where $M$ is the Möbius strip? That is a five dimensional surface. Does it partition $SE(3)$? Also, the original question is unanswered: does $SO(3)$ partition $R \times SO(3)$ into disconnected pieces? Curious to know.
-
I am not entirely sure what you mean by separation of space. But wouldn't it depend on the representation of SE(3) and SO(3)? For example, one can take the view that SE(3) is a dual projective space R\hat{P}^3 by using the dual quaternion representation for spatial rigid body displacements.
I am not a mathematician, so please ignore me if what I say does not make sense.
-
http://mathhelpforum.com/algebra/124994-position-vector-2-a.html
# Thread:
1. ## position vector 2
At any time t, a particle has a position vector (t^2+1)i+(3t-2)j, relative to the origin O. Show that the acceleration is always directed horizontally. Hence, find the angle between the velocity and acceleration of the particle at any time t.
v=2ti+3j, a=2i
Since a=2i, the particle is always accelerating along the x-axis.
$\cos \theta=\frac{2t(2)+3(0)}{\sqrt{4t^2+9}\cdot \sqrt{4}}$
$=\frac{2t}{\sqrt{4t^2+9}}$
So am I correct?
2. Originally Posted by hooke
At any time t, a particle has a position vector (t^2+1)i+(3t-2)j, relative to the origin O. Show that the acceleration is always directed horizontally. Hence, find the angle between the velocity and acceleration of the particle at any time t.
v=2ti+3j, a=2i
Since a=2i, the particle is always accelerating along the x-axis.
$\cos \theta=\frac{2t(2)+3(0)}{\sqrt{4t^2+9}\cdot \sqrt{4}}$
$=\frac{2t}{\sqrt{4t^2+9}}$
So am I correct?
seems fine
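For anyone who wants to double-check this with a computer algebra system, here is a short SymPy sketch that just repeats the differentiation above:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = sp.Matrix([t**2 + 1, 3*t - 2])   # position vector from the problem

v = r.diff(t)                        # velocity: (2t, 3)
a = v.diff(t)                        # acceleration: (2, 0), horizontal for all t

cos_theta = v.dot(a) / (v.norm() * a.norm())
print(sp.simplify(cos_theta))        # expect 2*t/sqrt(4*t**2 + 9)
```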
http://mathoverflow.net/questions/95517/density-of-0-homogeneous-functions-in-h1-partial-omega
## Density of 0-homogeneous functions in $H^1(\partial \Omega)$
Recall: A function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is called $0$-homogeneous if $f(\lambda x)= f(x)$ for every $\lambda>0$ and every $x\in \mathbb{R}^n$.
Question: Let $B$ be a convex, balanced, absorbent, bounded domain of $\mathbb{R}^n$. Is the space of $0$-homogeneous $C^\infty(\mathbb{R}^n\setminus\{0\})$ functions dense in $H^1(\partial B)$?
-
If "0-homogeneous" means $f(tx) = f(x)$ for all $t > 0$ and $\Omega$ is not convex, why would you expect it to be dense? – Deane Yang Apr 29 2012 at 20:14
Do you mean $C^\infty(\mathbb{R}^n\setminus\{0\})$? The only 0-homogeneous $C^0(\mathbb{R}^n)$ functions are the constants. If you mean that, consider the two cases $\Omega$ being the unit ball centered around the origin and $\Omega$ being the unit ball centered around the point $(2,0,0,\ldots)$. – Willie Wong May 1 2012 at 15:25
What Willie is hinting at is that you need to assume that $0$ is in the interior of $\Omega$. – Deane Yang May 1 2012 at 17:24
Do you intend $\partial \Omega$ to be "regular" in some sense -- piecewise smooth or rectifiable, for example? I guess that even if $\Omega$ is star shaped the boundary might be very rough. If it is, then how do you define $H^1(\partial \Omega)$? (This may be my ignorance -- possibly there is a general definition of $H^1(S)$ for closed subsets $S$ of $\mathbb{R}^n$ but I don't know it.) – Jeff Schenker May 1 2012 at 18:10
The point is that maybe you should try to figure this out on your own. You might not be able to get the best possible result yourself, but you should be able to figure out what conditions are sufficient. Just star-shaped is not enough. I also encourage you to work it out in detail for a two-dimensional domain. The key point is that the tangent plane at any point in the boundary has to be transversal to the ray from the origin. This allows you to use the inverse function theorem to prove what you want. I concede that this must be in the literature somewhere, but I don't know where. – Deane Yang May 1 2012 at 18:11
http://mathhelpforum.com/advanced-applied-math/116544-heat-problem-heat-equation.html
# Thread:
1. ## Heat problem with a heat equation
Consider a thermally isolated copper bar whose extremities are maintained at 0°C. The initial distribution of temperature is given by $T(x)=100 \sin \left ( \frac{\pi x}{L} \right )$ where $L$ is the length of the bar.
L = 10 cm.
Cross section of the bar: 1 cm^2. (I call it $A$)
Thermal conductivity of the copper: 0.92 cal/(s cm °C). (I call it $K$)
Specific heat of the copper: 0.093 cal/(g °C). (I call it $c$)
Density of the copper: 8.96 g/cm^3. (I call it $\rho$)
-------------------------------------------------------------------
1)Graph the initial distribution of temperature : Done.
2)What will be the final distribution of temperature after a very large time. Done. (The bar almost reaches 0°C everywhere).
3)Do a sketch about the distribution of the temperature after different amounts of time. Done. (Basically $T(x)$ becomes equal to $Y\sin \left ( \frac{\pi x}{L} \right )$ where $Y<100$.)
4)What is the gradient of the initial temperature at the extremities of the bar. Done: What I did was to differentiate $T(x)$ and evaluate it at $x=0$. I reached $\frac{100 \pi}{L}$.
5)What is the initial heat flux through the extremities of the bar?
What I did was to write down $\frac{dq}{dt}=KA \frac{ \partial T}{ \partial x} \big |_{x=0}$.
6)What is the initial heat flux in the center of the bar? (I reached 0).
What will be this value for any posterior time. (Always 0.)
Analyze this answer. (I believe that although there is a heat flux toward the right, it cancels out with the heat flux toward the left, hence the total net flux at the center of the bar always remains 0.)
7)What is the initial value of the derivative of the temperature with respect to time in the center of the bar.
(What I did was : Oh oh. I think I fell over the diffusion heat equation, namely $\frac{ d Q}{dt}=K \nabla ^2 T$. I've no idea about how to solve this. It seems like an ODE of second order with given initial conditions. So I'm stuck here.
8)Give an estimation of the necessary time for the bar to cool off (at around $0.01T_0$).
9)The derivative of the temperature with respect to time in the center of the bar must : stay constant, increase or decrease?
(What I answer : It must decrease otherwise the bar would not cool off).
----------------------------------------------------------------------
I have a doubt about 5), shouldn't it be $\frac{dq}{dt}=cA \frac{ \partial T}{ \partial x} \big |_{x=0}$? I guess I should check out the units.
Also I guess that $\rho$ appears in part 7)...
Can you confirm all what I did, or any part of it?
I thank you for any help; as tiny as it may seem to you, it is greatly appreciated by me.
2. Hi Arbolis. I'm a little rusty but let me try to answer some of them. First write it precisely:
$\frac{\partial u}{\partial t}=k\frac{\partial^2 u}{\partial x^2},\quad 0\leq x \leq 10,\quad t\geq 0$
$u(0,t)=0,\quad u(10,t)=0$
$u(x,0)=100\sin\left(\frac{\pi x}{10}\right)$
where $k=\frac{K}{CD}$ (I think).
To solve this PDE, you generally use separation of variables which is not too bad to learn. Doing that I get:
$u(x,t)=100 e^{-\left(\frac{\pi}{10}\right)^2 kt} \sin(\frac{\pi x}{10})$
Now that I have the solution, it's easy to calculate $\frac{\partial u}{\partial x}$ and substitute the point $(5,0)$ right?
Ok, so you got (1) and (2), and as far as (3), just do a Plot3D[u(x,t),{x,0,10},{t,0,20}] in Mathematica, looks to me. And (4) is just $\nabla u$. Isn't (5) just $\frac{\partial u}{\partial x}$ at the end points for t=0? (6) is the same derivative at x=5, looks to me. (7) and (8) you can do, right? As far as (9), just take partials of u with respect to t; I think that would answer that one.
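Here is a small numerical sketch of that solution; it assumes $k = K/(c\rho)$, matching the $k=\frac{K}{CD}$ guess above, so treat the numbers as a plausibility check rather than an authoritative answer:

```python
import math

K, c, rho, L, T0 = 0.92, 0.093, 8.96, 10.0, 100.0   # values from the problem statement
k = K / (c * rho)                                    # assumed thermal diffusivity k = K/(c*rho)

def u(x, t):
    """Separation-of-variables solution quoted above."""
    return T0 * math.exp(-(math.pi / L)**2 * k * t) * math.sin(math.pi * x / L)

print(f"k = {k:.3f} cm^2/s (approx)")
for t in (0, 10, 30, 60):
    print(f"temperature at the centre after {t:>2} s: {u(L/2, t):.2f} C")

# Rough cool-off estimate (question 8): solve exp(-(pi/L)^2 * k * t) = 0.01 for t.
t_cool = math.log(100) / ((math.pi / L)**2 * k)
print(f"time to reach about 0.01*T0: {t_cool:.0f} s")
```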
http://mathhelpforum.com/discrete-math/108421-venn-diagram-problem.html
Thread:
1. Venn Diagram Problem
In a group of 100 students, more students are on the fencing team than are members of the French club. If 70 are in the club and 20 are neither on the team nor in the club, what is the minimum number of students who could be both on the team and in the club?
My solution:
I let x = number of students in both the club and fencing team and A = the number of students on the fencing team
Then, the number of students in the club is 70 - x
And then we know:
A + (70-x) + 20 - x = 100
This means A = 10
So now I used the same formula (Total # = A + B - (A^B) + neither)
100 = 10 + (70-x) - x + 20
But this didn't work... why isn't this formula working?
For 3 sets, Total # = A + B + C - A^B - A^C - B^C + (AUBUC) + "others"
So I think my formula for 2 sets is right..
The answer is meant to be 61 but plugging into that formula doesn't work:
10 + 9 -61 + 20 =! 100
Help?
2. Originally Posted by fifthrapiers
In a group of 100 students, more students are on the fencing team than are members of the French club. If 70 are in the club and 20 are neither on the team nor in the club, what is the minimum number of students who could be both on the team and in the club?
Think of it this way. There are 80 students in French or fencing.
Of those 70 are in French. So there must be 10 in fencing that are not in French.
But the number in fencing must be more than 70.
So what in the minimum in both?
3. Hello, fifthrapiers!
In a group of 100 students, more students are in Fencing than in French.
If 70 are in French and 20 are neither in Fencing nor French,
what is the minimum number of students who could be in both activities?
I placed the information into a chart . . .
. . $\begin{array}{c||c|c||c|} & \text{Fencing} & \sim\text{Fencing} & \text{Total} \\ \hline \hline \text{French} & & & \\ \hline \sim\text{French} & & & \\ \hline \hline \text{Total} & & & \\ \hline\end{array}$
The total number of students is 100.
70 are in French.
20 are in Neither.
. . $\begin{array}{c||c|c||c|} & \text{Fencing} & \sim\text{Fencing} & \text{Total} \\ \hline \hline \text{French} & & & 70 \\ \hline \sim\text{French} & & 20 & \\ \hline \hline \text{Total} & & & 100 \\ \hline\end{array}$
We can fill in two more boxes:
. . $\begin{array}{c||c|c||c|} & \text{Fencing} & \sim\text{Fencing} & \text{Total} \\ \hline \hline \text{French} & & & 70 \\ \hline \sim\text{French} & {\color{blue}10} & 20 & {\color{blue}30} \\ \hline \hline \text{Total} & & & 100 \\ \hline\end{array}$
Let ${\color{red}x} \:=\:n(\text{Fencing}),\;\;{\color{red}y} \:=\:n(\text{French} \wedge \text{Fencing})$
. . $\begin{array}{c||c|c||c|} & \text{Fencing} & \sim\text{Fencing} & \text{Total} \\ \hline \hline \text{French} & {\color{red}y} & & 70 \\ \hline \sim\text{French} & 10 & 20 & 30 \\ \hline \hline \text{Total} & {\color{red}x} & & 100 \\ \hline\end{array}$
"More students in Fencing than in French": . $x \:>\:70$
Since $y+10 \:=\:x$, we have: . $y + 10 \:>\:70 \quad\Rightarrow\quad y \:>\:60$
Therefore, at least 61 students are in both activities.
4. Thank you, this makes sense.
My question now, though, is why did my method not work? And why did that formula not work? I've used it many times in the past and it has, but not on this one.
5. Originally Posted by fifthrapiers
My question now, though, is why did my method not work? And why did that formula not work? I've used it many times in the past and it has, but not on this one.
This is incorrect (Total # = A + B - (A^B) + neither)
$\left| \text{Total} \right| = \left| {A \cup B} \right| + \left| {A^c \cap B^c } \right| = \left[ {\left| A \right| + \left| B \right| - \left| {A \cap B} \right|} \right] + \left| {A^c \cap B^c } \right|$
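For completeness, here is a brute-force check of the minimum, using only the numbers given in the problem:

```python
total, in_french, neither = 100, 70, 20
possible = []
for both in range(in_french + 1):
    in_fencing = (total - neither) - in_french + both   # 80 students are in at least one activity
    if in_fencing > in_french:                          # more on the fencing team than in the club
        possible.append(both)
print(min(possible))   # 61
```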
http://math.stackexchange.com/questions/tagged/irrational-numbers+alternative-proof
# Tagged Questions
### Direct proof that $\sqrt{2}$ is irrational? [duplicate]
Possible Duplicate: Irrationality proofs not by contradiction. I've been puzzled for some days now, and I can't come up with an answer. I'm trying to come up with a direct proof that $\sqrt{2}$ ...
### Is this proof that $\sqrt 2$ is irrational correct?
Suppose $\sqrt 2$ were rational. Then we would have integers $a$ and $b$ with $\sqrt 2 = \frac ab$ and $a$ and $b$ relatively prime. Since $\gcd(a,b)=1$, we have $\gcd(a^2, b^2)=1$, and the fraction ...
### Sum of irrational numbers
Well, in this question it is said that $\sqrt[100]{\sqrt3 + \sqrt2} + \sqrt[100]{\sqrt3 - \sqrt2}$, and the owner asks for "alternative proofs" which do not use rational root theorem. I wrote an ...
### How to prove $e$ isn't a $\frac {a}{b}$. Not irrationality with other ways or about transcendental, only about fractions
I would like a proof that $e$ isn't a fraction $\frac{a}{b}$, for $a,b \in Z$ and $\gcd(a,b)=1$. Just an observation =) I'd like a proof with fractions, not about $e$'s irrationality or whether $e$ is ...
http://cstheory.stackexchange.com/questions/16470/big-theta-extension-of-brents-theorem
# Big-Theta extension of Brent's Theorem?
Is there an extension or translation of Brent's theorem into asymptotics aside from big-$O$?
Brent's Theorem: source
The running time $f(n,p)$ of a parallel algorithm on $p$ processors, with work complexity $W(n)$ and step complexity $S(n)$, is $\leq \frac{W(n)}{p} + S(n)$. The $\leq$ lets me use $O$ directly, but not $\Omega$. If a matching lower bound also held, I'd be able to say something like:
$f(n,p) \in \frac{\Theta(W(n))}{p} + \Theta(S(n))$
It seems like it is true. Is it? I'd love to have a reference.
-
You're asking for a lower bound on the conversion of an algorithm into a parallel algorithm ? that is likely to be hard. – Suresh Venkat♦ Feb 14 at 18:29
The algorithm shouldn't need to be converted. I just want to know whether Big-Theta holds. – Kyle Feb 14 at 19:33
http://mathhelpforum.com/pre-calculus/74141-gr-12-vector-question-involving-dot-product-two-vectors-part-2-a.html
# Thread:
1. ## Gr 12. Vector Question Involving Dot Product of Two Vectors(Part 2)
I don't know if I'm being abusive posting another thread, so let me know if there is a limit on how many questions/threads I can post a day in this forum or any other, thanks! But I've got another tricky question, much trickier!
Find a unit vector that is parallel to the xy-plane and perpendicular to the vector 4i - 3j + k (4, -3, 1)
So how would I go about doing this question? Thanks
2. Originally Posted by narutoblaze
I don't know if I'm being abusive posting another thread, so let me know if there is a limit on how many questions/threads I can post a day in this forum or any other, thanks! But I've got another tricky question, much trickier!
Find a unit vector that is parallel to the xy-plane and perpendicular to the vector 4i - 3j + k (4, -3, 1)
So how would I go about doing this question? Thanks
Let the sought vector be $\textbf u=a\,\textbf i+b\,\textbf j+c\,\textbf k\text.$ The two vectors must be orthogonal, so
$4a-3b+c=0$
But $\textbf u$ is parallel to the $xy$-plane, so we must also have
$c=0\text.$
Substituting, that means $4a-3b=0\Rightarrow a=\frac34b\text.$ Choose an arbitrary pair of numbers that satisfy the equation to get a vector $c\textbf u$, and then find the unit vector in the same direction.
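A quick numerical check of that recipe; the choice $a=3$, $b=4$ below is just one convenient pair satisfying $4a=3b$:

```python
import numpy as np

v = np.array([4.0, -3.0, 1.0])        # the given vector

u = np.array([3.0, 4.0, 0.0])         # c = 0 and 4a = 3b, so (3, 4, 0) works
u = u / np.linalg.norm(u)             # normalise: (0.6, 0.8, 0.0)

print(u, np.dot(u, v))                # the dot product should be 0 (up to rounding)
```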
3. Thanks Reckoner, you're the best, and I could understand it!
I have another question hope you'll be able to answer it too!
http://math.stackexchange.com/questions/222432/special-sequences-of-fractions
# Special sequences of fractions
Since I didn’t get any answers, I will restrict my question to one type of sequence and to prime numbers. The terms of this sequence are formulated as follows:
$a_1+a_2=a_3, a_1+a_3=a_4, a_3+a_4=a_5, a_3+a_5=a_6, …….$ and so on. Let’s have the positive fractions $a_1=1/f$ and $a_2=1/t$ where $f=2$ and $t$ any prime number greater than $2$. If $t=3$ we obtain the following terms $1/2, 1/3, 5/6, 11/12, 21/12, 31/12, 52/12,……$. No term of this sequence is divisible by $12$. The same happens when $t=5$, but when $t=7$ we obtain terms which are divisible by $14$.These terms are $a_8, a_{20}, a_{32}….$. If $t=11$ then the terms $a_{14}, a_{38}, a_{62},…$ are integers. If $t=13, 19, 29$, no term is an integer but if $t=23, 31,…$ we obtain terms which are integers. If $t=17$ then the term $a_{10}$ is divisible by $2t$ and the term $a_{42}=17340002$ is almost divisible, as are the other periodic terms. I applied congruent arithmetic to find which primes will have integer terms with no results, and also modified forms of the binomial which did not help either. The only thing which is constant is the period of the terms which are integer numbers after the first integer term appears. Any direction as to how to approach this problem will be greatly appreciated.
INFORMATIVE ADDENDUM
Pythagoras used rational numbers to place 6 ratios between 1 and 2. These ratios are formulated as follows. The arithmetic mean of 1 and 2 is 3/2 and the harmonic is 4/3. By dividing these two ratios we obtain $9/8$. More ratios are obtained from the powers of $(9/8)^n$, such as $(9/8)^2, (9/8)^3, (9/8)^2*(4/3), (9/8)^3*(4/3)$.
So we have put 6 ratios between 1 and 2: $1, (9/8), (9/8)^2, (4/3), (3/2), (9/8)^2*(4/3), (9/8)^3*(4/3), 2$.
Now we will put 12 ratios between 1 and 2. Let's take the following sequence, which is obtained from the method of Theon: $2, 3, 5, 7, 12, 17, 29, 41, 70, 99, 169, 239, 408, 577, 985, 1393, 2378, 3363,\ldots$.
The first term of this sequence which is divisible by 3 is the term $a_{18}=3363$. The term $a_{17}=2378$ is also divisible by 2. From these we obtain $3363/2378\approx\sqrt2$ and $2378/3363\approx1/\sqrt2$. Now if we divide the numerator by 2 and the denominator by 3 we obtain $(2378/2)/(3363/3)\approx\frac32\cdot\frac1{\sqrt2}=\frac3{2\sqrt2}$, that is, $1189/1121\approx(9/8)^{1/2}$.
We can obtain more powers of $1189/1121$ by multiplying and dividing the terms $(a_{17}*a_{25})/(a_{18}*a_{26})$ as follows: $(2378*80782)/(2^2)/[(3363*114243)/(3^2)]=1.12499995$.
The first ratio is formulated by taking the harmonic mean of $1$ and $1.12499995$, which is $d^1=1.058823507$. By taking powers of $d^n$ we obtain the 12 ratios. So we have $1, d^1, d^2, d^3, d^4, d^5, d^6, d^7, d^8, d^9, d^{10}, d^{11}, d^{12}, 2$. If we want to put 24 ratios between 1 and 2 we then take the harmonic mean of $1$ and $1.058823507$ and we repeat the above process. This way we can put 12, 24, 48, 96, etc. ratios between 1 and 2.
Let’s formulate the terms of the sequence $k_1,k_2,\ldots,k_m$ where $m=n+3$ as follows:
$a_1+a_2=a_3$, $a_1+a_3=a_4$, $a_3+a_4=a_5$, $a_3+a_5=a_6$, $a_5+a_6=a_7$, $a_5+a_7=a_8$, . . . . . $a_n+a_{n+1}=a_{n+2}$, $a_n+a_{n+2}=a_{n+3}$.
And let's have the positive fractions $a_1=1/f$ and $a_2=1/t$ where $f,t$ are integers, not both square numbers. From these two fractions we can formulate infinitely many terms of the above sequence. For infinitely many pairs of fractions the above sequence has terms which are integers, and for other pairs it does not. These integers appear as follows: if the third term is an integer, then an integer term recurs every $2\cdot3+2=8$ terms. E.g. if $a_1=1/2$ and $a_2=1/4$ we have the terms $3/4, 5/4, 8/4, 11/4, 19/4, 27/4, 46/4, 65/4, 111/4, 157/4, 268/4,\ldots$
The terms $k_3=8/4$, $k_{11}=268/4$, $k_{19}=9104/4$ are integers. The same thing happens if we have the pair of fractions $1/2$ and $1/10$ and so on.
My questions are: How can we tell which pairs of fractions will produce integer terms and which not? Also, will these terms which are integers appear indefinitely in these sequences or are there only a finite number of integers?
-
I'm a little confused - are the general terms $a_{2r}=a_{2r-1}+a_{2r-3}$ and $a_{2r+1}=a_{2r}+a_{2r-1}$? If so then you can solve the general $a_{2r+1}=2a_{2r-1}+a_{2r-3}$, which gives the odd numbers in the sequence - and then feed that into $a_{2r}=a_{2r-1}+a_{2r-3}$ to give the even numbers. – Mark Bennet Oct 28 '12 at 4:13
The bit about fractions seems to be a distraction. Linearity means you can multiply everything in your first sequence by 4, and it still satisfies the recurrence. You are then looking at an integer sequence mod 4, which will be ultimately periodic. If it hits 0 mod 4, your fractions give you an integer. – Mark Bennet Oct 28 '12 at 4:16
Dear Mark, do you mean that the sequence above has infinitely many terms which are integers? If that is the case then a multiple of $\sqrt2$ would be equal to a rational number: the term $k_{20}=12875/4$ and $12875/4\approx2276\sqrt2$. – Vassilis Parassidis Oct 28 '12 at 5:00
You start with two rational numbers $a_1$ and $a_2$ - the formulae you have determine the rest, so they are all rational. Solving the recurrence expresses $a_n$ in the form $A_i\alpha^n+B_i\beta^n$ - where $i$ represents the parity of $n$. This formula involves multiples of $\sqrt 2$, but these cancel to give the correct rational answer. – Mark Bennet Oct 28 '12 at 7:56
Because all the terms are determined by $a_1$ and $a_2$ the highest number appearing in any denominator is determined by the least common multiple of the initial denominators (the least common denominator) "d". Multiply the $a_n$ by $d$ and you have a sequence of integers which you are investigating $\mod d$. This is necessarily periodic, and the question is whether it ever hits the residue class equivalent to zero. If it does so once (once the period has started), it will do so infinitely often. – Mark Bennet Oct 28 '12 at 8:00
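To see this reduction in action, here is a minimal Python sketch (using the pair $1/2,\ 1/4$ from the example above, so $d=4$). Clearing denominators and tracking residues mod $4$, the zero residues mark the integer terms, and they recur with period $8$:

```python
# Multiply through by the common denominator d = 4, so the sequence becomes integer-valued,
# and study it mod d: a term of the original sequence is an integer exactly when its residue is 0.
d = 4
v = [2, 1]                       # 4*(1/2), 4*(1/4)
for i in range(2, 30):           # 0-based index
    if i % 2 == 0:               # rule a_n + a_{n+1} = a_{n+2}
        v.append(v[i - 2] + v[i - 1])
    else:                        # rule a_n + a_{n+2} = a_{n+3}
        v.append(v[i - 3] + v[i - 1])

residues = [x % d for x in v]
hits = [i + 1 for i, r in enumerate(residues) if r == 0]
print(residues)   # eventually periodic
print(hits)       # [5, 13, 21, 29]: the integer terms (the question's k_3, k_11, k_19, ...) recur with period 8
```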
## 1 Answer
One family of solutions is obtained for $a_1=\frac{1}{2n}$ and $a_2=\frac{1}{n}$ for any positive integer $n$.
The integer terms are denominators of the convergents of $\sqrt{2 n^2}$, and they occur periodically.
The period of the continued fraction of $\sqrt{2 n^2}$ determines the period with which these denominators (your integer solutions) appear.
For example, for $n=6$ we get, using WolframAlpha:
Denominator[Convergents[Sqrt[2 n^2], 20]]
{1, 2, 33, 68, 1121, 2310, 38081, 78472, 1293633, 2665738, 43945441, 90556620, 1492851361, 3076259342, 50713000833, 104502261008, 1722749176961, 3550000614930, 58522759015841, 120595518646612}
ContinuedFraction[Sqrt[2 n^2], 20]
{8, 2, 16, 2, 16, 2, 16, 2, 16, 2, 16, 2, 16, 2, 16, 2, 16, 2, 16, 2}
Some PARI/GP code generating your sequences and printing the integer terms:
g(a1,a2,n)={local(v=vector(n)); print1("g[", a1,",", a2,"] = "); v[1]=a1;v[2]=a2; for(i=3,n,v[i]=v[i+i%2-3]+v[i-1]; if(denominator(v[i])==1, print1(",",v[i]); ) );print("");v}
g(1/12,1/6,100);
Output:
g[1/12,1/6] = 2, 68, 2310, 78472, 2665738, 90556620, 3076259342, 104502261008, 3550000614930, 120595518646612, 4096697633369878, 139167124015929240,...
Goodbye!
Maybe this helps a little bit.
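For readers without PARI, an equivalent sketch in Python (using the standard `fractions` module) reproduces the integer terms listed above for $a_1=1/12$, $a_2=1/6$:

```python
from fractions import Fraction

def integer_terms(a1, a2, n):
    """Build the first n terms of the sequence and return those that are integers."""
    v = [a1, a2]
    for i in range(2, n):
        if i % 2 == 0:                  # rule a_n + a_{n+1} = a_{n+2}
            v.append(v[i - 2] + v[i - 1])
        else:                           # rule a_n + a_{n+2} = a_{n+3}
            v.append(v[i - 3] + v[i - 1])
    return [t for t in v if t.denominator == 1]

print(integer_terms(Fraction(1, 12), Fraction(1, 6), 100))
# [Fraction(2, 1), Fraction(68, 1), Fraction(2310, 1), Fraction(78472, 1), ...]
```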
-
http://mathoverflow.net/revisions/118263/list
## Return to Answer
Post Made Community Wiki by François G. Dorais♦
2 deleted 136 characters in body
My comments to the main question seemed like they merit a full answer.
If we are talking about introductory level courses then the approach should be naive. That is often all the set theory needed by the working mathematician, and many of my undergrad teachers didn't even know what the axioms of ZFC are (a truth I'd learned during my masters).
It is important to add some logic into the mix: what a structure is and what an isomorphism of structures is. I can speak from my own limited experience:
I did my undergrad (and masters) in Ben-Gurion, where a course has been tailored not from books, but by collecting pieces of set theory and logic together. I have some disagreements with its current structure, but the idea is that we teach set theory naively from "ZF + the real numbers are atoms", because that's how most people would see mathematics when they approach it naively. I, for example, disagree with the insistence on not mentioning the axiom of choice. In the set theory part we explain what sets are and their basic properties, we teach about induction over $\mathbb N$ and a bit of the theory of partial orders. We also teach the basics of cardinality (as much as you can squeeze from a no-choice environment anyway).
We also teach very basic propositional calculus and predicate calculus. Nothing fancy, and we don't talk about proofs and soundness or anything much. We discuss isomorphisms and definability if time permits (e.g. this year), but we do not always have this privilege (e.g. last year).
As an intro course I think it is fine; it doesn't go too deep into axioms and what proofs are and so on. Students still don't understand what they need all that for, but the majority of the students are computer science students (the course, however, was designed over 20 years ago, when the computer science department was a subset of the math department), and they just want to learn programming and so on.
For advanced undergrad courses it is perfectly reasonable to present the full axioms of ZFC and discuss advanced topics like forcing, large cardinals, infinitary combinatorics, and maybe even [very basic] inner model theory.
This, coupled with a course about logic and basic model theory, should give an undergrad an excellent grasp of the basics of set theory.
On the other hand, it is reasonable to suggest a course about categorical foundations in which ETCS and other structural set theories are presented and an algebraic approach to set theory is taken. I don't know what sort of prerequisites there should be for such a course, though. I would expect some category theory at least, in which case it might be suitable as a basic grad-level course rather than an undergrad one.
But to repeat what I said at first, for freshman students (or for a first course in set theory) the course should be presented in a naive approach, relying on the fact that we all understand what it is to put three files into a folder on the computer, or three books in our bag. Such a course should present the basic structure of sets and some basic logic.
If a full year is given, it might be wise to make it into two parts: the first is very naive, with basic set manipulations and basic logic, with the added value of very basic combinatorial results from a discrete mathematics course. The second part should focus on slightly more advanced topics such as basic order theory (partial orders and such), cardinals and cardinality, some applications of the axiom of choice, and some more advanced logic (from definability to elementary equivalence, depending on your taste and time limits).
http://physics.stackexchange.com/questions/tagged/notation?sort=votes&pagesize=15
# Tagged Questions
The notation tag has no wiki summary.
8answers
1k views
### Is there a symbol for “unitless”?
I'm making a table where columns are labelled with the property and the units it's measured in: Length (m) |||| Force (N) |||| Safety Factor (unitless) ||| etc... I'd like not to write "unitless" ...
1answer
1k views
### Differentiating Propagator, Greens function, Correlation function, etc
For the following quantities respectively, could someone write down the common definitions, their meaning, the field of study in which one would typically find these under their actual name, and most ...
4answers
2k views
### What does Peter Parker's formula represent?
Okay, so the trailer for the new Spider-Man movie is out and apparently our friendly physicist from the neighborhood came up with something. However, I can't find out what this is. ...
2answers
298 views
### What is the circled integral?
What does the circled integral $$\oint$$ mean? I saw this symbol in a lot of books about advanced physics. What is its definition? What kind of integral is it? Is it used only in physics or also in ...
3answers
62 views
### which letter to use for a CFT?
In math, one says "let $G$ be a group", "let $A$ be an algebra", ... For groups, the typical letters are $G$, $H$, $K$, ... For algebras, the typical letters are $A$, $B$, ... I want to say ...
4answers
265 views
### Are covariant vectors representable as row vectors and contravariant as column vectors
I would like to know what are the range of validity of the following statement: Covariant vectors are representable as row vectors. Contravariant vectors are representable as column vectors. ...
2answers
176 views
### Standard notation reference
I'm searching for a compresensive and somewhat complete list of suggested standard notation (the symbols one ought to use in (theoretical) physics and also mathematics). Is there such a collection, ...
1answer
273 views
### Why is $L^2$ norm of the gradient called kinetic energy?
I'm reading Lieb-Loss's book 'Analysis', chapter 7. The authors refer to the following integral: $$\tag{1} \lVert \nabla f\rVert_2^2=\int_{\Omega}\lvert \nabla f(x)\rvert^2\, d^nx$$ as the kinetic ...
1answer
162 views
### Clarifications about Poisson brackets and Levi-Civita symbol
I need some clarifications about Poisson brackets. I know canonical brackets and the properties of Poisson Brackets and I also know something about Levi-Civita symbol (definition and basic ...
1answer
74 views
### $\pm$ (light-cone?) notation in supersymmetry
I would like to know what is exactly meant when one writes $\theta^{\pm}, \bar{\theta}^\pm, Q_{\pm},\bar{Q}_{\pm},D_{\pm},\bar{D}_{\pm}$. {..I typically encounter this notation in literature on ...
0answers
92 views
### Is it correct to sum over either index of the metric the same way?
I don't know if the following is correct, i want to compute the following derivative \frac{\partial }{\partial (\partial_{\mu}A_{\nu})}\left(\partial^{\alpha}A^{\beta}\partial_{\alpha}A_{\beta} ...
4answers
114 views
### Is there a recognised standard for typesetting quantum mechanical operators?
Firstly, I wasn't sure exactly where to put this. It's a typesetting query but the scope is greater than $\TeX$; however it's specific also to physics and even more specific to this site. I've ...
2answers
106 views
### Difference between slanted indices on a tensor
In my class, there is no distinction made between, $$C_{ab}{}^{b}$$ and $$C^{b}{}_{ab}.$$ All I know, and read about so far, is the distinction of covariant and contravariant, form/vector, etc. ...
2answers
372 views
### Bra-ket notation and linear operators
Let $H$ be a hilbert space and let $\hat{A}$ be a linear operator on $H$. My textbook states that $|\hat{A} \psi\rangle = \hat{A} |\psi\rangle$. My understanding of bra-kets is that $|\psi\rangle$ is ...
2answers
192 views
### What are $\partial_t$ and $\partial^\mu$?
I'm reading the Wikipedia page for the Dirac equation: $\rho=\phi^*\phi\,$ ...... $J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$ with the conservation of probability ...
3answers
234 views
### How to distinguish 4D and 3D vectors in handwriting?
Usually vectors are denoted with a bold font in printed books and with arrows above in handwriting. In Thorne et al.'s Gravitation, 4D vectors are denoted with bold and 3D vectors with bold italic. How to ...
1answer
173 views
### Difference between $\partial$ and $\nabla$ in general relativity
I read a lot in Road to Reality, so I think I might use some general relativity terms where I should only special ones. In our lectures we just had $\partial_\mu$ which would have the plain partial ...
1answer
97 views
### What does the notation $c = [1:\beta]$ mean?
I have been reading a online-book/blog/material on Quantum Mechanics, when I encountered a notation on a page and I have no idea what it means. See if you can help. Here's the link and follows the ...
2answers
85 views
### Double Pendulum
The equations of motions for the double pendulum is given by $$\dot{\theta_1} = \frac{6}{ml^2}\frac{2p_{\theta1} - 3\cos(\theta_1 - \theta_2)p_{\theta2}}{16 - 9\cos^2(\theta_1 - \theta_2)}$$ and ...
1answer
180 views
### Meaning of $d\Omega$ in basic scattering theory?
In basic scattering theory, $d\Omega$ is supposed to be an element of solid angle in the direction $\Omega$. Therefore, I assume that $\Omega$ is an angle, but what is this angle measured with respect ...
1answer
49 views
### Scalar top quark (stop) pair production
A rather simple question: Starting from an electrically neutral state, pairs of top quarks are produced as top and anti-top, and denoted as $t\bar t$. Now the production of pairs of scalar top ...
2answers
142 views
### Inner Product Spaces
I am trying to reconcile the definition of Inner Product Spaces that I encountered in Mathematics with the one I recently came across in Physics. In particular, if $(,)$ denotes an inner product in ...
1answer
57 views
### Uncertainty writing
This will sound like a silly question, but I don't recall that my professors ever though me what this means. For example: X=1.2345(6) units This is uncertainty, that much I do know, but does it ...
2answers
251 views
### Meaning of subscript in $V=\frac{1}{2}\left(\frac{d^2 V}{{dq_i}{dq_j}}\right)_0$
This is probably a simple question, but what does the subscript $0$ mean in the following expression? $$V=\frac{1}{2}\left(\frac{d^2 V}{{dq_i}{dq_j}}\right)_0$$
2answers
305 views
### Correct application of Laplacian Operator
Not a physicist, and I'm having trouble understanding how to apply the Laplacian-like operator described in this paper and the original. We let: \hat{f}(x) = f(x) + \frac{\int H(x,y)\psi(y) ...
2answers
59 views
### Another question about Shankar's notation
I have another question on the notation in Shankar. I think it's sloppy, but I also may just be misunderstanding it. Again, this is at the very beginning of the math intro. He has: a\left| V ...
2answers
149 views
### In Dirac notation, what do the subscripts represent? (Solution for particle in a box in mind)
So the set of solutions for the particle in a box is given by $$\psi_n(x) = \sqrt{\frac{2}{L}}\sin(\frac{n\pi x}{L}).$$ In Dirac notation $<\psi_i|\psi_j>=\delta_{ij}$ assuming $|\psi_i>$ ...
2answers
96 views
### SI units with more than one prefix in fractions
Is it (in the view of SI) correct to note units with more then one prefix? I discuss this since several months with friends, but we could not find a proper source for our statements yet. Examples for ...
1answer
156 views
### Symbol for dashpot/damper (in a harmonic oscillator)
In diagrams that contain the dashpot symbol, sometimes the mass is attached to the "interior" end of the dashpot, other times the mass is attached to the "base" end. For example, consider the ...
1answer
36 views
### Why distinguish between row and column vectors?
Mathematically, a vector is an element of a vector space. Sometimes, it's just an n-tuple $(a,b,c)$. In physics, one often demands that the tuple has certain transformation properties to be called a ...
6answers
406 views
### Is H=H* sloppy notation or really just incorrect, for Hermitian operators?
I saw it in this pdf, where they state that $P=P^\dagger$ and thus $P$ is hermitian. I find this notation confusing, because an operator A is Hermitian if \$\langle \Psi | A \Psi \rangle=\langle A ...
2answers
242 views
### Question with Einstein notation
Let’s consider this equation for a scalar quantity $f$ as a function of a 3D vector $a$ as: $$f(\vec a) = S_{ijkk} a_i a_j$$ where $S$ is a tensor of rank 4. Now, I’m not sure what to make of the ...
4answers
168 views
### Is there a default notation for 4-vectors while handwriting?
In printed paper 3-vectors can be denoted bold italic while 4-vectors can be denote just bold. While handwriting 3-vectors are denoted by arrows above letters. Is there a similar way to denote ...
1answer
148 views
### Rocket drive and conservation of momentum
I am currently reading through some lecture notes of Physics 1 and in a chapter about the dynamics of the mass point, there is an example covering the rocket drive. Let $v$ be the velocity of the ...
2answers
209 views
### Question on notation in Shankar's Quantum Mechanics - math intro on vector spaces
I'm just beginning Shankar's 2nd edition Quantum Mechanics and having some trouble with notation. He defines his vectors as "$\left|V\right>$" . And with a scalar multiplier as "$a\left|V\right>$" . ...
2answers
160 views
### Why no basis vector in Newtonian gravitational vector field?
In my textbook, the gravitational field is given by$$\mathbf{g}\left(\mathbf{r}\right)=-G\frac{M}{\left|\mathbf{r}\right|^{2}}e_{r}$$ which is a vector field. On the same page, it is also given as a ...
1answer
691 views
### What does $\Psi^*$ mean in Schrodinger's Equation?
I am not a physics student. In one of my courses, some fundamental concepts of quantum mechanics were needed, so I was going through them when I stumbled upon this. It says \text{probability} = ...
2answers
223 views
### Notation of plane waves
Consider a monochromatic plane wave (I am using bold to represent vectors) $$\mathbf{E}(\mathbf{r},t) = \mathbf{E}_0(\mathbf{r})e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)},$$ ...
1answer
59 views
### What is $k_B$ in the context of this question?
Answering the following question 1000 atoms are in equilibrium at temperature T. Each atom has two energy states, $E_1$ and $E_2$, where $E_2 > E_1$ . On average, there are 200 atoms in the ...
1answer
276 views
### state vector notation
I've never taken a quantum mechanics class, but I find myself now using principles developed in the quantum theory of angular momentum. One particularly confusing aspect that I'm struggling with is ...
1answer
31 views
### Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$
Lets say we have a complex vector $\vec{z} \!=\!(1\!+\!2i~~2\!+\!3i~~3\!+\!4i)^T$. Its scalar product $\vec{z}^T\!\! \cdot \vec{z}$ with itself will be a complex number, but if we conjugate the ...
1answer
260 views
### Wave function and Dirac bra-ket notation
Would anyone be able to explain the difference, technically, between wave function notation for quantum systems e.g. $\psi=\psi(x)$ and Dirac bra-ket vector notation? How do you get from one to the ...
1answer
432 views
### What does y with a line over it represent?
I've been asked to complete this chart and have never come across this symbol before, nor can I find anything about it on google: http://postimage.org/image/oe7hb9cy3/ What does the y with the line ...
2answers
124 views
### Notation for differential operators and wave function math
I know that $[\frac {d^2}{dx^2}]\psi$ is $\frac {d^2\psi}{dx^2}$ but what about this one $[\frac {d^2\psi}{dx^2}]\psi^*$? Is it this like $\frac {d^2\psi\psi^*}{dx^2}$ or this like \$\frac ...
0answers
44 views
### Reaction coordinate as a function of atomic positions
I'm going over some (molecular dynamics) related literature - specifically the derivation of the Weighted Histogram Analysis Method (WHAM). As a quick backdrop WHAM is a method for stitching ...
2answers
138 views
### Is the letter delta generally only used to express change in variable or quantity?
I was speaking with a friend of mine earlier and he said "Oh look, delta, the sign of uncertainty" (he doesn't study physics often so had only seen in in Heisenberg's Uncertainty Principle equations). ...
2answers
189 views
### Subshell notation for hydrogen cation?
Looking at $s$,$p$,$d$ configuration for atoms & ions: Since a hydrogen cation $H^+$ has no electron, how would the subshell notation be written? My best estimate would be $1s^0$.
3answers
162 views
### Why is 'the period' marked as letter T?
I'm not a native English speaker and I was wondering, why 'the period' got the letter $T$. I've asked myself the question when I was thinking about stuff related to the frequency. I.e.: $f$ - ...
2answers
182 views
### How is an arbitrary operator usually denoted in quantum mechanics?
Which symbols are usually used to denote an arbitrary operator in quantum mechanics, such as O in the following example? $O \mbox{ is Hermitian} \Leftrightarrow \Im{\left< O \right>} = 0$
2answers
75 views
### When do I apply Significant figures in physics calculations?
I'm a little confused as to when to use significant figures for my physics class. For example, I'm asked to find the average speed of a race car that travels around a circular track with a radius of ...
http://stats.stackexchange.com/questions/13939/excels-chidist-function-in-matlab-for-large-values/13955
# Excel's CHIDIST function in MATLAB for large values
This question is related to Excel's CHIDIST function in MATLAB
The minimum answer larger than zero I can get in Excel is about 2.2e-308, which corresponds to `realmin` in MATLAB. However, in MATLAB the minimum answer is 2.2e-16, or `eps(1.0)`. I understand this is because I first calculate `chi2cdf`, which is close to 1 for large values, and the precision in that case is `eps(1.0)`.
Is there any other way to reproduce Excel's CHIDIST without losing precision? Do you know exactly how CHIDIST works?
EDIT:
I have to do this test multiple times, and in many cases the resulting p-values are close to zero. I need to rank the p-values, this is why I need high precision in p-value calculation.
-
I am assuming that this is merely out of curiosity, as I can't imagine a "real" example where something like this matters. So if you are worried about this loss of precision for some real world problem, don't be. It is still interesting to know these sorts of things though :) – probabilityislogic Aug 6 '11 at 18:25
I'm unfamiliar with Matlab, but I'm confident it can calculate an incomplete Gamma integral to very high precision. See the 'upper' option at mathworks.com/help/techdoc/ref/gammainc.html. – whuber♦ Aug 6 '11 at 18:38
@probabilityislogic: Actually I do need this precision. Later I take log of p-values, so I get many Inf values, if p-value is 0. – yuk Aug 7 '11 at 2:23
– yuk Aug 7 '11 at 2:39
@yuk - You should really add this information to the question: what you really require is a good approximation to the log of the p-value. In any event, I still struggle to see why simply giving an arbitrarily big value will not suffice. Your minimum value is $-16\log(2.2)$, so just take $\log(0)=-20$ or bigger – probabilityislogic Aug 7 '11 at 3:45
## 1 Answer
A solution was developed in the comments following the question; the purpose of this post is to make that solution available as a reply so it can be searched, voted on, etc.
Chi-square distribution functions are scaled incomplete Gamma integrals. If you compute the integral from $0$ to $x$, you approach $1$ as $x$ gets large. Subtracting this value from $1$ loses ever more precision; with double-precision arithmetic, you're limited to results greater than about $10^{-16}$. The trick is to compute the integral from $x$ to $\infty$, which typically can be approximated accurately even for extremely small values.
The Matlab help for the incomplete Gamma integral indicates that this upper tail will be computed using its 'upper' option.
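The same upper-tail route is available elsewhere too; for instance, in Python's SciPy the survival function computes the tail integral directly and so avoids the $1-\mathrm{cdf}$ cancellation. A minimal sketch:

```python
from scipy.stats import chi2

x, df = 200.0, 1

# Naive route: the CDF rounds to 1.0 in double precision, so the difference underflows to 0.
print(1.0 - chi2.cdf(x, df))   # 0.0

# Upper-tail route: the survival function returns a tiny but nonzero p-value (around 1e-45 here).
print(chi2.sf(x, df))

# If only the ranking of p-values is needed, one can work on the log scale directly.
print(chi2.logsf(x, df))
```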
-
1
Thanks a lot! To be more MATLAB specific, the function `chi2cdf` calls `gamcdf`, which calls `gammainc`. In `gamcdf` I've changed the line `p = gammainc(z, a);` to `p = gammainc(z, a, 'upper');`, saved it under another name and changed the call in `chi2cdf`. Works great! – yuk Aug 8 '11 at 3:47
http://physics.stackexchange.com/questions/29611/how-much-is-important-the-role-of-planck-length-in-the-strings-theory
# How important is the role of the Planck length in string theory?
This is the Planck length: $$\ell_p=\sqrt{\frac{G\hbar}{c^3}}.$$
1. How important is the role of this length in string theory?
2. Is this Planck's length or Newton's length, or maybe both?
-
## 1 Answer
Concerning the second question, the Planck length is the Planck length and not Newton's length (yes, the OP has asked this question). Newton didn't know Planck's constant, which was only discovered 2+ centuries later, so he could discuss neither Planck's constant nor the Planck length and the other natural units which are functions of Planck's constant.
Max Planck realized the importance of Planck's constant $h$ – usually used in the form of the reduced Planck constant i.e. Dirac constant $\hbar=h/2\pi$ – for the black body formula he derived (it appeared previously in high-energy approximations of the black body formula). He was also able to figure out that any quantity (with any units) has a natural unit which may be written as a product of powers of three fundamental universal constants, $\hbar,c,G$. So he derived all the natural units or Planck units such as the Planck length, Planck mass, Planck time, and products of their powers.
What you wrote is the unique product of such powers of $\hbar,c,G$ that has the unit of length: you should be able to verify it is approximately $1.6\times 10^{-35}$ meters. Because it's a length that is calculated purely from constants that are totally natural, and "naturally equal to one", in quantum mechanics ($\hbar$), relativity ($c$), and a theory of gravity ($G$), it's a length that is likely to play a prominent role in any theory that addresses quantum mechanics, relativity, and gravity, i.e. any theory of quantum gravity.
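As a quick numerical sanity check (with approximate values of the constants), the formula in the question indeed gives about $1.6\times 10^{-35}$ meters:

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2 (approximate)
hbar = 1.0546e-34   # reduced Planck constant, J s (approximate)
c    = 2.998e8      # speed of light, m/s (approximate)

print(math.sqrt(G * hbar / c**3))   # roughly 1.6e-35 (meters)
```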
String theory is a theory of quantum gravity (the only known theory of quantum gravity that is free of internal contradictions, in fact), so the Planck length is important in string theory, too. Well, because string theory contains extra dimensions, a more fundamental constant could be a "higher-dimensional Planck length" which is comparable to the usual Planck length you wrote down – but could also be very different if some of the extra dimensions of space are either hugely warped or very large (relatively to the Planck length).
Previous question about the Planck length and my answer about its relevance is here:
How to get Planck length
-
http://polymathprojects.org/2011/02/13/can-bourgains-argument-be-usefully-modified/?like=1&_wpnonce=76a2d9cd23
# The polymath blog
## February 13, 2011
### Can Bourgain’s argument be usefully modified?
Filed under: Improving Roth bounds — gowers @ 6:23 pm
I’ve been feeling slightly guilty over the last few days because I’ve been thinking privately about the problem of improving the Roth bounds. However, the kinds of things I was thinking about felt somehow easier to do on my own, and my plan was always to go public if I had any idea that was a recognisable advance on the problem.
I’m sorry to say that the converse is false: I am going public, but as far as I know I haven’t made any sort of advance. Nevertheless, my musings have thrown up some questions that other people might like to comment on or think about.
Two more quick remarks before I get on to any mathematics. The first is that I still think it is important to have as complete a record of our thought processes as is reasonable. So I typed mine into a file as I was having them, and the file is available here to anyone who might be interested. The rest of this post will be a sort of digest of the contents of that file. The second remark is that I am writing this as a post rather than a comment because it feels to me as though it is the beginning of a strand of discussion rather than the continuation of one, though it grows out of some of the comments made on the last post. Note that since we are operating on the Polymath blog, anybody else is free to write a post too (if you are likely to be one of the main contributors, haven’t got moderator status and want it, get in touch and I can organize it).
The starting point for this line of thought is that the main difficulty we face seems to be that Bourgain’s Bohr-sets approach to Roth is in a sense the obvious translation of Meshulam’s argument, but because we have to make a width sacrifice at each iteration it gives a $(\log N)^{-1/2}$ type bound rather than a $(\log N)^{-1}$ type bound. Sanders’s argument gives a $(\log N)^{-1}$ type bound, but if we use that then it is no longer clear how to import the new ideas of Bateman and Katz. Therefore, peculiar as it might seem to jettison one of the two papers that made this project seem like a good one in the first place, it is surely worth thinking about whether the width sacrifice that Bourgain makes (and that is also made in subsequent refinements of Bourgain’s method, due to Bourgain and Sanders) is fundamentally necessary or merely hard to avoid.
After thinking about this question in somewhat vague terms for quite a while, I have now reached a more precise formulation of it. To begin with, I want to avoid the technical issue of regularity, which can be thought of as arising from the fact that a lattice is a discrete set and therefore behaves a little strangely at small distance scales. We sort of know that that creates only technical difficulties, so if we want to get a feel for what is true, then it is convenient to think of a Bohr set as being a symmetric convex body in $\mathbb{R}^d$ for some $d.$
The question I want to consider is this. Let $B$ be a convex body in $\mathbb{R}^d,$ let $\epsilon>0,$ and let $A$ be a subset of $B$ of relative density $\alpha$ that contains no 3AP with common difference of length greater than $\epsilon.$ (This last condition is needed, since every set of positive measure contains a non-trivial 3AP, by the Lebesgue density theorem. It can be thought of as admitting that our set-up isn’t really continuous but just looks continuous at an appropriate distance scale.) Is it possible to show that there is a density increase of around $\alpha^2$ on a structured subset of $B$ of comparable width?
Actually, what I really want is (I think — I haven’t checked this formulation as carefully as I should have) a trigonometric function $\tau(x)=e(x.y)$ such that $|\langle \mu_B(\alpha-1_A),\tau\rangle|\geq c\alpha^2$ for some absolute constant $c>0.$ The reason this would be nice is that we could then pass to a subset $B'$ of $B$ on which $A$ would have increased density, and the width of $B'$ would be comparable to that of $B.$ The set $B'$ would no longer be convex: it would look more like a union of parallel slabs cut out of $B.$ But it would be Freiman isomorphic to something like a “convex body” in $\mathbb{R}^d\times\mathbb{Z}.$ Going back to Bohr sets, we ought to have no trouble getting from a $d$-dimensional Bohr set to a $(d+1)$-dimensional Bohr set of the same width. And that would be much more like the Meshulam set-up where the codimension increases and that is all.
Reasons to be pessimistic.
Let me try to put as strongly as I can the argument that there is no hope of getting a density increase without a width sacrifice.
To begin with, think what a typical 3AP looks like. For the purposes of this argument, I’ll take $B$ to be a sphere in $\mathbb{R}^d.$ Since $B$ is convex, the average of two points in $B$ always lies in $B.$ Therefore, there is a one-to-one correspondence between pairs of points in $B$ and triples of points in $B$ that form a 3AP. What does the average of two random points of $B$ typically look like? Of course, it can be any point in $B,$ but if $B$ is high-dimensional, then a random point in $B$ is close to the boundary, and a random second point in $B$ is not only also close to the boundary, but it is approximately orthogonal to the first point (assuming that $B$ is centred at the origin). Therefore, the average of the two points typically lives close to a sphere of radius $1/\sqrt{2}$ times the radius of $B.$ Therefore, if we take $A$ to be the set $B\setminus(3B/4),$ we have a set of measure exponentially close to 1 (by which I mean exponentially in the dimension of $B$) with exponentially fewer 3APs than there are in $B.$
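A quick Monte Carlo check of this concentration heuristic (a minimal Python/NumPy sketch, with $d=500$ chosen arbitrarily): sample pairs of uniform random points in the unit ball and look at the norms of their midpoints, which cluster near $1/\sqrt2\approx0.707$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 500, 2000

def uniform_in_ball(n):
    # Uniform samples in the unit ball of R^d: Gaussian direction times a U^(1/d) radius.
    g = rng.standard_normal((n, d))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = rng.random(n) ** (1.0 / d)
    return directions * radii[:, None]

x, y = uniform_in_ball(trials), uniform_in_ball(trials)
midpoint_norms = np.linalg.norm((x + y) / 2, axis=1)

print(midpoint_norms.mean())   # close to 1/sqrt(2), about 0.707, for large d
print(midpoint_norms.std())    # small: the midpoints concentrate near radius 1/sqrt(2)
```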
What this simple example shows is that if we want to obtain a density increase, it will not be enough to use the fact that $A$ contains few 3APs — we will have to use the fact that it contains no 3APs. Even having exponentially few 3APs doesn’t help. So a straightforward Roth-style Fourier manipulation doesn’t work.
Pushing this example slightly further, even the set $B\setminus((1-\alpha/d)B)$ has measure roughly $\alpha.$ What can we say about the 3APs it contains? They live close to the boundary of a sphere, and that forces them to have small common difference. So one way that we might exploit the fact that $A$ has no 3APs rather than just very few 3APs would be just to count 3APs with a small common difference (the smallness depending on both $\alpha$ and $d$). Since there are exponentially fewer of these than there are 3APs in general, we really would be using more than just that there are exponentially few 3APs.
But if we restrict attention to 3APs with small common difference, can we hope to find a “global” density increase, as opposed to the “local” density increase one would obtain by passing to a smaller-width Bohr set? Consider what happens, for instance, if one has a subset of $\mathbb{Z}_N$ with the wrong number of 3APs with small common difference. If “small” means “at most $m$” and $m$ is substantially less than $N,$ then one can take a fairly random union of intervals of length $10m,$ say. This will have many more than its fair share of 3APs of common difference less than $m,$ but because of the randomness we will not detect any global correlation with a trigonometric function.
Reasons to be less pessimistic.
We seem to be in a difficult situation: one example appears to force us to consider small common differences (though in fact Bourgain doesn’t, because instead of restricting the difference of the 3AP he restricts its central element to lie in a small Bohr set), while another example appears to suggest that from the fact that there are no 3APs with small common difference one cannot conclude that there is global correlation with a trigonometric function.
However, there is a mismatch between the two examples, and at the moment I cannot rule out that the mismatch is pointing to something fundamental. The mismatch is this: the first example (where we take a sphere and remove the heart) works because we are in a high dimension, whereas the second works because we are in a low dimension.
Let me explain what I mean. The first example relied strongly on measure concentration, so it is clear that it needed us to be in a high dimension. As for the second, it relied on our being able to say that if you take two points $x,x+d$ in the set with $d$ small, then $x+2d$ is likely to be in the set. To achieve that, we took a union of balls of radius quite a bit larger than the smallness of the small common differences. But in high dimensions, if you want to be able to conclude from the fact that $x$ and $x+d$ both belong to some ball that $x+2d$ also probably belongs to that ball, then you need the radius of the ball to be much larger: a constant factor is nothing like good enough. (Why? Because the two points will almost always be on the boundary and as far away as the smallness condition allows. And since the boundary is “curved on average”, or something like that, $x+2d$ will not then be in the ball unless the radius is large enough for the boundary to feel flat at that distance scale.)
What interests me about this is that the smallness you need in order to deal with the first example seems to be very closely related to the smallness you need in order to make the second example work. In the first case, you need to drop down to a distance scale $D$ with the property that if you take two typical points $x,x+d$ in $B$ (which will be near the boundary) then $x+2d$ will typically belong to $B$ as well. If we now try to create an example of the second type out of balls of radius $D,$ then in order to get it to work, we need to have balls $B'$ that are large enough to have the property that … if $x,x+d$ belong to $B'$ then $x+2d$ probably does as well. In other words, we seem to be forced to take our sub-balls of $B$ to be as big as $B$ itself.
Who is correct, the optimist or the pessimist?
One problem with the optimist’s argument above is that it is qualitative. I argued qualitatively that the very high-dimensionality that makes the first example work stops the second example working. But it is conceivable that one might manage to turn that into a rigorous argument that showed that one could get away with dropping the width by a factor of 2 instead of something like $\alpha/d,$ and that, it turns out, would produce only $\log\log$-type savings in the final bound.
But if the optimist is correct, then a natural question arises: how would one go about turning that qualitative argument into a rigorous and quantitative proof that not having small APs leads to a global correlation with a trigonometric function? One’s first instinct is to think that it would be necessary to classify Bohr sets according to their “true” dimension, or something like that — which would be difficult, as the structure of a Bohr set depends in subtle ways on the various linear relations between the characters that define it. If we take it as read that any such classification would be bound to lose constants in a way that would destroy any hope of the kind of exact result we would need, what does that leave?
The main thing we have to decide is our “distance scale”. That is, we are given a Bohr set $B,$ and we need to define some other set $B'$ and restrict attention to 3APs with common difference in $B'.$ Or perhaps we will prefer to choose something more general like a probability measure $\lambda$ that is concentrated on “small” values, and choose our common difference $\lambda$-randomly. But how do we make that choice without understanding all about $B?$
The only possible answer I can think of is to define $B'$ or $\lambda$ in some simple way in terms of $B$ that is designed to give you sensible answers in the cases we understand. For instance, if $B$ is actually a subspace of $\mathbb{F}_3^n$ then we want to consider all differences, so we want $\lambda$ to be the characteristic measure of $B.$ And if $B$ is a $d$-dimensional sphere, then we want $\lambda$ to be a ball of radius chosen such that if you take a “shell” of $B$ of measure $\alpha,$ then an average 3AP will have common difference comparable to that radius.
It looks more and more as though it would be necessary to consider not just Bohr sets in isolation, but Bohr sets as members of ensembles. Fortunately, thanks to work of Ben Green and Tom Sanders, we have the idea of a Bourgain system to draw on there.
Let me give one further thought that makes me dare to hope a little bit. It seems that quite a lot of our problems are caused by the fact that high-dimensional Bohr sets have boundaries, and measure concentrated on those boundaries. But if we pass to a “shell” (by which I mean a set that is near the boundary) then it does not have a boundary. (By the way, any argument that seems to be making us consider a spherical shell is sort of interesting, given that it raises the hope of ultimately connecting with the Behrend lower bound.) In order to think about what happens when we are in a high-dimensional set with no boundary, let us now suppose we are in the group $\mathbb{Z}_N^d.$ We are immediately encouraged to note that if a set in this group contains no 3APs, then we get correlation with a trigonometric function by the usual Fourier argument.
What happens, though, if we restrict the common difference to be small in some sense? I’m not sure, but let me at least do the Fourier calculation. It is not hard to check that $\mathbb{E}_{x,d}f(x)f(x+d)f(x+2d)\lambda(d)$ transforms to $\sum_{r,s}\hat{f}(r-s)\hat{f}(-2r+s)\hat{f}(r)\hat{\lambda}(-s),$ which can also be written as $\sum_{r,s}\hat{f}(r+s)\hat{f}(-2r)\hat{f}(r-s)\hat{\lambda}(2s)$ or as $\sum_{r+s+t=0}\hat{f}(r)\hat{f}(s)\hat{f}(t)\hat{\lambda}(r-t).$
In low dimensions I would normally deal with such a sum by taking $\lambda$ to be an AP with smoothed edges so that it had absolutely summable Fourier coefficients (which could easily be arranged to be real and non-negative). And then I would simply use averaging to say that there must be some value of $r-t$ for which the sum is large. In high dimensions this is not good enough: the sum of the coefficients is exponential in the dimension, so the density increase we would get would be exponentially small. So how would we exploit the high-dimensionality? (Or perhaps it just isn’t the case that a local-ish 3AP count implies a global correlation.) I have just the vaguest of ideas here, which is that in high dimensions the set of places where the Fourier coefficients of $\lambda$ are large is fairly dissociated. Perhaps one can show somehow that it is not possible for $\sum_r\hat{f}(r+s)\hat{f}(-2r)\hat{f}(r-s)$ to be large for many $s$ that form a dissociated set, so that the only way for the whole sum to be large is if there is some $s$ for which it is very large. In other words, perhaps we can show that the naive averaging argument really is very inefficient.
I’m going to leave this here, but let me quickly make a remark about the pdf file that I linked to at the beginning of this post. It is not meant to be anything like a polished document, which means it shouldn’t even necessarily be assumed to be correct. In fact, at the top of page 11 I made quite an important mistake: the expression I wrote down is not the probability I said it was; to get that probability one needs to replace the third $\nu_B$ by an average of some characteristic functions rather than characteristic measures, and that means that the approach works rather less neatly than I had hoped it would.
## 15 Comments »
1. [...] me update this. There is a new post. Also see these [...]
Pingback by — February 13, 2011 @ 9:42 pm
2. Here is a toy problem that may possibly capture the kind of thing I am asking. I’ll work in the group $G=\mathbb{Z}_N^d.$ Let $B$ be the Bohr set $[-N/8,N/8]^d$ (there is nothing special about the number 8 here), let $\mu_B$ be the characteristic measure of $B,$ and let $\lambda=\mu_B*\mu_B.$ (There is not even anything particularly special about $\lambda$: I want something Bohr-set-like, but thought it might be convenient if it had more quickly decaying Fourier coefficients.) Suppose that $f:G\to\mathbb{C}$ is a bounded function (that is, $\|f\|_\infty\leq 1$) such that on average the product of $f$ with a translate of $\lambda$ correlates with a trigonometric function. Does it follow that $f$ itself correlates with a trigonometric function?
I’m not quite sure how to make that precise when $f$ is an arbitrary function, but let us suppose that $f=1_A$ for a subset $A\subset G$ of density $\alpha.$ Then I can give a quantitative version of the question as follows. For each $x,$ define the trigonometric bias $\tau(x)$ of $A$ with respect to the translate $T_x\lambda$ (where $T_x\lambda(y)=\lambda(y-x)$) to be the maximum value of $|\langle T_x\lambda(1_A-\alpha),\chi\rangle|,$ where $\chi$ is a character on $G.$ Suppose now that the average trigonometric bias is at least $\rho\alpha.$ Does it follow that there exists a non-trivial character $\chi$ such that $|\hat{1}_A(\chi)|\geq c\rho\alpha$ (at least if $\rho\geq\alpha$)?
A simple argument gives a positive answer but with $c$ depending exponentially on the dimension $d.$ The point of the question is to try to remove the dependence on $d.$ And the reason it might conceivably be possible to do something like that is that obvious constructions such as partitioning $\mathbb{Z}_N^d$ into $2^d$ boxes of side length $N/2$ and making $A$ correlate with a random trigonometric function in each box simply don’t work: a typical translate of $\lambda$ intersects all the boxes, so we don’t get the local correlation.
I’m not claiming that a good result along these lines solves all our problems, but I do think that it would suggest strongly that this approach has possibilities.
Comment by — February 14, 2011 @ 10:06 am
3. Let me suggest an approach to proving what I want. I would like to show that if $g$ is a bounded function defined on $\mathbb{Z}_N^d$ (that is, it takes values of modulus at most 1), then with high probability if you choose a random $y\in\mathbb{Z}_N^d$ the inner product $\mathbb{E}_xg(x)\lambda(x-y)$ is close to $\mathbb{E}_xg(x).$ I haven’t quite found the calculation that shows this, but it feels as though the first expectation may be enough like taking an independent random sample of values of $g$ to make the result true. If so, it might be the beginning of the local-to-global principle (I don’t mean quite what most people mean when they use that phrase) that I am looking for.
Comment by — February 14, 2011 @ 12:03 pm
• It might have been clearer if I had said something like “$d$-wise independent” rather than “independent” above.
Comment by — February 14, 2011 @ 12:05 pm
• Sorry — it was a stupid question in that it is trivially false. The counterexample is to take $g$ to be a non-trivial trigonometric function that correlates with $\lambda.$ What I think I want is to show that this sort of example is essentially the only kind. No time now, but I’ll have to think about a better formulation of the question.
Comment by — February 14, 2011 @ 12:26 pm
• I think the kind of statement I’m going to want is that the only counterexamples are fairly “continuous” functions, which will necessarily have large Fourier coefficients (and not just any old Fourier coefficients either).
Comment by — February 14, 2011 @ 3:28 pm
4. I have a question as an “observer”. Is it clear why the epsilon gained by Michael and Netz is necessarily small, or can we perhaps hope for a large epsilon as well? (If so, maybe we can start with the 2/3 exponent anyway, or even the $\log\log N$ Roth bound.) A related question: does the argument of Michael and Netz allow adding an epsilon to Bourgain's 2/3? (I am aware that better results are known, but this still seems interesting.)
Comment by — February 14, 2011 @ 6:55 pm
Tom Sanders has also asked this question. He is pretty sure there wouldn't be much problem (apart from a certain necessary technical slog) in improving any of the 1/2, 2/3 or 3/4 bounds by the Bateman-Katz epsilon. It seems that that epsilon is necessarily very small if you stick closely to their argument. Whether one can be more precise and say that there is a certain barrier beyond which one cannot go without a new idea, I do not know.
Comment by — February 14, 2011 @ 7:54 pm
• Gil–
Someone asked similar questions via email earlier. Here was my comment then:
————
About the size of epsilon in the BK paper: Section 6 in particular requires epsilon to be rather microscopic. This is the structural theorem for sets with some additive energy but no additive smoothing. As epsilon grows, the sharpness of the nonsmoothing hypothesis goes away; and over the course of the proof of this theorem, the nonsmoothing parameter gets magnified a number of times, leaving a vacuous theorem unless the nonsmoothing parameter (and consequently epsilon) is very small.
The reason the nonsmoothing hypothesis can disappear as epsilon grows is that the size of the spectrum can increase substantially (specifically, it could be as large as (density)^{-3}). Then the random selection argument (at least in its current form, which chooses sets of size about N) becomes a less effective way of estimating the number of m-tuples in the spectrum.
————-
I have no idea whether you have looked at our paper, so I’ll just ramble a bit more in case it’s helpful. [BK] includes a theorem for sets that are not “additively smoothing”. A set S is said to be additively smoothing if its difference set has more structure than itself. One way to detect this is by comparing the L^p norms of the Fourier transform of the set. (In fact this is how we define it precisely.) For p even, the L^p norm counts the number of p-tuples in the set S. By Holder’s inequality, an estimate on the number of (say) 4-tuples gives us a free estimate on the number of (say) 8-tuples. If there are only this many 8-tuples, we say the set is NOT smoothing. If there are many more, we say the set IS smoothing. For example, consider a set of size M randomly distributed in a subspace of size M^{1+c} for some 0<c<1. This is the additively smoothing example because its difference set is all of the subspace, which has perfect additive structure. A non-smoothing example is something like a bunch of (smaller) subspaces with no relation to each other. Here the set of (popular) differences is just the set we started with, which has (trivially) no additional additive structure.
Epsilon being small allows us to conclude that the spectrum is nonsmoothing. The paper includes a structural theorem for nonsmoothing sets. Large epsilon means smoothing is a possibility. If there were an equally strong structural theorem for sets with additive smoothing, there might be some hope; but such a theorem would not be a trivial extension of our result.
Michael
Comment by Michael Bateman — February 14, 2011 @ 7:58 pm
• (My apologies in advance for the poor formatting; I realized my faux pas with not enough time to fix it before tomorrow.)
Here are some half-baked thoughts about possibly increasing the size of epsilon.
My knowledge of covering lemmas and related technology is a bit thin. We already have a structural theorem for sets S such that S is not smoothing. Could we possibly use this to prove a structural theorem for sets S such that, say, 2S is not smoothing? how about 4S? etc. This seems like an interesting question because of the following sketch.
As mentioned above, when $\epsilon$ grows the spectrum may become additively smoothing, which is a problem (at least for the existing technology). What I notice, however, is that even when $\epsilon$ is large, we may still conclude (something like?) that one of the iterated difference sets is not additively smoothing. By using the random selection argument as in [BK], we can conclude that the number of $2m$-tuples is less than
$N^{4m + mO(\epsilon)}.$
Since the number of 4-tuples is known to be
$N^{7-O(\epsilon)}$
in this case, it is possible for the number of 8-tuples to be as large as
$N^{15+O(\epsilon)}$
without reaching a contradiction. As $\epsilon$ grows, this ruins the non-smoothing of the spectrum.
Write $E_{2m}$ to denote the number of $2m$-tuples in the spectrum $S$. Suppose we have a smoothing-like condition at every “step”. More precisely, suppose $E_{2m}$ is larger than the trivial Holder bound over $E_m$ plus an additional factor of $E_m^{\beta}$ for some small number $\beta$. If this is true for $m=3,4,5,\dots,M$, where $M$ depends on $\beta$ and $\epsilon$, then we contradict the estimate $E_{2M} < N^{4M + MO(\epsilon)}$ mentioned above that we obtained from the random selection argument. Hence we must not have this additional $\beta$ smoothing for every $m$. If we write $S$ for the spectrum, this seems to say something an awful lot like “$2mS$ is additively non-smoothing”.
Comment by Michael Bateman — February 15, 2011 @ 1:13 am
5. Dear Tim,
here are just some vague thoughts I had after reading your post and .pdf-file, which I’d like to write down here – basically in order to see whether I understood those things properly. Yet before getting started I should perhaps say that although this comment is unfortunately somewhat long, it doesn’t give any new ideas, it’s just sort of a reformulation of some things you’ve already said.
It seems that one may look at any argument proving a version of Roth’s theorem that involves density increments as a method (in a not too algorithmic sense of this word) to actually find a 3AP in a set satisfying the requested density bound by making successively better guesses as to where such a 3AP could be. Thus in the original argument, we start with a set $A$ living in $\mathbb{Z}_N$ of size $CN/\log\log N$, compute certain Fourier coefficients, and then guess that (one of) the 3AP(s) of $A$ can actually be found in a certain subprogression; then we do the very same computations there, make a further guess, and so forth until our guesses become precise enough to actually locate a 3AP inside $A$, at which point we are done. The various improvements over Roth’s original result can then be regarded as providing descriptions of much better strategies for making these guesses.
So far our guesses can also be thought of as assigning zeros and ones to the elements of $A$ in each step, where those elements among which we still look for a 3AP get assigned ones, while the other ones receive zeros. If we formulate the strategy behind the proof like this, the idea of assigning reals from the interval $[0, 1]$ instead of just zeros and ones doesn’t appear unnatural: so in each stage we could construct a function $f$ from $A$ to $[0, 1]$, and an equation like $f(a)=0$ intuitively means “For the time being I won’t consider 3AP(s) through $a$ any further”, while $f(a)=1/10$ could mean “At the moment it’s not particularly likely, but still possible, that I will end up producing a 3AP through $a$“, an equation like $f(a)=2f(a')$ roughly means “Currently it seems twice as likely that our eventual 3AP will pass through $a$ as that it passes through $a'$“, and so on.
Now for Roth’s original argument it is definitely essential for technical reasons to just work with $\{0, 1\}$-valued functions (as one “changes $N$” throughout the proof), even though this appears to lead to some more or less arbitrary but at the same time rather immaterial choices concerning the boundaries of the subprogressions to which one passes when iterating. But already for Bourgain’s argument (my understanding of which on a conceptual level is surely by far more limited than yours), it doesn’t seem to be clear why one has to do so, and if I happen to understand what you suggest to some extent correctly, then you propose to work with a guessing function $f$ whose value at the center of the Bohr set under consideration is much greater than the value of $f$ near the boundary. (Instead of taking $f$ to be the characteristic function of the Bohr set involved.)
At this point, it seems that maybe one can even dispense with working with Bohr sets on a strictly technical level and instead merely “think of them in the back of our heads”, transforming at each step of the iteration the definition of the Bohr set of which we “really think” into an appropriate definition of a certain guessing function (which probably then has to somehow “codify” certain pieces of “regularity information” on that Bohr set, etc.)
Comment by Christian — February 14, 2011 @ 8:35 pm
6. What I meant with the last paragraph was just this: it is not even clear why one should assign strict zeros instead of just very small numbers to the points outside the Bohr set under consideration, which conceivably might then turn out to be “technically easier”, though it shouldn’t be essential either.
Comment by Christian — February 14, 2011 @ 9:00 pm
7. Maybe, given the amount of discussion, this is a good place to concentrate both polymath6ing and openly discussing both Roth and the cap set problem, both lower bounds and upper bounds.
About 10-12 years ago Roy Meshulam and I thought quite extensively (without success) about trying to improve the Fourier argument even slightly. (We had more avant-garde ideas about the problem more recently that I discussed over my blog.) Anyway, we had a certain proposed conjecture (whose strength depends on various parameters) about general functions, and I wonder how this conjecture survives the killer examples described by Tim and whether the argument by Michael and Nets also gives it.
So let's consider $F_3^n$, and for a function $f$ and $q \ge 1$ and $1 \le k \le n$ we denote by $\|f\|_{q,k}$ the maximum over affine $k$-dimensional flats $L$ of $(3^{-k}\sum_{x\in L}|f(x)|^q)^{1/q}$.
Now the conjecture is that for some constant $C$ and some negotiable $\gamma,\alpha,q$, for every set $E \subset F_3^n$ of size $n^{\gamma}$ the following holds:
(**) $\sum_{x\in E}~|\hat f(x)|\le c_1\|f\|_{q,n^\alpha}^2$
It turns out that if (**) holds with $\gamma \to \infty$, and $\alpha/q \to 1$, then the $3^n/n$ upper bound for cap sets can be improved to $3^n/n^\beta$ for every $\beta$.
In fact, if you fix $r$ between 2 and 3 you can have $\beta > \min\{(3-r)/r(1-\alpha/q), (6-2r+\gamma(r-2)/r)\}.$ So you can adjust the above parameters in the conjecture to imply stronger and weaker forms of the cap set conjecture. We could say something for very very specific $E$'s but not for anything near the general case. We can adjust (**) with $\alpha = q(1-c_2\log n/n)$ and $\gamma=c_3n/\log n$ to have (**) imply $(3-t)^n$ upper bounds, so maybe some killer examples or some other examples already kill this. Also we can adjust the parameters to make $\beta -1$ tiny, and one wonders if then (**) follows from the new results by Michael and Nets.
Comment by — February 20, 2011 @ 7:22 pm
• Gil, are you prepared to make public the proof that the conjecture implies the bound you say? Or is it meant to be an easy exercise? (Just staring at it I feel a bit mystified, but perhaps if I made a serious attempt to prove the implication — and not in my head — I would realize that it was not too hard.) Or is it possible to give a sketch that makes the implication seem reasonable?
Comment by — February 22, 2011 @ 12:08 pm
• Sure, here it is: http://www.ma.huji.ac.il/~kalai/line5.pdf
Comment by — February 22, 2011 @ 9:28 pm
http://unapologetic.wordpress.com/2007/12/19/limits-of-functions/?like=1&_wpnonce=67418a1937
The Unapologetic Mathematician
Limits of Functions
Okay, we know what it is for a net to have a limit, and then we used that to define continuity in terms of nets. Continuity just says that the function’s value is exactly what it takes to preserve convergence of nets.
But what if we have a bunch of nets and no function value? Like, if there’s a hole in our domain — as there is at ${0}$ for the function $\frac{x}{x}$ — we certainly shouldn’t penalize this function just on a technicality of how we presented it. Well there may be a hole in the domain, but we still have sequences in the domain that converge to where that hole is. So let’s take a domain $D\subseteq\mathbb{R}$, a function $f:D\rightarrow\mathbb{R}$, and a point $p\in\overline{D}$. In particular, we’re interested in what happens when $p$ is in the closure of $D$, but not in $D$ itself.
Now we look at all sequences $x_n\in D$ which converge to $p$. There’s at least one of them because $p\in\overline{D}$, but there may be quite a few. Each one of these sequences has an opinion on what the value of $f$ should be at $p$. If they all agree, then we can define the limit of the function $\lim\limits_{x\rightarrow p}f(x)=\lim\limits_{n\rightarrow\infty}f(x_n)$ where $x_n$ is any one of these sequences. In the case of $\frac{x}{x}$ we see that at every point other than ${0}$ our function takes the value $1$. Thus on any sequence converging to ${0}$ (but never taking $x_n=0$) the function gives the constant sequence $1$. Since they all agree, we can define the limit $\lim\limits_{x\rightarrow0}\frac{x}{x}=1$.
If a function has a limit at a hole in its domain, we can use that limit to patch up the hole. That is, if our point $p$ is in the closure of $D$ but not in $D$ itself, and if our function $f$ has a limit at $p$, then we can extend our function to $D\cup\{p\}$ by setting $f(p)=\lim\limits_{x\rightarrow p}f(x)$. Just like we by default set the domain of a function to be wherever it makes sense, we will just assume that the domain has been extended to whatever boundary points the function takes a limit at.
On the other hand, we can also describe limits in terms of neighborhoods instead of sequences. Here we end up with formulas that look like those we saw when we defined continuity in metric spaces. A function $f$ has a limit $L$ at the point $p$ if for every $\epsilon>0$ there is a $\delta>0$ so that $0<|x-p|<\delta$ implies $|f(x)-L|<\epsilon$. Going back and forth from this definition to the one in terms of sequences behaves just the same as going back and forth between net and neighborhood definitions of continuity.
To a certain extent we’re starting to see a little more clearly the distinct feels of the two different approaches. Using nets tells us about approaching a point in various systematic ways, and having a limit at a point tells us that we can understand the function at that point by understanding any system along which we can approach it. We can even replace the limiting point by the convergent net and say that the net is the point, as we did when first defining the real numbers. Using neighborhoods, on the other hand, feels more like giving error tolerances. A limit is the value the function is trying to get to, and if we’re willing to live with being wrong by $\epsilon$, there’s a way to pick a $\delta$ for how wrong our input can be and still come at least that close to the target.
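For readers who like to see the hole-patching in action, here is a small numerical sketch (an editorial addition, not part of the post; the sequences chosen are arbitrary illustrations):

```python
# Small numerical sketch: approach the hole of f(x) = x/x at 0 along sequences
# that converge to 0 without ever hitting it, then patch the hole with the limit.
import numpy as np

f = lambda x: x / x                      # defined on R \ {0}

for x_n in (1.0 / np.arange(1, 1000), -1.0 / 2.0 ** np.arange(1, 40)):
    print(f(x_n)[-1])                    # every such sequence of values tends to 1

# so we extend f to all of R by assigning the common limit at the hole
def f_extended(x):
    return 1.0 if x == 0 else x / x

print(f_extended(0.0))                   # 1.0
```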
Posted by John Armstrong | Analysis, Calculus
4 Comments »
1. [...] of Limits Okay, we know how to define the limit of a function at a point in the closure of its domain. But we don’t always want to invoke the [...]
Pingback by | December 20, 2007 | Reply
2. [...] at Infinity One of our fundamental concepts is the limit of a function at a point. But soon we’ll need to consider what happens as we let the input to a function grow without [...]
Pingback by | April 17, 2008 | Reply
3. Why talk about nets at all in a metric space? You can get along with sequences. Nets are only needed in more complicated topological spaces.
My feeling towards sequences is also split. On one hand, they are easy to define and imagine, since they are countable. On the other hand, the set of all sequences is mighty big, and thus the sequence definition of continuity is not really constructive.
In fact, going back and forth between the two definitions of continuity requires the axiom of choice (on countable sets). So epsilon-delta looks more constructive.
Of course, a single sequence (like the power series for exp) is constructive too.
Comment by | November 14, 2008 | Reply
4. First off, I’m not a constructivist, so considerations like that aren’t really very persuasive for me.
Secondly, notice that I do immediately move to talking about sequences. But I keep nets around because there are some concepts that naturally give rise to nets, not sequences. For example, the limit involved in Riemann integration is most clearly expressed as that of a net over the directed set of marked partitions, rather than that ugly limit over “mesh size” many authors keep using.
Comment by | November 14, 2008 | Reply
http://math.stackexchange.com/questions/54073/a-quick-question-on-tensor-products-of-algebras
# A quick question on tensor products of algebras
It's known that for a field $k$, the tensor product of $k$-vector spaces commutes with direct sums. Is it also true that the tensor product of $k$-algebras commutes with finite products ('finite products' in the ordinary sense of ring products)?
-
## 1 Answer
The tensor product of vector spaces distributes over direct sums, so that $(V\oplus W)\otimes U\cong V\otimes U\oplus W\otimes U$ with a natural isomorphism. That isomorphism is an isomorphism of algebras if $U$, $V$ and $W$ are algebras.
-
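As a concrete sanity check of the answer above, here is a small numpy/scipy sketch (my own illustration; it assumes the algebras are realized as matrix algebras, the tensor product as the Kronecker product, and the ring product as block-diagonal matrices):

```python
# Sketch: with these realizations, kron(blockdiag(v, w), u) equals
# blockdiag(kron(v, u), kron(w, u)), exhibiting (V x W) (x) U ~ (V (x) U) x (W (x) U)
# concretely; being an equality of matrices, it automatically respects products.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
v, w, u = rng.normal(size=(2, 2)), rng.normal(size=(3, 3)), rng.normal(size=(2, 2))

lhs = np.kron(block_diag(v, w), u)
rhs = block_diag(np.kron(v, u), np.kron(w, u))
assert np.allclose(lhs, rhs)

# multiplicativity: products on one side map to products on the other
v2, w2, u2 = rng.normal(size=(2, 2)), rng.normal(size=(3, 3)), rng.normal(size=(2, 2))
lhs_prod = np.kron(block_diag(v, w) @ block_diag(v2, w2), u @ u2)
rhs_prod = block_diag(np.kron(v, u) @ np.kron(v2, u2), np.kron(w, u) @ np.kron(w2, u2))
assert np.allclose(lhs_prod, rhs_prod)
print("distributivity holds and respects multiplication")
```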
http://physics.stackexchange.com/questions/24338/how-fictitious-are-fictitious-forces
# How fictitious are fictitious forces?
How fictitious are fictitious forces?
More specifically, in a rotating reference frame, i.e. on the surface of the earth, does an object that is 'stationary' and in contact with the ground feel centrifugal and Coriolis forces? Or are these forces purely fictional and used to account for differences in observed behaviour relative to an inertial frame?
To give a practical example: a turreted armoured vehicle is sitting stationary and horizontally somewhere in the UK. The turret is continually rotating in an anti-clockwise direction. Do the motors that drive the turret's rotation require more power as the turret rotates from east to west and less power as the turret rotates from west to east? i.e. are the turret motors cyclically assisted and hindered by the earth's rotation?
-
Here's one that always confuses me: Alice is falling freely under gravity. For Bob, an observer on earth, Alice experiences a force mg, and thus accelerates towards earth with an acceleration g. In Alice's frame, she experiences a force mg downwards, but because we are in a non-inertial frame, there is a 'pseudo' force mg upwards, so the two forces cancel out and in her frame she is not accelerating. Everything was fine until here. But according to Einstein's principle of equivalence, an inertial frame is equivalent to a frame falling freely under gravity. So, does this make the 'pseudo' force tha – Spot Apr 24 '12 at 16:16
It's not necessary to make this pseudo force construction here - the last line explains why the third line is no problem. In the beginning claim "she experiences a force mg downwards" you'd have to explain what "experiences" means because if you consider her to be a point particle, then as you said, she doesn't effectively feel any acceleration. The principle says exactly that, namely that if you're in free call, you locally don't know that there is a gravitational field around. Notice there is no global inertial frame in that example. Also, don't post questions as an answer (this is no forum) – Nick Kidman Apr 24 '12 at 16:28
You could probably post this as a separate question, since it's not really an answer to the question posted here. – tmac Apr 24 '12 at 16:28
## 4 Answers
No, they are not real forces.
Quoting from my answer here
Whenever we view a system from an accelerated frame, there is a "pseudoforce" or "false force" which appears to act on the bodies. Note that this force is not actually a force, more of something which appears to be acting. A mathematical trick, if you will.
Let's take a simple case. You are accelerating with $\vec{a}$ in space, and you see a little ball floating around. This is in a perfect vacuum, with no electric/magnetic/gravitational/etc fields. So, the ball does not accelerate.
But, from your point of view, the ball accelerates with an acceleration $-\vec{a}$, backwards relative to you. Now you know that the space is free of any fields, yet you see the particle accelerating. You can either deduce from this that you are accelerating, or you can decide that there is some unknown force, $-m\vec{a}$, acting on the ball. This force is the pseudoforce. It mathematically enables us to look at the world from the point of view of an accelerated frame, and derive equations of motion with all values relative to that frame. Many times, solving things from the ground frame gets icky, so we use this. But let me stress once again, it is not a real force.
And here:
The centrifugal force is basically the pseudoforce acting in a rotating frame. Basically, a frame undergoing UCM has a centripetal acceleration $\frac{v^2}{r}$ towards the center. Thus, an observer of mass $m$ in that rotating frame will feel a pseudoforce $\frac{mv^2}{r}$ outwards. This pseudoforce is known as the centrifugal force.
Unlike the centripetal force, the centrifugal force is not real. Imagine a ball being whirled around. It has a CPF $=\frac{mv^2}{r}$, and this force is the tension in the string. But, if you shift to the ball's frame (become tiny and stand on it), it will appear to you that the ball is stationary (as you are standing on it; the rest of the world will appear to rotate). But, you will notice something a bit off: the ball still has a tension force acting on it, so how is it steady? This balancing of forces you attribute to a mysterious "centrifugal force". If you have mass, you feel the CFF, too (from the ground, it is obvious that what you feel as the CFF is due to your inertia).
What really happens when you "feel" pseudoforces is the following. I'll take the example of spinning on a playground wheel.
From the ground frame, your body has inertia and would not like to accelerate(circular motion is acceleration as the direction of velocity changes).
But, you are holding on to the spinning thingy so you're forced to accelerate. Thus, there is a net inward force--centripetal force--a true force since it's from "holding on". In that frame, though, you don't move forward. So your body feels as if there is a balancing backwards force. And you feel that force acting upon you. It really is your body's "inertia" that's acting.
Yes, the turret's wheels are affected. Again, this is due to inertia from the correct perspective; pseudoforces are just a way to easily explain inertia.
Remember, Newton's definition of a force is only valid in an inertial frame in the first place. Pseudoforces make Newton's laws valid in non-inertial frames.
-
I believe I understand the use of pseudo forces now. They are required to account for the effects of accelerations on the frame we are observing from in order to allow Newton's laws to be used effectively. Does the magnitude of the acceleration affect their use though? On Earth we are not aware of the fact that we are in a non-inertial frame as the accelerations we are experiencing are so small. What if the earth was spinning much faster and we could physically feel this centrifugal force? What if the earth is spinning so fast that friction can no longer maintain our 'stationary' position? – Ben Collins Apr 26 '12 at 10:36
@Ben yep. Pseudoforces are equal to the mass of the body in question times the acceleration of the frame, in the opposite direction. And yes, Earth would be a strange place. – Manishearth♦ Apr 26 '12 at 11:33
OK then let's get practical, back to the turreted vehicle on earth. The designer of the turret's traverse motors has a requirement to rotate the turret mass at a certain rate under all conditions. This requirement is stringent enough that the turret designer has to account for the effect of the coriolis force during design. If this is the case isn't that enough for us earth-bound folk to consider the force real in earth's frame? – Ben Collins Apr 26 '12 at 11:55
@Ben it's still not a force, so not a real force. But, it has the same effects as a force, so you consider it, and treat it like a force. It's more of a technicality that it's fictitious. – Manishearth♦ Apr 26 '12 at 11:59
I think the statement "Psuedoforces are equal to the mass of the body in question times the acceleration of the frame, in the opposite direction" is the most enlightening comment so far. So on Earth, the centrifugal 'force' always acts directly 'upwards' and is equal to mv^2/r. The coriolis force however, Earth's tangential vecolicity is constant and therefore there is no tangential pseudoforce. Therefore where does Corioils originate? I assume it's something to do with the fact that the earth's radius about its axis of spin is not constant? – Ben Collins Apr 27 '12 at 13:39
Centrifugal and Coriolis forces are indeed so called pseudo forces that account for differences in observed behaviour relative to an inertial frame.
So if you see an object standing on the surface of the Earth, you can be sure that static friction is holding it at rest relative to Earth's surface.
A great example of the effect of pseudo forces is the so-called Foucault pendulum. Since there is no static friction for the pendulum, the pendulum's plane of oscillation rotates. Foucault's pendulum is also a proof that the Earth is not an inertial frame of reference.
The problem with observing pseudo forces lies in the fact that they are very small compared to gravity. Centripetal acceleration due to rotation of the Earth around its axis is of the order $10^{-2}$ m/s$^2$ (depending on the position), while centripetal acceleration due to rotation of the Earth around the Sun is $6 \times 10^{-3}$ m/s$^2$. So you have an effect when rotating a turret, but I doubt you would be able to measure it.
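For reference, a quick back-of-envelope check of those two magnitudes (an editorial sketch; the sidereal day, equatorial radius, year length and Sun-Earth distance plugged in below are standard reference values, not quantities taken from this answer):

```python
# Back-of-envelope check of the accelerations quoted above.
import math

def centripetal(period_s, radius_m):
    omega = 2 * math.pi / period_s          # angular velocity
    return omega ** 2 * radius_m            # a = omega^2 r

spin  = centripetal(86164.0, 6.378e6)       # Earth's rotation, at the equator
orbit = centripetal(3.156e7, 1.496e11)      # Earth's orbit around the Sun

print(f"spin:  {spin:.3e} m/s^2")           # ~3.4e-2, i.e. of order 1e-2
print(f"orbit: {orbit:.3e} m/s^2")          # ~5.9e-3, close to the 6e-3 quoted
```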
So what makes forces pseudo? Well, you might have heard that Newton's laws are valid only in inertial frames of reference. If you watch the movement of the turret from outside the Earth (an inertial frame of reference), you can observe that the turret is making complex movements and constantly accelerating. Gravitational and frictional forces acting on the turret are responsible for these movements.
However if you are standing on the Earth it seems to you that the turret is at rest. But gravitational and frictional forces are still acting on it, so this does not add up. The sum of forces being different from zero while the turret is at rest breaks Newton's 2nd law! Newton's 2nd law is no longer valid because you are no longer in an inertial frame of reference.
In order to "patch" Newton's 2nd law in non-inertial frames of reference, you introduce pseudo forces. After the introduction of pseudo forces, Newton's 2nd law is valid even if you are no longer in an inertial frame of reference. You can feel those forces only because your intuition requires additional forces in order to explain your observations.
-
Then these forces are in fact very real? We are all constantly experiencing them yet they are so small they are virtually impossible for us to detect without precise measuring equipment? Is 'fictitious force' therefore a misleading term or does it have some other implication? – Ben Collins Apr 24 '12 at 10:52
I will add some text into my answer to attend your additional question. – Pygmalion Apr 24 '12 at 10:54
+1 for explaining the friction/etc aspect of it clearer than I did :) – Manishearth♦ Apr 24 '12 at 11:28
@NickKidman: Could you clarify that? (for one, you haven't logically defined $f$). And $\vec F\neq\frac{\mathrm d\vec p}{\mathrm dt}$ in a non-inertial frame, so Newton's laws are obviously invalid there. – Manishearth♦ Apr 24 '12 at 11:33
(edited) I just want to point out that "Newton's laws are valid only in inertial frames of reference" is a common abuse of language (it always bugs me when I read it, sorry). The Second law says "In an inertial frame: F=ma", an axiom whose validity doesn't depend on the reference frame you're working with. To put it in logical terms, if $f$ means "We are now working in an inertial frame" and the law is $(f→"F=ma")$ then $((f→"F=ma")∧(¬f)→¬"F=ma")$ is not false but you're saying $(¬f→¬(f→"F=ma"))$ which is not sound (it could only be true if never $f$). It's because $"F=ma"$ is not the law itself. – Nick Kidman Apr 24 '12 at 11:50
Place a stationary object on a piece of graph paper and accelerate the graph paper anyway you want over time, while recording the position of the object on the graph paper and keeping the object stationary relative to you:
Q: Did you see the object accelerate while you were moving the graph paper?
A: Nope, so there isn't a physical force on it.
Q: What is the trajectory of the object on the graph paper and your conclusion?
A: The trajectory is a curve and so it was accelerating in the coordinate system of the graph paper. We can model this as an unphysical force acting upon the object in this coordinate system. This fictitious force will depend upon how this coordinate system is accelerating wrt one moving at a constant velocity.
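To make the thought experiment concrete, here is a minimal numerical sketch (the frame's acceleration profile below is invented purely for illustration): the object never moves in the lab frame, yet its trace in the paper's coordinates has second derivative equal to minus the frame's acceleration, which is exactly the fictitious force per unit mass.

```python
# Minimal sketch of the graph-paper experiment: object fixed in the lab frame,
# paper frame accelerating with a(t) = (1, 0.5 t).  In paper coordinates the
# object's apparent acceleration is -a(t).
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]

a_frame = np.column_stack([np.ones_like(t), 0.5 * t])       # chosen arbitrarily
x_frame = np.column_stack([0.5 * t**2, t**3 / 12.0])        # integrated twice by hand

x_object_lab = np.zeros((t.size, 2))                        # object fixed in the lab
x_object_paper = x_object_lab - x_frame                     # its trace on the paper

# numerical second derivative of the paper-frame trajectory is ~ -a(t)
a_apparent = np.gradient(np.gradient(x_object_paper, dt, axis=0), dt, axis=0)
print(np.allclose(a_apparent[5:-5], -a_frame[5:-5], atol=1e-5))   # True
```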
-
Why is the trajectory a curve? I may have only accelerated the graph paper in one direction for a brief moment. – Ben Collins Apr 26 '12 at 10:29
@ben well a curve is a generalization and a line is a special case of a curve. I'm sure you get the general idea ;) – John McVirgo Apr 26 '12 at 11:04
This example doesn't seem analogous to the example in my question. In my example, static friction is keeping the vehicle stationary in it's frame on the Earth, while in yours you are suggesting that that static friction is overcome and the object slips? Could you rephrase the example please? – Ben Collins Apr 27 '12 at 12:42
Use Wikipedia (Coriolis effect) and look at the pictures there. You will understand all of this apparent force thing.
-
http://unapologetic.wordpress.com/2008/04/29/the-integral-test/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician
## The Integral Test
Sorry for the delay. Students are panicking on the last day of classes and I have to write up a make-up exam for one who has a conflict at the scheduled time…
We can forge a direct connection between the sum of an infinite series and the improper integral of a function using the famed integral test for convergence.
I’ve spent a goodly amount of time last week trying to craft a proof hinging on converting the infinite sum to an improper integral using the integrator $\lfloor x\rfloor$, and comparing that one to those using the integrators $x$ and $x-1$. But it doesn’t seem to be working. If you can make a go of it, I’ll be glad to hear it. Instead, here’s a proof adapted from Apostol.
We let $f$ be a positive decreasing function defined on some ray. For our purposes, let’s let it be $\left[1,\infty\right)$, but we could use any other and adapt the proof accordingly. What we require in any case, though, is that the limit $\lim\limits_{x\rightarrow\infty}f(x)=0$. We define three sequences:
$\displaystyle s_n=\sum\limits_{k=1}^nf(k)$
$\displaystyle t_n=\int\limits_1^nf(x)dx$
$d_n=s_n-t_n$
First off, I assert that $d_n$ is nonincreasing, and sits between $f(n)$ and $f(1)$. That is, we have the inequalities
$0<f(n+1)\leq d_{n+1}\leq d_n\leq f(1)$
To see this, first let’s write the integral defining $t_{n+1}$ as a sum of integrals over unit steps and notice that $f(k)$ gives an upper bound to the size of $f$ on the interval $\left[k,k+1\right]$. Thus we see:
$\displaystyle t_{n+1}=\sum\limits_{k=1}^n\int\limits_k^{k+1}f(x)dx\leq\sum\limits_{k=1}^n\int\limits_k^{k+1}f(k)dx=\sum\limits_{k=1}^nf(k)=s_n$
From here we find that $f(n+1)=s_{n+1}-s_n\leq s_{n+1}-t_{n+1}=d_{n+1}$.
On the other hand, we see that $d_n-d_{n+1}=t_{n+1}-t_n-(s_{n+1}-s_n)$. Reusing some pieces from before, we see that this is
$\displaystyle\int\limits_n^{n+1}f(x)dx-f(n+1)\geq\int\limits_n^{n+1}f(n+1)dx-f(n+1)=0$
which verifies that the sequence $d_n$ is decreasing. And it’s easy to check that $d_1=f(1)$, which completes our verification of these inequalities.
Now $d_n$ is a monotonically decreasing sequence, which is bounded below by ${0}$, and so it must converge to some finite limit $D$. This $D$ is the difference between the sum of the infinite series and the improper integral. Thus if either the sum or the integral converges, then the other one must as well.
We can actually do a little better, even, than simply showing that the sum and integral either both converge or both diverge. We can get some control on how fast the sequence $d_n$ converges to $D$. Specifically, we have the inequalities $0\leq d_k-D\leq f(k)$, so the difference converges as fast as the function goes to zero.
To get here, we look back at the difference of two terms in the sequence:
$\displaystyle0\leq d_n-d_{n+1}\leq\int\limits_n^{n+1}f(n)dx-f(n+1)=f(n)-f(n+1)$
So take this inequality for $n=k$ and add it to that for $n=k+1$. We see then that $0\leq d_k-d_{k+2}\leq f(k)-f(k+2)$. Then add the inequality for $n=k+2$, and so on. At each step we find $0\leq d_k-d_{k+l}\leq f(k)-f(k+l)$. So as $l$ goes to infinity, we get the asserted inequalities.
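As a concrete numerical illustration (an example of my choosing, not one from the post): take $f(x)=1/x$ on $\left[1,\infty\right)$. Then $s_n$ is the $n$th harmonic number, $t_n=\log n$, and $d_n=s_n-t_n$ decreases to the Euler-Mascheroni constant, with the gap $d_k-D$ squeezed between ${0}$ and $f(k)=1/k$ exactly as the inequalities above predict.

```python
# Numerical check of the integral-test estimates for f(x) = 1/x.
import math

def d(n):
    s = sum(1.0 / k for k in range(1, n + 1))   # s_n = sum_{k=1}^n f(k)
    t = math.log(n)                             # t_n = integral_1^n f(x) dx
    return s - t

D = 0.5772156649015329                          # Euler-Mascheroni constant
for k in (10, 100, 1000, 10000):
    gap = d(k) - D
    print(k, gap, 0.0 <= gap <= 1.0 / k)        # gap shrinks like f(k) = 1/k
```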
Posted by John Armstrong | Analysis, Calculus
## 1 Comment »
1. [...] other one I want to hit is the so-called -series, whose terms are starting at . Here we use the integral test to see [...]
Pingback by | April 29, 2008 | Reply
http://physics.stackexchange.com/questions/54377/in-dirac-notation-what-do-the-subscripts-represent-solution-for-particle-in-a?answertab=votes
# In Dirac notation, what do the subscripts represent? (Solution for particle in a box in mind)
So the set of solutions for the particle in a box is given by $$\psi_n(x) = \sqrt{\frac{2}{L}}\sin(\frac{n\pi x}{L}).$$
In Dirac notation $\langle\psi_i|\psi_j\rangle=\delta_{ij}$, assuming the $|\psi_i\rangle$ are orthonormal. My question is, do $\psi_i$ and $\psi_j$ simply correspond to different values of $n$ for the above set of solutions?
For instance would $$\psi_1(x) = \sqrt{\frac{2}{L}}\sin(\frac{1\pi x}{L})$$ and $$\psi_2(x) = \sqrt{\frac{2}{L}}\sin(\frac{2\pi x}{L})?$$
-
Yes, n is the quantum number of your problem; it labels the eigenvalue of the energy. In one dimension without spin, I don't know what more you could expect. – Learning is a mess Feb 19 at 9:17
## 2 Answers
Your equation is the solution to Schrodinger's equation that describes a particle in a 1-D "box" of length L. The solutions you have written are the energy eigenstates of the box (taken on $0 \le x \le L$). The integer-valued parameter, n, labels the quantum levels the particle can occupy inside the box. In each of these quantum states, $n=1,2,3,\dots$, the particle has a different amount of energy, given by the equation
$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}=\frac{n^2h^2}{8mL^2}$ where $n=1,2,3,\dots$.
A quantum state corresponding to the energy level $j$ is symbolically indicated by $|j\rangle$ and is given by your equation with $n$ replaced by $j$, as you have done for $n=1$ and $n=2$. The particular expression you have written, $\langle\psi_i|\psi_j\rangle$, represents the overlap (transition amplitude) between the quantum states $|\psi_j\rangle$ and $|\psi_i\rangle$. It vanishes for $i\neq j$ because these wave-functions form an orthonormal set of quantum states, and you express this with the $\delta_{ij}$ symbol.
-
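A quick numerical check of the orthonormality relation used here (an editorial sketch, assuming the box occupies $0 \le x \le L$; the value of $L$ is arbitrary):

```python
# Check <psi_i|psi_j> = integral_0^L psi_i(x) psi_j(x) dx = delta_ij numerically.
import numpy as np
from scipy.integrate import quad

L = 2.0   # any box length works

def psi(n, x):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

for i in range(1, 4):
    for j in range(1, 4):
        val, _ = quad(lambda x: psi(i, x) * psi(j, x), 0.0, L)
        print(i, j, round(val, 10))   # 1 when i == j, 0 otherwise
```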
The labels $i,j$ can correspond to any label that labels the set of orthonormal states $\{| \psi_i \rangle\}_i$. In your case, your states are energy eigenstates with different eigenvalues, so they are indeed orthogonal (by some linear algebra theorem), and I guess you've normalized them properly, so they are one possible choice. But you can change basis orthogonally
$$|u_i \rangle \equiv O_{ij} |\psi_j \rangle$$
and you'd still have the relation $\langle u_i|u_j \rangle = \delta_{ij}.$
For example, on a finite lattice you have momentum eigenstates $|\mathbf{p} \rangle$, but you can also Fourier transform to get position eigenstates $|\mathbf{x} \rangle.$ The latter aren't energy eigenstates, but are still orthogonal.
-
http://mathoverflow.net/questions/64003/floquet-transform-of-the-derivative-of-a-function/64025
Floquet transform of the derivative of a function
Is there a closed-form expression for the Floquet transform of the derivative of a function $f$ (analogous to the well-known property of the Fourier transform)?
-
2 Answers
If I'm not mistaken, the definition of the Floquet transform is
$(Uf)(r)=\sum_{R\in L}e^{ik\cdot R}f(r-R)$,
where the sum runs over vectors $R$ on a $d$-dimensional periodic lattice $L$. The vector $k$ is fixed. Now I just take the derivative of both sides with respect to $r$,
$\frac{\partial}{\partial r}(Uf)(r)=(U\frac{\partial}{\partial r}f)(r)$.
In words: the derivative of the Floquet transform is the Floquet transform of the derivative.
-
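A small numerical check of this commutation property (an editorial sketch: it assumes a 1-D lattice $\mathbb{Z}$ with unit spacing and a Gaussian $f$ so the sum over $R$ can be truncated; the fixed $k$ is arbitrary):

```python
# Verify numerically that d/dr (Uf)(r) = (U f')(r) for the Floquet/Bloch sum
# (Ug)(r) = sum_R e^{i k R} g(r - R) on a truncated 1-D lattice.
import numpy as np

k = 1.3                                   # fixed quasi-momentum
R = np.arange(-30, 31)                    # truncated lattice Z
r = np.linspace(-0.5, 0.5, 401)
dr = r[1] - r[0]

f  = lambda x: np.exp(-x**2)              # rapidly decaying test function
fp = lambda x: -2.0 * x * np.exp(-x**2)   # its derivative

def floquet(g):
    return np.sum(np.exp(1j * k * R)[None, :] * g(r[:, None] - R[None, :]), axis=1)

lhs = np.gradient(floquet(f), dr)         # derivative of the transform
rhs = floquet(fp)                         # transform of the derivative
print(np.max(np.abs(lhs[5:-5] - rhs[5:-5])))   # small, limited by the finite-difference step
```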
Dear Carlo
It's indeed true that the derivative of the Floquet transform equals the Floquet transform of the derivative. I should have pointed it out more precisely, but I am actually interested in whether the Floquet transform of the derivative of a function $f(r)$ can be expressed in terms of the Floquet transform of the function $f(r)$ itself.
So, is there a relation between $(U \frac{\partial }{\partial r}f)(r)$ and $(U f)(r)$ (like there is for the Fourier transform, i.e. $(F \frac{\partial }{\partial r}f)(\omega)=i\omega (F f)(\omega)$)?
Many thanks in advance !
Jeff
-
Since Answers should be answers to the original question, this should have been a comment to Carlo's answer... – Dirk May 5 2011 at 18:26
Aren't we asking for the impossible here? How could the derivative be expressed in terms of the function? – Michael Renardy May 5 2011 at 18:28
I have to second Michael's objection here - unless we're misunderstanding the question. – Andrew L Jun 2 2011 at 20:47
http://mathhelpforum.com/differential-geometry/183733-sketch-limit-proof-legal.html
# Thread:
1. ## Sketch of a limit proof. Is this legal?
I'm going to start by saying I wish I knew all the tags to make this look nice.
The question is: prove that $\lim_{n\to\infty}\left(\frac{1}{n} - \frac{1}{n+1}\right) = 0$.
My sketch of this is $\frac{1}{n}-\frac{1}{n+1}=\frac{(n+1)-n}{n(n+1)}=\frac{1}{n(n+1)}=\frac{1}{n^2+n}$, which I wholeheartedly expect converges to 0.
Now how do I make this formal enough to convince someone else, or am I off the mark?
2. ## Re: Sketch of a limit proof. Is this legal?
Originally Posted by CountingPenguins
I'm going to start by saying I wish I knew all the tags to make this look nice.
The question is prove the lim (1/n - 1/(n+1)) = 0
My sketch of this is ((1(n+1)-n)/n(n+1))=0 which leads to ((n+1-n)/n(n+1)) which leads to (1/n(n+1)) and finally (1/((n^2)+n)) which I wholeheartedly expect converges to 0.
Now how do I make this formal enough to convince someone else, or am I off the mark?
Dear CountingPenguins,
I hope you want to prove, $\lim_{n\rightarrow\infty}\left(\frac{1}{n}-\frac{1}{1+n}\right)=0.$ If that is the case the best method is to use the definition of limit of a function.
3. ## Re: Sketch of a limit proof. Is this legal?
Originally Posted by CountingPenguins
I'm going to start by saying I wish I knew all the tags to make this look nice.
The question is prove the lim (1/n - 1/(n+1)) = 0
My sketch of this is ((1(n+1)-n)/n(n+1))=0 which leads to ((n+1-n)/n(n+1)) which leads to (1/n(n+1)) and finally (1/((n^2)+n)) which I wholeheartedly expect converges to 0.
Now how do I make this formal enough to convince someone else, or am I off the mark?
The precise definition of this limit is that $\displaystyle \lim_{x \to \infty}f(x) = L$ if for every $\displaystyle \epsilon > 0$ there is an $\displaystyle N$ such that $\displaystyle x > N \implies |f(x) - L| < \epsilon$.
So in this case, you are hoping to show $\displaystyle \lim_{n \to \infty}\left(\frac{1}{n} - \frac{1}{n + 1}\right) = 0$, in other words, that $\displaystyle n > N \implies \left|\frac{1}{n} - \frac{1}{n + 1} - 0\right| < \epsilon$.
$\displaystyle \begin{align*} \left|\frac{1}{n} - \frac{1}{n + 1}\right| &< \epsilon \\ \left|\frac{1}{n(n + 1)}\right| &< \epsilon \\ \frac{1}{n(n + 1)} &< \epsilon \\ n(n + 1) &> \frac{1}{\epsilon} \\ n^2 + n &> \frac{1}{\epsilon} \\ n^2 + n + \left(\frac{1}{2}\right)^2 - \left(\frac{1}{2}\right)^2 &> \frac{1}{\epsilon} \\ \left(n + \frac{1}{2}\right)^2 - \frac{1}{4} &> \frac{1}{\epsilon} \\ \left(n + \frac{1}{2}\right)^2 &> \frac{1}{\epsilon} + \frac{1}{4} \\ n + \frac{1}{2} &> \sqrt{\frac{1}{\epsilon} + \frac{1}{4}}\textrm{ since everything is positive} \\ n &> \sqrt{\frac{1}{\epsilon} + \frac{1}{4}} - \frac{1}{2}\end{align*}$
Every step above is reversible, so by making $\displaystyle N = \sqrt{\frac{1}{\epsilon} + \frac{1}{4}} - \frac{1}{2}$ the proof will follow.
4. ## Re: Sketch of a limit proof. Is this legal?
$\frac{1}{n(n+1)}<\frac{1}{{{n}^{2}}}<\epsilon ,$ so having $N=\left\lfloor \frac{1}{\sqrt{\epsilon }} \right\rfloor +1,$ also works for all $\epsilon>0.$
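A quick numerical sanity check of both thresholds (an editorial sketch; the $\epsilon$ values are arbitrary):

```python
# For n strictly above either N, 1/(n(n+1)) < eps must hold.
import math

def ok(N, eps, how_many=50):
    n0 = math.floor(N) + 1                      # first integer strictly above N
    return all(1.0 / (n * (n + 1)) < eps for n in range(max(n0, 1), max(n0, 1) + how_many))

for eps in (0.5, 0.1, 0.01, 0.003, 1e-6):
    N1 = math.sqrt(1.0 / eps + 0.25) - 0.5      # from completing the square
    N2 = math.floor(1.0 / math.sqrt(eps)) + 1   # from the cruder bound 1/(n(n+1)) < 1/n^2
    print(eps, ok(N1, eps), ok(N2, eps))        # True, True
```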
5. ## Re: Sketch of a limit proof. Is this legal?
Thank you for your response. How did you know to add and then subtract (1/2)^2? Does that knowledge come with experience or is there a trick to it? Please let me know. Thanks.
6. ## Re: Sketch of a limit proof. Is this legal?
So by making $\displaystyle N = \sqrt{\frac{1}{\epsilon} + \frac{1}{4}} - \frac{1}{2}$ the proof will follow
And a natural $N$ is $\displaystyle \left \lfloor \sqrt{\frac{1}{\epsilon} + \frac{1}{4}} - \frac{1}{2} \right \rfloor +1$
7. ## Re: Sketch of a limit proof. Is this legal?
Originally Posted by CountingPenguins
Thank you for your response. How did you know to add and then subtract (1/2)^2? Does that knowledge come with experience or is there a trick to it? Please let me know. Thanks.
Completing the square. It's a powerful method when dealing with quadratic inequalities.
http://math.stackexchange.com/questions/215937/equivalence-relations-and-partitions?answertab=oldest
# Equivalence Relations and Partitions
My university "textbook" for discrete math is Schaum's Outline. In this outline he goes over Equivalence Relations and Partitions, and I got confused at a particular theorem.
From the book:
Theorem 2.6: Let R be an equivalence relation on a set S. Then S/R is a partition of S. Specifically:
(i) For each a in S, we have a ∈ [a].
(ii) [a] = [b] if and only if (a,b) ∈ R.
(iii) If $[a] \ne [b]$, then [a] and [b] are disjoint.
Conversely, given a partition $\{A_i\}$ of the set S, there is an equivalence relation R on S such that the sets $A_i$ are the equivalence classes. This important theorem will be proved in Problem 2.17.
EXAMPLE 2.13
(a) Consider the relation R = {(1,1),(1,2),(2,1),(2,2),(3,3)} on S = {1,2,3}. One can show that R is reflexive, symmetric, and transitive, that is, that R is an equivalence relation.
Also: [1] = {1,2}, [2] = {1,2}, [3] = {3}. Observe that [1] = [2] and that S/R = {[1],[3]} is a partition of S. One can choose either {1,3} or {2,3} as a set of representatives of the equivalence classes.
My confusion arises from the S/R = {[1],[3]}. I don't understand how one can subtract a relation from a set of integers. What fundamental understanding am I missing?
-
## 2 Answers
There's no subtraction there. Set subtraction is denoted by a backslash. The forward slash denotes forming the quotient set.
-
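For a hands-on view of what the quotient set is, here is a tiny Python sketch (an editorial addition) that computes the equivalence classes and S/R for the exact example in the question:

```python
# Compute the equivalence classes and the quotient set S/R for the example above.
S = {1, 2, 3}
R = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3)}

def cls(a):
    return frozenset(b for b in S if (a, b) in R)   # [a] = {b : (a, b) in R}

quotient = {cls(a) for a in S}                      # S/R; duplicate classes collapse
print([set(c) for c in quotient])                   # [{1, 2}, {3}] (in some order)
```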
Numbers originate as cardinalities of sets (how many elements a set has).
• Addition of numbers corresponds to disjoint union
• Subtraction corresponds to removing a subset
• Multiplication corresponds to direct product ($A\times B$ contains all $\langle a,b\rangle$ pairs)
And, somewhat analogously, division corresponds to taking a quotient by a regular equivalence relation, i.e. one in which each equivalence class has the same cardinality.
-
http://physics.stackexchange.com/questions/38385/proving-t-1-sqrt12hg-v2-v-g-for-a-thrown-ball
# Proving $t=(1+\sqrt{1+2hg/v^2 } ) (v/g)$ for a thrown ball
If we throw a ball upwards from a height $h$ above the earth, with initial velocity $v$, how do we prove that the time it takes the ball to reach the earth is given by:
$$t=\frac{v}{g}(1+\sqrt{1+\frac{2hg}{v^2} } )$$
-
You work with the equations of motion. BTW, is this a homework type problem? – ja72 Sep 26 '12 at 13:32
Why was the question tagged as homework? It's not. – rib Sep 26 '12 at 14:56
@Qmechanic Sorry, no, but I know that you shouldn't tag the question as homework unless the OP says so explicitly. – rib Sep 26 '12 at 15:47
I'm just trying to get a decision either way. If you think the homework tag does not apply, please edit it out again. – Qmechanic♦ Sep 26 '12 at 16:22
## 1 Answer
For a free falling object without air resistance you have two equations
$$y = h + v'\,t - \frac{1}{2} g t^2$$ $$v = v' - g\,t$$
with $h$ the initial height, $v'$ the initial velocity (upwards is positive), $y$ the height at time $t$, and $v$ the velocity.
Solve them when $y=0$ for $v$ and $t$.
Reference: projectile motion.
-
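A short symbolic check of that last step (an editorial sketch using sympy; the numerical values substituted at the end are arbitrary test values):

```python
# Solve y(t) = h + v t - g t^2/2 = 0 and compare the positive root with the
# claimed t = (v/g)(1 + sqrt(1 + 2 h g / v^2)) (ball thrown upwards, v > 0).
import sympy as sp

t, h, v, g = sp.symbols('t h v g', positive=True)

roots = sp.solve(sp.Eq(h + v*t - sp.Rational(1, 2)*g*t**2, 0), t)
claimed = (v / g) * (1 + sp.sqrt(1 + 2*h*g / v**2))

print([sp.simplify(r - claimed) for r in roots])                              # expect a 0 entry
print(min(abs((r - claimed).subs({h: 2.3, v: 1.7, g: 9.8})) for r in roots))  # ~0
```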
The equations should be $v_{f}=v_{0}+gt$ and $h=v_{0} t+\frac{1}{2}gt^{2}$; the equation for the height is a second-order equation, so you can solve it immediately for $t$. – Jose Javier Garcia Sep 26 '12 at 15:02
http://www.physicsforums.com/showthread.php?p=3584793
Physics Forums
## Tricky complex square matrix problem
I have a complex square matrix, $\textbf{C}$, which satisfies:
$\textbf{C}\textbf{C} = (\textbf{I} \odot \textbf{C})$
where $\textbf{I}$ is the identity matrix and $\odot$ denotes the Hadamard (element-by-element) product. In other words, $\textbf{C}\textbf{C}$ is a diagonal matrix whose diagonal entries are the same as the diagonal entries of $\textbf{C}$, which is not necessarily diagonal itself.
Furthermore, $\textbf{C}$ is Hermitian:
$\textbf{C}^{H}=\textbf{C}$
and $\textbf{C}$ must be full rank (because actually, in my problem, $\textbf{C} \triangleq (\textbf{A}^{H}\textbf{A})^{-1}$, where $\textbf{A}$ is complex square invertible).
I want to determine whether $\textbf{C} = \textbf{I}$ is the only solution (because this would imply that $\textbf{A}$ is unitary). (This is equivalent to proving that $\textbf{C}$ is diagonal). By expanding out terms, I've shown that $\textbf{C} = \textbf{I}$ is the only invertible solution for $(3 \times 3)$ matrices, but I can't seem to obtain a general proof.
Any help or insight would be very much appreciated - I'm completely stumped!
You can take the square root of your equation, since C is positive definite ($C=(A^\dagger A)^{-1}$). On the left you have C, and in components you obtain $$C_{ij}=\sqrt{C_{ij}}\delta_{ij},$$ so each diagonal entry satisfies $C_{ii}=\sqrt{C_{ii}}$ and is therefore 0 or 1; positive definiteness rules out 0, from which you can conclude that C is the identity matrix.
Quote by aesir you can take the square root of your equation
Brilliant! Thank you very much indeed! I had really be scratching my head over that one. Many thanks again for your help!
## Tricky complex square matrix problem
You can show this from "first principles". Let the matrix be
$$\left(\begin{array}{cc} a & b \\ b* & c \end{array}\right)$$
where a and c are real.
Mlutiplying the matrices out gives 3 equations
a^2 + bb* = a
c^2 + bb* = c
ab + bc = 0
Subtracting the first two equations, either a = c, or a+c = 1
From the third equation, either b = 0, or a+c = 0
So either b = 0, or a = c = 0
But from the first two equations, if a = c = 0 then b = 0 also.
So, the first two equations reduce to a^2 = a and c^2 = c, and the only solution which gives a matrix of full rank is C = I.
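For what it's worth, a small SymPy sketch (not from the thread; the symbol names are mine) confirms the same $(2 \times 2)$ computation symbolically:

```python
import sympy as sp

a, c, br, bi = sp.symbols('a c b_r b_i', real=True)
b = br + sp.I * bi                                # off-diagonal entry b = b_r + i*b_i
C = sp.Matrix([[a, b], [sp.conjugate(b), c]])     # a general Hermitian 2x2 matrix

# C*C = I o C means C*C equals the diagonal matrix diag(a, c).
E = (C * C - sp.diag(a, c)).expand()
system = [sp.re(E[i, j]) for i in range(2) for j in range(2)]
system += [sp.im(E[i, j]) for i in range(2) for j in range(2)]
print(sp.solve(system, [a, c, br, bi], dict=True))
# every solution has b = 0 and a, c in {0, 1}; the only full-rank one is C = I
```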
Thanks AlephZero. That is the approach I took in order to obtain a proof for $(2 \times 2)$ and $(3 \times 3)$ matrices. (If I understand correctly, your $a$, $b$ and $c$ are scalars.) However, aesir's solution is valid for the general $(n \times n)$ case, which is especially important for me. A final question on positive definiteness: If $\textbf{A}$ is not square, but instead is tall (with linearly independent columns) then is it correct to say that $(\textbf{A}^{H}\textbf{A})^{-1}$ is now positive semi-definite? My reasoning is that $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 \geq 0$ for any $\textbf{z}$ (with equality when $\textbf{z}$ lies in the null space of $\textbf{A}$). (Therefore aesir's square root still exists in this case).
Quote by weetabixharry ... A final question on positive definiteness: If $\textbf{A}$ is not square, but instead is tall (with linearly independent columns) then is it correct to say that $(\textbf{A}^{H}\textbf{A})^{-1}$ is now positive semi-definite? My reasoning is that $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 \geq 0$ for any $\textbf{z}$ (with equality when $\textbf{z}$ lies in the null space of $\textbf{A}$). (Therefore aesir's square root still exists in this case).
I don't think so.
It is true that if $\textbf{z}$ is in the null space of $\textbf{A}$ then $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 = 0$, but this means that $(\textbf{A}^{H}\textbf{A})$ is semi-positive definite, not its inverse (which does not exists if the null space is non-trivial). BTW if $\textbf{A}$ has linearly independent columns its null space is $\{0\}$
Quote by aesir I don't think so. It is true that if $\textbf{z}$ is in the null space of $\textbf{A}$ then $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 = 0$, but this means that $(\textbf{A}^{H}\textbf{A})$ is semi-positive definite, not its inverse (which does not exists if the null space is non-trivial). BTW if $\textbf{A}$ has linearly independent columns its null space is $\{0\}$
Ah yes, of course. Thanks for clearing that up!
So are the following two statements correct?
(1) $(\textbf{A}^H\textbf{A})$ is positive definite when the columns of $\textbf{A}$ are independent (which requires that $\textbf{A}$ is tall or square). Therefore $(\textbf{A}^H\textbf{A})^{-1}$ is also positive definite.
(2) When the rank of $\textbf{A}$ is less than its number of columns (which includes all fat matrices), $(\textbf{A}^H\textbf{A})$ is positive semidefinite. In this case, $(\textbf{A}^H\textbf{A})^{-1}$ does not exist.
Quote by weetabixharry Ah yes, of course. Thanks for clearing that up! So are the following two statements correct? (1) $(\textbf{A}^H\textbf{A})$ is positive definite when the columns of $\textbf{A}$ are independent (which requires that $\textbf{A}$ is tall or square). Therefore $(\textbf{A}^H\textbf{A})^{-1}$ is also positive definite. (2) When the rank of $\textbf{A}$ is less than its number of columns (which includes all fat matrices), $(\textbf{A}^H\textbf{A})$ is positive semidefinite. In this case, $(\textbf{A}^H\textbf{A})^{-1}$ does not exist.
Yes, that's true.
In case (2) you can say a little more. If you split the vector space in null{A} and its orthogonal complement $V_1$ you have $$A^H A = \left(\begin{array}{cc} B^HB & 0 \\ 0 & 0 \end{array} \right)$$
that has a positive definite inverse if restricted from $V_1$ to $V_1$
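A quick numerical illustration of statement (1) above (random data, assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))  # tall, full column rank (almost surely)
G = A.conj().T @ A                                                  # the Gram matrix A^H A

print(np.linalg.eigvalsh(G))                  # all eigenvalues > 0, so A^H A is positive definite
print(np.linalg.eigvalsh(np.linalg.inv(G)))   # the inverse is positive definite as well
```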
http://mathoverflow.net/questions/106039/calculate-intersection-of-vector-subspace-by-using-gauss-algorithm
## Calculate intersection of vector subspace by using gauss-algorithm [closed]
There are two vector subspaces in $R^4$. $U1 := [(3, 2, 2, 1), (3, 3, 2, 1), (2, 1, 2 ,1)]$, $U2 := [(1, 0, 4, 0), (2, 3, 2, 3), (1, 2, 0, 2)]$
My idea was to calculate the intersection of those two subspaces by putting all the given vectors into a matrix (each vector is a column). Running Gaussian elimination then leads to the matrix \begin{pmatrix} 1 & 0 & 0 & 0 & -6 & -4 \\ 0 & 1 & 0 & 0 & 3 & 2 \\ 0 & 0 & 1 & 0 & 6 & 4 \\ 0 & 0 & 0 & 1 & -1 & -1 \end{pmatrix}
So I see that the dimension of $U1 + U2$ equals 4, as there are 4 linearly independent vectors. Is it somehow possible to get a basis of $[U1] \cap [U2]$ from this matrix? I know that it has to be one-dimensional, since $U1$ has dimension 3, $U2$ has dimension 2, and $\dim(U1+U2) = 4$.
-
Welcome to MO! As the FAQs explain, this site is mainly intended for questions arising from mathematical research. Your question, while being a solid mathematical question, does not fall into this category. Thus, it is not on-topic on this site. However, there is a similar site, with a broader scope, where this question is on-topic: math.stackexchange.com I suggest reasking your question there, since here it will be closed before too long. – quid Aug 31 at 14:37
Thank you, I'm going to do that! – Benji Aug 31 at 14:44
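For completeness, a SymPy sketch (not part of the original exchange) that extracts a basis of $U1 \cap U2$ from the same data: a vector lies in the intersection iff $U1\,x = U2\,y$ for some coefficient vectors $x, y$, i.e. $(x, y)$ lies in the null space of the block matrix $[U1 \mid -U2]$.

```python
import sympy as sp

# Columns are the spanning vectors given in the question.
U1 = sp.Matrix([[3, 3, 2], [2, 3, 1], [2, 2, 2], [1, 1, 1]])
U2 = sp.Matrix([[1, 2, 1], [0, 3, 2], [4, 2, 0], [0, 3, 2]])

# (x, y) in the null space of [U1 | -U2] means U1*x = U2*y lies in the intersection.
M = U1.row_join(-U2)
vectors = [U1 * n[:3, :] for n in M.nullspace()]
basis = sp.Matrix.hstack(*vectors).columnspace()   # discard zero / dependent vectors
print(basis)                                       # a basis of U1 ∩ U2 (one vector here)
```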
http://stats.stackexchange.com/questions/41727/an-analytical-framework-for-considering-the-geometric-mean
# An analytical framework for considering the geometric mean
Is there an analytical method of looking at the geometric mean that will allow one to break it down to its various components?
The focus of the question is more for financially related returns, but I am open to consider other fields as well to see if it is applicable.
So the aim of the question is to get some sort of results along the lines of
GEOMETRIC MEAN = f(ARITHMETIC MEAN, X, Y, Z)
where $X,Y,Z$ are all factors/variables that help determine the value of the geometric mean.
I know there is a difference in the way it's calculated, etc., but I guess what I am looking for is the specific factor that drives the difference between the arithmetic and geometric mean. Most of the time, in financially related contexts, the arithmetic mean tends to be larger in magnitude than the geometric mean due to the variability of the return values, i.e. where $x_i$ does not equal $x_j$, with the arithmetic mean being $\sum \limits_{i=1}^n\frac{x_i}{n}$ and the geometric mean being $\prod \limits_{i}^n {(x_i+1)}^{\frac{1}{n}} - 1$; but just saying that the difference comes from the returns not all being equal doesn't feel exact enough.
Is there something that is better that will show me exactly what causes the gap between geometric and arithmetic means?
Approximations using Taylor series would be ok too...
-
1) The geometric mean is the exponential of the arithmetic mean of the logarithmized values. 2) The formula you display for the geometric mean is not the genuine one. It adds 1 to the values, which suggests a trick to cope with zero values; and this affects the result. – ttnphns Nov 2 '12 at 8:46
@ttnphns It's not a 'trick' to deal with zeros - h.l.m is dealing with returns data; the $x_i$ are like interest rates, -- the way it's presented is how returns are normally presented – Glen_b Nov 2 '12 at 11:05
## 1 Answer
What you hope to do is not possible in general, except approximately.
Edit: Approximate Taylor series result:
$g(X) = g(\mu_X + X-\mu_X) = g(\mu_X) + (X-\mu_X)\cdot g'(\mu_X) + \frac{(X-\mu_X)^2}{2!}\cdot g''(\mu_X) + ...$
$E(g(X)) = g(\mu_X) + E(X-\mu_X)\cdot g'(\mu_X) + E[(X-\mu_X)^2]/2!\cdot g''(\mu_X) + ...$
i.e. $E(g(X)) \approx g(\mu_X) + \sigma^2_X/2\cdot g''(\mu_X)$
The following works in terms of a population, but the result can be seen to apply to a sample by treating the ECDF as a CDF.
Let $R_i$ be the $i^\rm{th}$ member of a population of returns.
Let $Y = 1+R$ and $Z = \log(Y)$.
Let $g(Y) = \log(Y)$
$E(\log(Y)) \approx \log(\mu_Y) - \sigma^2_Y/(2 \mu^2_Y)$
$\exp(E(\rm{log(Y)})) \approx \mu_Y \exp(-\frac{\sigma^2_Y}{2 \mu^2_Y})$
i.e. $\rm{GM}(Y) \approx \rm{AM}(Y) \exp(-\frac{\rm{Var}(Y)}{2 \rm{AM}(Y)^2})$
Approximate result based on lognormal $(1+R)$:
If, as is often assumed to approximately hold, $Z \sim \rm{N}(\mu_Z, \sigma^2_Z)$, then
$\rm{GM}(Y) \approx \rm{AM}(Y) (\frac{1}{\sqrt{\rm{Var}(Y)/\rm{AM}(Y)^2 + 1}})$
(If I haven't made an error there somewhere(!), then for $\rm{Var}(Y)$ small the two should give similar answers.)
As you can see from either formula, the arithmetic and geometric means get further apart when the coefficient of variation of the $Y$'s is larger (the variance of the $Z$'s is larger).
Because these are based on population approximations, the variance formula would normally use the $n$ denominator, but since we're already approximating, that's of no matter.
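As a quick numerical check of the two approximations (simulated data, assuming NumPy; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.lognormal(mean=0.01, sigma=0.1, size=100_000)   # Y = 1 + R, roughly lognormal returns

am, var = Y.mean(), Y.var()
gm_exact   = np.exp(np.log(Y).mean())                   # exact geometric mean of Y
gm_taylor  = am * np.exp(-var / (2 * am**2))            # Taylor-based approximation
gm_lognorm = am / np.sqrt(var / am**2 + 1)              # lognormal-based approximation
print(gm_exact, gm_taylor, gm_lognorm)                  # all three agree to several decimals
```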
-
Why is it only possible as an approximate? – h.l.m Nov 4 '12 at 20:22
Essentially you can't compute the quantity exactly from a fixed subset of 'factors' unless one or more of the factors effectively contains the information in the geometric mean. You can approximate it in various ways, such as via a Taylor expansion. – Glen_b Nov 4 '12 at 21:39
Would it please be possible if you could edit your answer to view the results of this Taylor expansion to be able to view how one might be able to approximately analytically view the geometric mean? – h.l.m Nov 4 '12 at 22:10
Could you rephrase your request? I couldn't guess what you wanted to say. Are you asking for how to use Taylor expansion to approximate geometric means in terms of arithmetic means (and some other quantities)? – Glen_b Nov 4 '12 at 23:32
Yes I am asking for how to use Taylor expansion to approximate geometric means in terms of arithmetic means. – h.l.m Nov 5 '12 at 1:51
show 1 more comment
http://mathhelpforum.com/advanced-algebra/164445-right-left-square-matrix-inverses.html
# Thread:
1. ## Right and Left Square matrix inverses
Hi guys,
I would like to intuitively understand why the following is true:
Let A be a square, invertible matrix. Let C be A's left inverse, and B A's right inverse.
Then it follows: B=C
I am aware of the simple proof:
C=C(AB)=CAB=(CA)B=B
yet, this gives away no intuition about the logic behind this claim.
The left inverse is acting on A's columns, the right one is acting on A's rows.
This seems almost like magic.
How would you explain this in child's terms?
Thanks.
2. I cannot imagine anything being simpler than that! The "intuition" about the logic is precisely the meaning of "inverse"- it has nothing to do with "columns" or "rows".
3. I believe some authors define the inverse such that it commutes with the original matrix. In that case, it is superfluous to talk about right and left inverses, because they are, by definition, the same thing.
4. I'm going by the natural definition of left or right inverse. In square matrices they are the same, and only then can you say "inverse" without specifying left or right.
Sure we can look at this in a high level manner, as a general algebraic structure satisfying associativity and existence of unit element. But then we'd be ignoring the beautiful properties of matrices.
AB means a series of linear transformations on columns of A, according to B.
if AB=I, one cannot immediately see why BA = I as well. BA meaning a series of linear transformations on rows of A according to B, which is, so to say, a completely different operation.
The simple proof above, simply does not answer my question. at least not directly.
5. Tell you what: why don't you give us a list of assumptions (axioms) that you both believe and understand, and that are relevant to the question at hand. List also any other theorems you buy into. Then state the theorem you don't understand, and then perhaps we could produce a constructive proof of the theorem that might enable you to understand.
6. Given only the definition of matrix multiplication. Why is the following true for square matrices:
If AB=I then BA=I
7. Hmm. That could be challenging, although it seems intuitive that you should be able to prove the result from the one assumption. I must admit that I don't know off-hand how to do it, but maybe some ramblings may stimulate some thinking for you.
So the usual definition of matrix multiplication for square matrices is as follows. Let $A,B$ be square, $n\times n$ matrices. The product matrix $C=AB$ is the matrix consisting of entries as follows:
$\displaystyle C_{ij}=\sum_{k=1}^{n}A_{ik}B_{kj}.$
Let us assume that $AB=I.$ We wish to show that $BA=I.$
Now, define the Kronecker delta symbol $\delta_{ij}$ as follows:
$\delta_{ij}=\begin{cases}1\quad i=j\\ 0\quad i\not=j\end{cases}.$
It is standard to show that
$I_{ij}=\delta_{ij}.$ Thus, by assumption, we have that
$\displaystyle \sum_{k=1}^{n}A_{ik}B_{kj}=\delta_{ij}.$
Now, we know that $B^{T}A^{T}=(AB)^{T}=I^{T}=I.$ This you can prove using the definition of matrix multiplication. How would that look in summation notation?
8. Originally Posted by leolol
Given only the definition of matrix multiplication. Why is the following true for square matrices:
If AB=I then BA=I
Try lifting $A$ to a linear transformation.
9. Originally Posted by Drexel28
Try lifting $A$ to a linear transformation.
Or consider that if $AB=I$ then $1=\det\left(AB\right)=\det\left(A\right)\det\left(B\right)$ and so $\det\left(A\right),\det\left(B\right)\ne 0$ and so $A,B$ are invertible.
Thus, we note that $AB=I\implies B=A^{-1}$ and so $B^{-1}=\left(A^{-1}\right)^{-1}=A$ and so $I=BA$.
10. I don't think we are allowed to use determinants.
11. Originally Posted by Ackbeet
I don't think we are allowed to use determinants.
Then, consider the fact that we can view the matrices $A,B$ as endomorphisms, for simplicity's sake from $\mathbb{R}^n$ to itself. Then note that the existence of a left inverse for $B$ implies that $B$ is injective, thus $B\left(\mathbb{R}^n\right)$ is $n$-dimensional, and so from an elementary theorem it follows that $B(\mathbb{R}^n)=\mathbb{R}^n$ and so $B$ is surjective, thus bijective, and so $B$ is invertible. Similarly, since $A$ possesses a right inverse we know that $A$ is surjective, and it's easy to show that this is impossible if $A$ is not injective, and thus $A$ is also invertible.
The rest of my proof stands.
12. I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?
13. Originally Posted by Ackbeet
I'm not sure I followed all that, so you tell me: does this proof satisfy the conditions of post # 6?
No, it does not. But this is really a statement about endomorphisms of finite dimensional spaces whether the OP would like to admit it or not. Even if there was a purely matrix analytic proof it would obscure the reason for it.
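As a small numerical contrast (my own example, not from the thread) showing why squareness matters: for a non-square matrix a one-sided inverse can exist without being two-sided.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])          # 2x3, full row rank
B = np.linalg.pinv(A)                    # a right inverse: A @ B = I_2

print(np.allclose(A @ B, np.eye(2)))     # True  -- B is a right inverse of A
print(np.allclose(B @ A, np.eye(3)))     # False -- it cannot be a left inverse, since rank(B @ A) <= 2
```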
http://mathoverflow.net/questions/15297/how-do-we-construct-in-a-vector-space-a-chain-of-countable-dimensional-subspa/15300
## How do we construct (in a vector space) a chain of countable dimensional subspaces that can only be bounded by an subspace of uncountable dimension?
In more rigorous language: "Let $V$ be a vector space having an uncountable basis, and let $S$ be the set of subspaces of $V$ that have countable dimension. Can we construct explicitly a chain in the poset $S$ (ordered by inclusion) such that this chain has NO upper bound in $S$?"
Apparently, this chain must have uncountably many terms. Also, because $S$ does not satisfy the conclusion of Zorn's lemma (it has no maximal element), we know such a chain must exist in $S$.
But how do we construct it?
-
1
It's very rare that something proven to exist by Zorn's lemma can be constructed – Harry Gindi Feb 15 2010 at 1:41
## 3 Answers
The other answers asked you first to well-order the whole vector space (or a basis for it), and those answers are perfectly correct, but perhaps you don't like well-ordering the whole space. So let me describe a construction that appeals directly to the Axiom of Choice.
Let V be your favorite vector space having uncountable dimension. For each countable-dimensional subspace W, let aW be an element of V that is not in W. Such a vector exists, since W is countable-dimensional and V is not, and we choose such elements by the Axiom of Choice.
Having made these choices, the rest of the construction is now completely determined. Namely, we construct a linearly ordered chain of countable-dimensional spaces whose union has uncountable dimension. Let V0 be the trivial subspace. If Vα is defined and countable-dimensional, let Vα+1 be the space spanned by Vα and the element aVα. If λ is a limit ordinal, let Vλ be the union of all earlier Vα. It is easy to see that { aVβ | β < α} is a basis for Vα. Thus, the dimension of each Vα is exactly the cardinality of α. In particular, if ω1 is the first uncountable ordinal, then Vω1 will have uncountable dimension, yet be the union of all Vα for α < ω1, which all have countable dimension, as desired.
If you forbid one to use the Axiom of Choice, then it is no longer true that every vector space has a basis (since it is consistent with ZF that some vector spaces have no basis), and the concept of dimension suffers in this case. But some interesting things happen. For example, it is consistent with the failure of AC that the reals are a countable union of countable sets. R = U An, where each An is countable. (The irritating difficulty is that although each An is countable, one cannot choose the functions witnessing this uniformly, since of course R is uncountable.) But in any case, we may regard R as a vector space over Q, and if we let Vn be the space spanned by A1 U ... U An, then we can still in each case make finitely many choices to witness the countability and conclude that each Vn is countable dimensional, but the union of all Vn is all of R, which is not countable dimensional.
-
Some subset of a basis of $V$ can be put into bijection with the first uncountable ordinal $\omega_1$. Consider the subspaces spanned by each initial segment of $\omega_1$, all of which have countable dimension.
-
I'm not sure if this will satisfy your sense of "construct" but order your uncountable base in some well-ordering and let your chain be {first basis element}, {first two basis elements}, etc up through all countable ordinals. Then this will not have an upper bound.
-
Thank you James! – Z_3 Feb 15 2010 at 2:18
http://math.stackexchange.com/questions/65621/irreducible-markov-harmonic-function-based-on-stationary-distribution?answertab=votes
# Irreducible Markov: harmonic function based on stationary distribution
Let $P$ be the transition matrix of an irreducible Markov chain on a finite state space $\Omega$. Let $\pi_1$ and $\pi_2$ be two stationary distributions for $P$. Is the function $$h(x)={\pi_1(x) \over \pi_2(x)}, \quad x \in \Omega,$$ harmonic for $P$?
I tried brute force to show $h(x)=\sum_{y \in \Omega} P(x,y)h(y), \forall x \in \Omega$, but I can't reduce the RHS to the LHS.
I know that for an irreducible Markov chain the stationary distribution is unique. I would like to use the preceding result as an alternative proof of the uniqueness of the stationary distribution.
-
So do you allow here the use of irreducibility? – Ilya Sep 28 '11 at 14:38
@Gortaur: Irreducibility can be used, except for this result from irreducibility: for an irreducible Markov chain, the stationary distribution is unique. – Nicolas Essis-Breton Oct 3 '11 at 12:05
http://mathoverflow.net/questions/tagged/intersection-theory
## Tagged Questions
1answer
177 views
### Axiomatic intersection theory
Is there an axiomatic intersection theory? What I expect is something like: An intersection theory is a functor from the category of schemes(or other spaces) to the category of al …
0answers
129 views
### Bezout’s theorem for non-proper intersections?
Is there a version of Bézout's theorem for non-proper intersections? For my specific problem, the setup is as follows: Let $P_1,P_2,P_3,P_4\in\mathbb{C}[z_1,z_2,z_3,z_4]$, and su …
0answers
74 views
### Reference request: Samuel’s multiplicity and degree
I am looking for references for the following simple facts. Let $Y\subset \mathbb{P}^n$ be a variety (or pure-dimensional algebraic set). For $P\in Y$ denote by $e_p(Y)$ the (Sam …
0answers
43 views
### Divisibility of all entries in an intersection form
What are situations where one can conclude that all entries of an intersection form are divisible by an integer? More precisely: $F \subset S$ is a proper connected (usually red …
0answers
74 views
### Relation between the index of (the sum of) lattices in euclidean space, and their orthogonal complement.
I am trying to figure out something concerning the index of lattices. The question came about after reading the paper of W. Fulton and B. Sturmfels, ("Intersection theory on toric …
0answers
79 views
### Non-proper intersection of projective schemes
Let $X, Y$ be projective varieties in $\mathbb{P}^n$ for $n>10$. Assume that dimensions of $X,Y$ are greater than $n/2$. My first question is as follows: Is there a …
1answer
213 views
### euler class of the normal bundle and self intersection number
Let $S$ be a compact submanifold of $X$ smooth manifold. I know that $T_X|_S=T_S\oplus N_{S/X}$ where $N_{S/X}$ is the normal bundle. I have read that the euler class $e(N_{S/X})$ …
1answer
190 views
### Local model of virtual fundamental cycle
The following baby version of virtual fundamental cycle is well known: Let $M\subset V$ be the zero locus of a section $s$ of a vector bandle $E \to V$, in general $s$ is not tran …
2answers
356 views
### Top chern class under finite, unramified, dominant morphism
Situation: Let $\Bbbk$ be an algebraically closed field. Assume that $\pi:Y\to X$ is an finite, dominant, unramified morphism between nonsingular varieties of dimensions $n$. Let \$ …
1answer
239 views
### Commutativity of the Chow ring in positive characteristic
I was looking in Ravi Vakil's notes on Intersection Theory, Class 20, where he introduces the bivariant intersection theory, in particular the Chow ring $A^\ast (X)$. On p. 2, he …
0answers
148 views
### Flat morphisms whose fibers are affine spaces
Let $f:X \to Y$ be a flat morphism, such that each fiber is isomorphic to the affine space $\mathbb{A}^n$. Then is is true that $f$ is a Zariski affine bundle? If not, is it at lea …
1answer
163 views
### Lefschetz Fixpoint theorem for non-orientable manifolds
The classical lefschetz fixpoint theorem is stated for oriented compact manifolds $M$ and a smooth map $f:M\to M$ as follows: the intersection number $I(\Delta, \mathrm{graph}(f))$ …
1answer
393 views
### Self-intersection and the normal bundle
Let $X/k$ be a surface nonsingular and proper over an algebraically closed field $k$. Let $C \subset X$ be a nonsingular curve. Then it is clear that the self-intersection \$(C \cdo …
3answers
434 views
### Cohomology of vector bundles via Intersection Theory
Let $X$ be a smooth projective variety over a fixed field $k$ (take $k = \mathbb{C}$ if necessary). For a vector bundle $E$ on $X$, $ch(E)$ will be in the Chow ring. \$\textbf{Ques …
1answer
312 views
### Schemes with no nonconstant maps to lower dimensional schemes
Fix an algebraically closed field $k$ (arbitrary characteristic), all schemes will be of finite type over $k$. (Property *): I'm interested in (classes of) examples of schemes $X$ …
http://www.physicsforums.com/showthread.php?p=4271272
Physics Forums
## Why don't virtual particles cause decoherence?
I was recently told virtual particles don't cause decoherence. Why not? Do they just never interact with their environment (apart from transferring energy/force) so they can never collapse a wavefunction?
Interaction with real particles can be mediated via virtual particles, and cause decoherence. I think it is misleading to distinguish between real and virtual particles here.
Decoherence is due to factorizing the full Hilbert space H into Hsystem, Hpointer and Henvironment and then "tracing out" the environment degrees of freedom. The remaining "subsystem" can be described by an "effective density matrix" which is nearly diagonal in the pointer basis, so it seems as if it collapsed to a pointer state with some classical probability. Virtual particles are artifacts of perturbation theory, i.e. they are not present in the full theory w/o using this approximation. Using virtual particles does not introduce the above mentioned factorization of H. And last but not least, virtual particles are not states in any Hilbert space Hsystem, Hpointer or Henvironment, but they are "integrals over propagators". It's like apples and oranges.
Quote by tom.stoer Decoherence is due to factorizing the full Hilbert space H into Hsystem, Hpointer and Henvironment and then "tracing out" the environment degrees of freedom. The remaining "subsystem" can be described by an "effective density matrix" which is nearly diagonal in the pointer basis, so it seems as if it collapsed to a pointer state with some classical probability. Virtual particles are artifacts of perturbation theory, i.e. they are not present in the full theory w/o using this approximation. Using virtual particles does not introduce the above mentioned factorization of H. And last but not least, virtual particles are not states in any Hilbert space Hsystem, Hpointer or Henvironment, but they are "integrals over propagators". It's like apples and oranges.
Or in slightly oversimplified terms, virtual particles don't cause decoherence simply because virtual particles don't exist.
They do not exist in the physical (spacetime) sense!
Quote by tom.stoer It's like apples and oranges.
I have a better analogy. If you have one apple, then in the equation
1 apple = (2 apples) + (-1 apple)
1 apple is a real apple, while 2 apples and -1 apple are virtual apples.
Quote by Demystifier Or in slightly oversimplified terms, virtual particles don't cause decoherence simply because virtual particles don't exist.
If they do not exist, please provide a more appropriate way to describe all particles ever detected. They are all virtual, see Bill_K's post (or this one from me) for an explanation.
Quote by mfb If they do not exist, please provide a more appropriate way to describe all particles ever detected. They are all virtual, see Bill_K's post (or this one from me) for an explanation.
See my post
http://www.physicsforums.com/showpos...8&postcount=12
The confusion stems from the unfortunate fact that physicists use two different DEFINITIONS of the word "virtual particle". According to one, it is any off-shell particle. According to another (more meaningful, in my opinion), it is any internal line in a Feynman diagram. The two definitions are not equivalent.
Quote by Demystifier According to another (more meaningful, in my opinion), it is any internal line in a Feynman diagram. The two definitions are not equivalent.
Where is the difference? An internal line in a Feynman diagram is not exactly on-shell, and particles not exactly on-shell are internal lines in Feynman diagrams.
Some particles are just more off-shell than others.
Please have a look at the formal definition: an internal line is not a state but a propagator; and it's therefore not a particle
In that case, our universe has no particles. There are no particles (!) which will not interact with anything else in the future.
No, the only problem is that you try to interpret a mathematical artifact
A mathematical artifact like our world? In the QFT sense of real particles, do you see* any real particles in the world? *actually, you must not be able to see it, as it must not interact with anything
Quote by mfb A mathematical artifact like our world? In the QFT sense of real particles, do you see* any real particles in the world? *actually, you must not be able to see it, as it must not interact with anything
Are you aware of the fact that you can formulate QFT non-perturbatively w/o Feynman diagrams? Do you see any relevance for propagators in non-rel. QM and density operators?
Mukilab asked why virtual particles do not cause decoherence.
The answer is simple: usually there is no need to introduce perturbation theory and propagators when studying density operators. The formalism is different. So there are no propagators in this context, and therefore they cannot cause anything.
Quote by mfb A mathematical artifact like our world? In the QFT sense of real particles, do you see* any real particles in the world? *actually, you must not be able to see it, as it must not interact with anything
But isn't that the essence of the difficulty here? Namely:
Our world is a physical entity, including the ability to measure things. One of the things that can be measured is the state of incoming/outgoing particles in a scattering experiment. In the model, this corresponds to the in/out state containing noninteracting particles. Yes, in reality, they may be slightly off-shell. They must be since they haven't been around for an infinite time. In this sense the model is an idealization.
What happens whilst the particles are interacting, however, cannot be measured. In a model based on perturbation theory, this interaction includes the exchange of what we're calling virtual particles. If we could solve QED (say) exactly, presumably we wouldn't even need to chop up the interaction into these virtual particle contributions.
As soon as we wish to talk of an in state particle as something whose properties we can prepare or an out-state particle as something whose properties we can measure, it's no longer appropriate to model that particle by a propagator (and hence by our definition no longer appropriate to call it a virtual particle). For example, for a photon, I'd like to be able to prepare/measure its momentum/polarization, but the photon propagator $-i\frac{g^{\mu\nu}}{k^2+i\epsilon}$ doesn't have the right ingredients to allow me to do this.
Quote by sheaf )... For example, for a photon, I'd like to be able to prepare/measure its momentum/polarization, but the photon propagator $-i\frac{g^{\mu\nu}}{k^2+i\epsilon}$ doesn't have the right ingredients to allow me to do this.
Very good point, the photon propagator does not carry momentum in the sense we measure it.
In addition gauge boson propagators are gauge-dependent objects and are therefore unphysical. So virtual particles DO depend on the specific gauge fixing. Temporal gauge, Lorentz gauge, Coulomb gauge, ... result in different propagators and 'potentials', so you can't interpret these entities directly. The difference becomes visible in QCD, where you have ghost propagators only in some gauges!
Quote by tom.stoer Are you aware of the fact that you can formulate QFT non-perturbatively w/o Feynman diagrams? Do you see any relevance for propagators in non-rel. QM and density operators?
I am aware of that. Could you answer my question, please? It would help me to understand where our views differ:
Do you think there are any real particles? If yes, in which way?
Do they have a fundamental difference to, say, a short-living top quark at the LHC? Or an even shorter-living W boson in the weak decay of a neutron?
Quote by sheaf Yes, in reality, they may be slightly off-shell. They must be since they haven't been around for an infinite time. In this sense the model is an idealization.
You got my point. I don't think there is a fundamental difference between an electron measured in a detector and a W boson mediating a weak decay.
http://physics.stackexchange.com/questions/24653/is-this-really-how-a-capacitor-works-why-doesnt-it-behave-like-a-resistor/24655
Is this really how a capacitor works? Why doesn't it behave like a resistor?
My book says a capacitor is two conductors separated by an insulator. Now let's take a parallel plate capacitor to simplify the problem I have.
Suppose I have two parallel plate capacitors in series and I hook the circuit up to a battery.
As soon as I hook it up, electrons flow (forget conventional current for now) onto the right plate and build up on that capacitor (spreading over the surface of the conducting plate) and remain stuck there, because there is an insulator that blocks the electrons from going anywhere.
Now here is my confusion: how does the left plate of $C_1$ even build up positive charge, and how does the current even run through the circuit if there is an insulator blocking the electrons from moving? Is there even a current through the circuit?
-
3 Answers
Yes current does flow until $q/V$ equals $C$.
In the case of two capacitors in series, the effective capacitance is $1/(1/C_1 + 1/C_2)$, because the voltage $V$ is effectively divided between them.
Maybe an easier way to see this is to define inverse capacitance $K = 1/C = V/q$ which is, for a particular capacitor, the amount of voltage $V$ needed to put a given charge $q$ on one plate (and $-q$ on the other).
Then it's easier to see that $K = K_1 + K_2$, because the voltages across them sum, because they are in series. The charge on the right plate of $C_1$ comes from the left plate of $C_2$.
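A tiny numerical illustration of these relations (the component values below are made up):

```python
C1, C2, V = 2e-6, 3e-6, 9.0            # farads, farads, volts (illustrative values)

C_eff = 1 / (1 / C1 + 1 / C2)          # effective series capacitance
q = C_eff * V                          # the same charge q ends up on each capacitor
V1, V2 = q / C1, q / C2                # the battery voltage divides between them
print(C_eff, q, V1, V2, V1 + V2)       # V1 + V2 adds back up to V
```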
-
How does current flow in the first place when the insulator blocks it? – jak Apr 30 '12 at 20:21
1
@jak: That's the same as asking how does current flow into a single capacitor, or even how does current flow in a conductor. It flows in a conductor because there is a potential gradient that pulls the charge carriers. In a capacitor, the potential gradient is transmitted across the gap. An open switch is just a capacitor with very small capacitance. – Mike Dunlavey Apr 30 '12 at 20:26
1
@jak: That's right. The air doesn't stop the potential from being felt across the gap, and if the plates were really really large, current could flow for quite a long while before it slowed down and stopped. – Mike Dunlavey Apr 30 '12 at 20:47
1
@jak: You can always fall back on a hydraulic analogy. Think of a capacitor inserted in a wire as a springy diaphram inserted in a water pipe. In your case, you have two of them, that's all. – Mike Dunlavey Apr 30 '12 at 21:01
1
@jak: Yes. Conductors are always full of charge, just like water in a pipe. We only say the capacitor is "charged" when enough of the carriers have been moved to one plate, and the lack of them has been moved to the other, making them want to get back, and that "wanting" is the voltage. – Mike Dunlavey Apr 30 '12 at 21:27
show 5 more comments
Current does flow through the circuit but only for a short time, as a transient effect, while the charge builds up on the plates, and after that current flow stops. You can think of electrons in the wire to the left as being attracted to the positive terminal on the battery, so they move towards it. However, because the circuit is open, soon enough the electrons near the plate leave a "hole" of positive charge on the plate, which soon enough becomes as attractive to the electrons in the wire as the battery, so the situation stabilizes. Needless to say, the analogous (but charge-opposite) process happens on the wire to the right, so the plate becomes negatively charged.
Simultaneously (i.e. with a delay too short to measure), the electrons on the middle wire then get subjected to the attraction of an increasingly positive charge to their left and the repulsion of an increasingly negative charge to the right, so they tend to move left and accumulate on $C_1$'s right plate (leaving a corresponding excess of positive charge on $C_2$'s left plate). After the transient, these charges exactly cancel out the charges on the other plates (the ones connected to the battery) so electrons in the wire do not feel any force and no current flows.
EDIT: Wikipedia's capacitor article has a nice section on a hydraulic analogy to a rubber membrane completely sealing a pipe, but able to stretch to a finite amount.
-
+1--Transience of the current seems to go to the heart of the question asked. – daniel Sep 18 '12 at 0:20
In a simple DC circuit, current doesn't flow (once the transient has died away), and there is no voltage difference between the left plate of C2 and the right plate of C1.
Picture it as a single capacitor with an unconnected plate in the middle
-
Will the system blow up if you never unhook the battery then? – jak Apr 30 '12 at 20:22
1
The voltage across the capacitor will initially rise until it equals the battery voltage - then it just sits there – Martin Beckett Apr 30 '12 at 20:27
http://math.stackexchange.com/questions/12909/will-moving-differentiation-from-inside-to-outside-an-integral-change-the-resu
# Will moving differentiation from inside, to outside an integral, change the result?
I'm interested in the potential of such a technique. I got the idea from Moron's answer to this question, which uses the technique of differentiation under the integral.
Now, I'd like to consider this integral:
$$\int_{-\pi}^\pi \cos{(y(1-e^{i\cdot n \cdot t}))}\mathrm dt$$
I'd like to differentiate with respect to y. This will give the integral:
$$\int_{-\pi}^\pi -(1-e^{i\cdot n \cdot t})\sin{(y(1-e^{i\cdot n \cdot t}))}\,\mathrm dt$$
...if I'm correct. Anyway, I'm interested in obtaining the result of this second integral using this technique. So I'm wondering if solving the first integral can help give results for the second integral. I'm thinking of setting $y=1$ in the second integral. This should eliminate $y$ from the result, and give me the integral involving $x$.
The trouble is, I'm not sure I can use the technique of differentiation under the integral. I want to know how I can apply this technique to the integrals above. Any pointers are appreciated.
For instance, for what values of $y$ is this valid?
-
2
– Fredrik Meyer Dec 3 '10 at 14:16
@Fredrik: Wikipedia's statement covers the Riemann integral case, but the statement I give below is for an integral with respect to an arbitrary measure (including the Lebesgue integral); in particular it covers the case of differentiating a series term-by-term. – Qiaochu Yuan Dec 3 '10 at 14:35
1
Richard Feynman has remarks on solving problems this way in some of his books. As a physicist, all his functions were well behaved. – Ross Millikan Dec 3 '10 at 16:50
## 1 Answer
Wikipedia doesn't seem to have a precise statement of this theorem. Here's a very general statement.
Theorem (Differentiation under the integral sign): Let $U$ be an open subset of $\mathbb{R}$ and let $E$ be a measure space (which you can freely take to be any open subset of $\mathbb{R}^n$ if you want). Let $f : U \times E \to \mathbb{R}$ have the following properties:
• $x \mapsto f(t, x)$ is integrable for all $t$,
• $t \mapsto f(t, x)$ is differentiable for all $x$,
• for some integrable function $g$, for all $x \in E$, and for all $t \in U$,
$$\left| \frac{\partial f}{\partial t}(t, x) \right| \le g(x).$$
Then the function $x \mapsto \frac{\partial f}{\partial t}(t, x)$ is integrable for all $t$. Moreover, the function $F : U \to \mathbb{R}$ defined by
$$F(t) = \int_E f(t, x) \mu(dx)$$
is differentiable, and
$$F'(t) = \int_E \frac{\partial f}{\partial t}(t, x) \mu(dx).$$
In practice the only condition that isn't easily satisfiable is the third one. It is satisfied in this case, so you're fine (for all $y$).
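A numerical sanity check of the conclusion (assuming NumPy/SciPy; since the integrand is complex-valued for real $t$, this sketch checks its real part, and the imaginary part works the same way): the derivative of the integral agrees with the integral of the $y$-derivative.

```python
import numpy as np
from scipy.integrate import quad

n = 2  # an arbitrary choice for the parameter n

def F(y):
    # real part of  integral_{-pi}^{pi} cos(y*(1 - exp(i*n*t))) dt
    f = lambda t: np.real(np.cos(y * (1.0 - np.exp(1j * n * t))))
    return quad(f, -np.pi, np.pi)[0]

def dF(y):
    # real part of the integral of the y-derivative of the integrand
    g = lambda t: np.real(-(1.0 - np.exp(1j * n * t)) * np.sin(y * (1.0 - np.exp(1j * n * t))))
    return quad(g, -np.pi, np.pi)[0]

h = 1e-5
print((F(1 + h) - F(1 - h)) / (2 * h), dF(1.0))   # the two numbers agree to several decimal places
```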
-
I'm wondering, additionally, if I can easily add a third variable to this. The idea is that I'd like to differentiate twice under the integral; once with respect to one variable, and then once with respect to a different variable. I'm wondering if any additional complications arise. (I'm debating on whether I should ask this in a seperate question) – Matt Groff Dec 3 '10 at 17:29
2
Yes; apply the theorem twice. (in the first application E will be an open subset of R^2 instead of an open subset of R.) – Qiaochu Yuan Dec 3 '10 at 17:36
http://physics.stackexchange.com/questions/49913/simple-elastic-collision/49918
# Simple elastic collision
A particle with mass $m$ collides with a wall at right angles, and the collision is perfectly elastic. The particle hits the wall at $v\ \mathrm{m\,s^{-1}}$. There is no friction or gravity. So the particle will rebound at $-v\ \mathrm{m\,s^{-1}}$?
What will the change in momentum be?
I did:
$$initial\ momentum = final\ momentum$$ $$mv = m(-v)$$ $$mv = -mv$$
But this doesn't seem right because it's like saying $1=-1$?
-
– Qmechanic♦ Jan 11 at 13:24
On top of all these good answers, remember that momentum is a vector, not a scalar quantity. So change in momentum is also a vector. – Mike Dunlavey Jan 11 at 14:00
@qmechanic, but the wall has no momentum, because no matter what mass it is, it's velocity is 0? – Jonathan. Jan 11 at 15:12
@Jonathan.: It's not right to say that the wall has no momentum after the collision (within the two-body idealization). Letting the mass ratio $M/m$ go to infinity, the infinitely heavy wall (after the collision) will indeed have zero velocity (in the limit) but nevertheless carry the missing momentum $2mv$. – Qmechanic♦ Jan 11 at 15:24
@Qmechanic, I don't understand: momentum is $mv$, and $v = 0$, and anything times 0 is 0? – Jonathan. Jan 11 at 23:37
show 1 more comment
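A short numerical sketch of Qmechanic's point above (illustrative values; standard 1-D elastic-collision formulas): as the wall mass $M$ grows, its final velocity tends to zero while its momentum tends to $2mv$.

```python
m, v = 1.0, 3.0                               # particle mass and incoming speed (illustrative)
for M in (10.0, 1e3, 1e6, 1e9):               # increasingly heavy "wall", initially at rest
    u_particle = (m - M) / (m + M) * v        # particle velocity after an elastic collision
    U_wall = 2 * m / (m + M) * v              # wall velocity after the collision
    print(M, u_particle, U_wall, M * U_wall)  # wall momentum approaches 2*m*v = 6.0
```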
## 3 Answers
The initial and final momentum are not the same because the ball is not an isolated system. The wall exerts a force on it. In principle the ball and the wall (and the planet it's connected to!) form an isolated system with a conserved momentum, but you'd have to take into account how much the wall moves after the collision.
The change of momentum is final momentum - initial momentum, and you have the correct values for the initial and final momentum.
-
So the change in momentum is $-2mv$. I have the answer as positive $2mv$, is it possible to just use the magnitude? I would have thought that that implied to went into the wall? – Jonathan. Jan 11 at 10:54
1
I wouldn't attach too much significance to the sign of the change. If your sign convention is that velocity towards the wall is positive, then momentum before is $+mv$ and after it's $-mv$, so the momentum has decreased by $2mv$. I would interpret this as meaning that the change is $-2mv$. – John Rennie Jan 11 at 11:04
In the presence of a force the momentum is not conserved, and the wall acts as a repulsive potential. Instead, the momentum changes from a positive to a negative value, so the difference is positive.
-
Your equation: $\text {initial momentum = final momentum}$, applies only to the total momentum. It does not apply to individual masses separately.
Here the initial momentum of the mass $m = \text {initial total momentum} = mv$ (since the wall is not moving)
The final total momentum is the sum of the momentum of the wall and the momentum of the mass $m$.
The final total momentum thus equals the initial total momentum: $mv = -mv + x$, where $x$ is the final momentum of the wall, so $x = 2mv$.
Change in the momentum of $m = -mv-mv =-2mv$.
Change in the momentum of the wall $= + 2mv$.
Total change in the momentum of the system $= (-2mv + 2mv) = 0$ (by law of conservation of momentum). You may add the units to the quantities.
Comment: The diagram shows the velocity after collision as $-v \:\mathrm{ms^{-1}}$ with an arrow pointing to the left. That would be incorrect. $-mv$ with arrow pointing to the right or $mv$ with arrow pointing to the left would be correct.
-
Hi user. Welcome to Physics.SE. Here, we use a TeX markup called MathJax, the same as Math.SE. The markup is very helpful for understanding equations, etc. Please have a look here for an introduction, or at least have a look at our FAQ for an overview. For now, Manish has revised your post :-) – Ϛѓăʑɏ βµԂԃϔ Jan 16 at 12:08
http://math.stackexchange.com/questions/tagged/invariant-theory+matrices
# Tagged Questions
0answers
54 views
### Computing $\mathbb{C}[x,y]^G$ or $\mathbb{C}[x,y,z]^G$ where $G$ is a finite subgroup of $GL_n(\mathbb{C})$
My question is related to this link: Ring of Invariant $\mathbf{Question \;1}$. Let $$A = \left( \begin{array}{cc} 0 & -1 \\ 1& 0 \\ \end{array} \right).$$ Then $C= \langle A\rangle$ ...
http://physics.stackexchange.com/questions/10778/communicating-vessels-formula
# communicating vessels formula
I'm having trouble with this formula (asked first on math.SE; I didn't know physics.SE existed): $$Z1(t) = Ze+\left(\sqrt{Z1-Ze}-\frac{2S0}{S1}\sqrt{2g\left(1+\frac{S1}{S2}\right)}\, t\right)^2$$
Z1 and Z2 are the heights of the vessels. S1 and S2 are the sections of the vessels. S0 is the section of the tube between them (for the exchange). Ze is the final height of the two vessels.
Z1(t) is the height at t
but I only get incorrect values and don't know if the formula is wrong or if it's me.
For testing I use Z1 = 45
Z2 = 5
S0 = $2\pi\cdot 0.3$ = 1.884
S1 = $2\pi\cdot 10$ = 62.8
S2 = $2\pi\cdot 10$ = 62.8
$Ze = \frac {S1\cdot Z1 + S2\cdot Z2} {S1+S2}=\frac {62.8\cdot 45 + 62.8\cdot 5} {62.8+62.8}=25$
thanks
-
Checking the source of the formula, it seems that it's only valid until Z1(t) reaches the value Ze. Until that point the shape of the curve looks reasonable, though I haven't checked the derivation. – mmc Jun 4 '11 at 21:26
2
The sections $S$ are areas. One possible mistake could be that you have used the formula for the perimeter of a circle, where you should be using the formula for the area of a circle, cf. math.about.com/od/formulas/ss/areaperimeter_5.htm – Qmechanic♦ Jun 4 '11 at 21:36
@Qmechanic ,thanks a lot , this is exactly why the result was wrong – eephyne Jun 5 '11 at 10:49
Please add an answer identifying the mistake, and accept it, so that this question gets marked as answered in the system. – Christoph Jun 14 '11 at 20:29
## 1 Answer
As the comment above said, I was doing it wrong: I was using the perimeter instead of the area ($2\pi R$ instead of $\pi R^2$).
If you want to see the result, look here
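For anyone checking the numbers, here is a minimal Python sketch of the corrected computation: it uses areas $\pi R^2$ instead of perimeters $2\pi R$, takes the formula exactly as quoted in the question (including the factor $2S_0/S_1$), and assumes $g = 9.81\ \mathrm{m/s^2}$; per mmc's comment, the value is clamped at $Z_e$ once equilibrium is reached.

```python
import math

g = 9.81  # m/s^2

# Corrected cross sections: areas pi*R^2, not perimeters 2*pi*R
R0, R1, R2 = 0.3, 10.0, 10.0
S0 = math.pi * R0**2
S1 = math.pi * R1**2
S2 = math.pi * R2**2

Z1_0, Z2_0 = 45.0, 5.0                    # initial heights
Ze = (S1 * Z1_0 + S2 * Z2_0) / (S1 + S2)  # final common height (= 25 here)

def Z1(t):
    """Height in vessel 1 at time t, using the formula as quoted in the question."""
    root = math.sqrt(Z1_0 - Ze) - (2 * S0 / S1) * math.sqrt(2 * g * (1 + S1 / S2)) * t
    # The expression only holds while the vessels are still equilibrating
    return Ze + root**2 if root > 0 else Ze

print(Ze)                                       # 25.0
print([round(Z1(t), 2) for t in (0, 100, 200, 400)])  # decays from 45 toward 25
```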
-
http://physics.stackexchange.com/questions/31581/compton-scattering-vs-photoelectric-effect/31682
# Compton scattering vs. photoelectric effect
Say a photon hits some atom.
What determines whether there will be a photoelectric effect (photon is absorbed, electron is released) or whether there will be a Compton scattering (the photon is scattered at some angle, and the electron is released with another direction)?
-
## 3 Answers
For a given system that the electron is in, the primary determinant is the energy of the photon. As @DJBunk points out, this is a quantum mechanical process, so the "choice" is fundamentally random. A given interaction will occur with a probability proportional to its cross section. Figure 1 of this lecture shows how the cross section for each possible process varies with photon energy. This plot is for the interaction of photons with electrons in copper. At low energies, the photoelectric effect is the dominant effect. From about 200 keV to about 10 MeV, Compton scattering is the dominant effect. Above 10 MeV, the dominant effect is pair production. At a given photon energy, the relative probability of two processes would be the ratio of their cross sections.
The dependence of each cross section on photon energy should be similar in form for any system; the exact numbers will vary from system to system. Table 2 of that lecture gives the dependence on the atomic number, for example.
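For illustration, here is a minimal Python sketch of that last point, using hypothetical cross-section values (not the copper numbers from the lecture): at a fixed photon energy, the probability of each process is its cross section divided by the total.

```python
# Relative probability of each interaction channel at a fixed photon energy,
# given (hypothetical, made-up) cross sections in barns.
cross_sections = {
    "photoelectric": 0.05,
    "compton": 1.20,
    "pair_production": 0.30,
}

total = sum(cross_sections.values())
probabilities = {name: sigma / total for name, sigma in cross_sections.items()}

for name, p in probabilities.items():
    print(f"{name}: {p:.1%}")

# The probabilities sum to 1 by construction
assert abs(sum(probabilities.values()) - 1.0) < 1e-12
```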
-
@Colin McFaul - Honestly, I don't think you answered this question either - Bruno is asking why one process happens over another, and you are simply replacing that with a function of energy; but then the question becomes what determines the process as a function of energy? – DJBunk Jul 8 '12 at 15:57
@DJBunk, I believe that I have answered it. I just made an edit, adding your comments from your answer about randomness. I hope that clarifies my answer. – Colin McFaul Jul 9 '12 at 20:23
When a photon interacts with an atom, a variety of processes can occur. You mention the photoelectric effect and Compton scattering (non-resonant inelastic scattering), but you can also have elastic scattering or resonant inelastic scattering (if the incident photon energy is tuned to an atomic transition energy). This list is still by no means exhaustive.
For each of these processes, it is possible to calculate (or measure) a cross-section that determines the relative frequency at which an event occurs given a large number of photons incident on the atom.
Now, to get to your actual question. You ask what determines which event occurs. This is a fundamental question in quantum mechanics, and often is called the "measurement problem". Consider a universe consisting of only a photon flying towards an atom. If we were to run time forward until long after the photon would have reached the atom, the system will be in a superposition of states including all possible processes (with the correct weighting to give the relative probabilities). It isn't until the system interacts with a larger ("classical") measurement device that one of the many processes is selected ("the wavefunction collapses"). According to the usual interpretation of quantum mechanics, which branch occurs is simply determined by the probability, with nothing in particular causing the selection.
Of course, there are conceptual difficulties around where the boundary between "classical" and "quantum" systems should be. You might find it interesting to read about "decoherence" as one possible mechanism for apparent wavefunction collapse.
-
All we can ascribe to a process like the one you are describing ($\gamma + p \rightarrow \text{products}$) is a probability. The sum over all processes gives 100%. I couldn't find a plot of anything like the process you described, but I did find this plot of Higgs decay products as an example of the probabilistic nature of quantum mechanics. The bottom axis shows the possible Higgs mass (now determined to be pretty close to 125 GeV), and the vertical axis shows, for a given Higgs mass, how likely the Higgs is to decay to each set of products.
Keep in mind that calculating probabilities for hands dealt in a poker game is pretty much the same thing:
all we can do is calculate the probability for each single outcome, and all the possible outcomes add up to 100%.
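As a minimal sketch of that poker analogy (standard 5-card counting formulas, nothing specific to the physics above): the probabilities of the mutually exclusive hand categories add up to exactly 100%, just as the probabilities of all possible interaction outcomes must.

```python
from math import comb

total = comb(52, 5)  # 2,598,960 possible 5-card hands

counts = {
    "straight flush (incl. royal)": 10 * 4,
    "four of a kind":  13 * 48,
    "full house":      13 * comb(4, 3) * 12 * comb(4, 2),
    "flush":           4 * comb(13, 5) - 40,
    "straight":        10 * 4**5 - 40,
    "three of a kind": 13 * comb(4, 3) * comb(12, 2) * 4**2,
    "two pair":        comb(13, 2) * comb(4, 2)**2 * 11 * 4,
    "one pair":        13 * comb(4, 2) * comb(12, 3) * 4**3,
    "high card":       (comb(13, 5) - 10) * (4**5 - 4),
}

for name, n in counts.items():
    print(f"{name}: {n / total:.4%}")

# Every deal falls into exactly one category, so the probabilities sum to 1
assert sum(counts.values()) == total
```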
-
1
I suppose I should explain my downvote. This is completely irrelevant to the question. There does exist a plot that describes the cross-section for different interactions as a function of photon energy. It has nothing to do with the Higgs. – Colin McFaul Jul 8 '12 at 2:55
@ColinMcFaul - Thanks for the explanation of your downvote. I probably didn't do a great job explaining it, but the purpose of the Higgs plot was to give an example of probabilistic outcomes in quantum mechanics, and particle physics in particular. I couldn't find a plot of the form that you posted, so I posted the closest type of plot I could find. – DJBunk Jul 8 '12 at 14:36